Description
A Kubernetes community meeting about the Azure provider for Cluster API. Cluster API brings familiar, declarative APIs to Kubernetes cluster creation, configuration, and management.
A
All right, welcome everybody. It is October 6th, and this is the Cluster API Provider Azure weekly office hours meeting. We are a project under SIG Cluster Lifecycle, and as such we abide by their rules, which you can read about by following the link at the top of this document. But basically it boils down to: everybody, try to be nice to each other. Let's try to use the raise-hands feature so that we don't talk over each other, and let's make sure everybody has enough time to ask their questions and has fun.
A
So this is the point where, if anybody is brand new to this meeting, I'll be quiet for a second; you can unmute and introduce yourself if you want to. It looks like we have the usual suspects, but let me check.
A
Okay, yeah. If you want to add your name to the attendee list up here, for networking purposes or whatever, that's very helpful. Let's move on to open discussion. Jack, do you want to talk about why the AKS tests are failing and what we want to do?
B
AKS support for 1.24 has dropped due to a CVE. I'm not sure of the details; I think they're rolling out something to all the existing customers to address it, but for new clusters you can't build a 1.24 cluster with a Windows pool. That means we are either at the mercy of 1.24 support being re-enabled in AKS once that CVE is addressed, or we update our test cluster configuration somehow: either remove the Windows node pool from the cluster scenario, which doesn't seem like a super desirable thing to do, or potentially use Kubernetes 1.23 instead of 1.24 to keep that test coverage.
B
Okay, cool. So I'm just going to go ahead and state the assertion that downgrading to 1.23 is the preferable choice between those two, and then we can move on, unless someone disagrees. If no one disagrees, I'll go ahead and drive that change, and then we can go to the next topic.
B
Going once, going twice... thumbs-up to myself. Let's quickly make consensus before Cecile can say anything.
A
Oh, it sounds like that's the way to go; I agree, cool. Is there anything else about reverting the tests, or should we move on?
A
Let's go on to... we're good? Yeah, let's go on to the server-side apply changes, which I'm guessing is why some folks are here.
C
Next time, all right. Yeah, I would just, you know, offer to help there: I have some context on that one, and I did some testing last time and figured out the problem, so maybe, you know, I'll pull up the alternative and we can go over that; that is fine with me.
C
I was also discussing this with Stefan, so I have, you know, a thought that we would probably add the name field into the subnet template spec, so that we don't need to drop the other name field. I think Stefan has summarized it very well. So, if anybody has any objection on that part, speak up; otherwise I'll just try to build that change and verify it, and obviously we'll get it reviewed, and we can discuss it there, maybe.
A
The only thing I see in chat is Cecile saying "sorry, I'm late."
B
I was wrong; it's in the agenda, right there.
B
Let's review every thread conversation. Oh.
D
Yeah, yeah, that's the one, cool. Okay, so one thing I think we identified for discussion over the last few weeks is that we have to introduce a name field in the AzureClusterTemplate, because you have to be able to set a name in the AzureClusterTemplate so that we can also specify it in a ClusterClass. So we need a name field in the AzureClusterTemplate eventually, and there are essentially two options.
D
The idea with AzureCluster and AzureClusterTemplate was essentially that they're not sharing all fields, just some fields, so that there are no fields in AzureClusterTemplate that don't make sense there. That was some kind of optimization, and that's why we currently don't have a name. So, essentially, look at the struct structure, which is more or less the second and the third section I have there.
D
So in AzureCluster we have AzureClusterSpec, NetworkSpec, etc., etc., and then eventually SubnetSpec; and in AzureClusterTemplate we have AzureClusterTemplateSpec and SubnetTemplateSpec (oh, I said "spec" twice there), and what both types are sharing is SubnetClassSpec. So, as of today, in AzureCluster the name field is in SubnetSpec, and the question is essentially: should we introduce the name field as part of SubnetTemplateSpec, so that it would essentially be duplicated between both types, or should we introduce it as part of SubnetClassSpec? The latter means we would have to drop it from SubnetSpec, which is a struct in AzureCluster. So, essentially, option one and option two.
D
What we just don't know is: do you consider dropping the name field from SubnetSpec, to move it down one layer into an embedded struct, a breaking change in AzureCluster, in which case you would rather do it in SubnetTemplateSpec instead, or is it fine? I mean, both options work for achieving what we want, which is getting server-side apply to work. It's just a matter of: do you want to keep strict Go struct compatibility, or is YAML compatibility good enough for you?
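
For readers following along, the two options look roughly like this in Go. This is a simplified sketch, not the full CAPZ API surface; the non-name fields are illustrative:

```go
package v1beta1

// SubnetClassSpec holds whatever AzureCluster and AzureClusterTemplate share.
type SubnetClassSpec struct {
	Role       string   `json:"role,omitempty"`
	CIDRBlocks []string `json:"cidrBlocks,omitempty"`
	// Option two: declare Name here and drop it from SubnetSpec below,
	// so both types pick it up through the embedding.
	// Name string `json:"name"`
}

// SubnetSpec is the subnet type used by AzureCluster.
type SubnetSpec struct {
	SubnetClassSpec `json:",inline"`

	// Option one: keep Name here and add a duplicate Name field to
	// SubnetTemplateSpec below.
	Name string `json:"name"`
	ID   string `json:"id,omitempty"`
}

// SubnetTemplateSpec is the subnet type used by AzureClusterTemplate.
type SubnetTemplateSpec struct {
	SubnetClassSpec `json:",inline"`
}
```

Either way the template's subnets gain a `name` key in YAML; the options differ only in which Go struct declares the field.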
B
I just wanted to make sure, for folks who aren't following exactly, that the surface area we need to backfill with this name field is in the ClusterClass flow. It's entirely constrained to ClusterClass, which is experimental, which offers us the advantage of being able to move forward on this more rapidly, and that is only reflected in option two.
B
So to me, that's why option two might be preferable, in addition to the fact that it doesn't require us to change two surface areas. But I just wanted to give folks some more background, so they could form their own opinions. Go ahead, Cecile.
E
One question I had is: how do defaults work? By that I mean: in this situation we have name, which is the unique identifier of a subnet, and which is a required field in the SubnetSpec.
E
If we were to add it to SubnetTemplateSpec, would we still want it to be required, or would we want to leave it optional? In case, you know, maybe you do want to set a name for your subnet, but you don't want to have a templated name; like, each cluster can have different subnet names. In which case, if we want different defaulting, then it would make sense to do number two. But yeah.
F
So it's not strictly related to that, but another question that I had with the first option is regarding how breaking the change is, basically for non-ClusterClass users. Meaning that, if we drop the name field from the SubnetSpec, where would we push that? Is it on the same object or not, and would we be able to map it to the new field?
E
This is probably not clear just by looking at this comment, but SubnetClassSpec is actually embedded into SubnetSpec, and SubnetClassSpec is meant to be everything that is shared between SubnetSpec and SubnetTemplateSpec; that's how it was designed originally. So, if we remove it from SubnetSpec but add it to SubnetClassSpec, it will essentially still be part of SubnetSpec, just in a nested struct instead of at the top level. And so it's not an API-breaking change: for users it won't change anything in the YAML, like Stefan was saying. But it is a code-level, Go-breaking change, in the sense that if you were relying on subnetSpec.Name in the code, then you will have to change that to subnetSpec.SubnetClassSpec.Name.
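
A minimal sketch of that distinction, reusing the simplified types above (illustrative, not the exact CAPZ definitions). The inline embedding keeps the serialized YAML identical, and Go's field promotion even keeps plain selector access compiling; it's composite literals and similar code that break:

```go
package main

import "fmt"

// Option two: Name lives on the shared, embedded struct.
type SubnetClassSpec struct {
	Name string `json:"name"`
}

type SubnetSpec struct {
	SubnetClassSpec `json:",inline"` // inlined: the YAML still has a top-level "name"
	ID              string           `json:"id,omitempty"`
}

func main() {
	// Composite literals must now spell out the embedded struct; this
	// is the kind of call site that stops compiling after the move.
	s := SubnetSpec{SubnetClassSpec: SubnetClassSpec{Name: "node-subnet"}}

	// Promoted selector access still works, so plain reads and writes
	// of s.Name keep compiling either way.
	fmt.Println(s.Name, s.SubnetClassSpec.Name) // both print "node-subnet"
}
```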
F
Okay, so I guess, yeah, it's not breaking use cases such as users bringing their own network and setting a bunch of things beforehand, not relying on CAPZ to provision things. And the other question, the other thing, is: it depends on what the policy is for providers.
F
I know that for Cappy, for example, we do have guarantees around the types and the functions and the consumption of the code in general, but I don't know if we have something like that for the providers.
B
Well, it is actually: users are bringing this configuration. So what we're asserting is that, because the change we're making has to do with changing the way the user-facing struct is composed, it should not be user-affecting; but, in fact, users do use this property. Is that right, Cecile?
B
I'm basically saying that the name field that is being moved around, from one sibling of the embedded struct composition to another (or child, or whatever you want to say), is a user-facing property. So users have this property referenced in their templates, correct?
E
Exactly. And then, also, to add on to what Yasin was saying about guarantees: we don't have such guarantees in CAPZ currently, and in fact, when this change was first made, when we made the SubnetClassSpec type for the first time, it was within the same API version, v1beta1, so we did make that breaking change once already.
D
I would answer that one point follows from the other. Just because Jack mentioned there might be edge cases: I think we should be fine, because in both cases it's totally independent of where you place the name field. You will see that you get the same OpenAPI schema and the same CRD, and kubectl and the API server only have that schema available; they don't know anything about our structs. So, as long as the serialization is the same, the behavior will be the same.
D
Yep. Regarding defaulting, I would lean towards making it mandatory. Sorry, I think that was part of your statement before. So I think I would make name, in both cases, mandatory and, in both cases, a map key. And regarding defaulting, I'm not sure; I mean, are there good defaults that we could set in the AzureClusterTemplate webhook?
D
I think the defaulting in AzureCluster uses a bunch of cluster-specific, or at least one cluster-specific, field. So the defaulting that we could do in the AzureClusterTemplate would at least lose that.
E
I think the defaulting we have for AzureCluster right now, and I might be wrong, uses the cluster name. So it's like clustername-subnet-node, or something like that. Is that something we can do with ClusterClass?
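
For context, the existing default Cecile is describing is approximately the following. This is a sketch; the exact format string in CAPZ may differ:

```go
package main

import "fmt"

// defaultSubnetName sketches the kind of default the AzureCluster
// webhook can compute today, because it knows which cluster owns the
// object. An AzureClusterTemplate webhook runs once, against the
// template alone, before any workload cluster exists, so it has no
// cluster name to plug in here.
func defaultSubnetName(clusterName, role string) string {
	return fmt.Sprintf("%s-subnet-%s", clusterName, role)
}

func main() {
	fmt.Println(defaultSubnetName("my-cluster", "node")) // my-cluster-subnet-node
}
```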
D
With a webhook? No, because when you deploy your ClusterClass with all the referenced templates, you just have the AzureClusterTemplate, and the webhook can only act on that AzureClusterTemplate once.
D
Yes, but I think that would lead into the problem that we currently have: essentially, your name is not specified in the template, and then they're fighting.
D
I think we can go one step further. Let's say we introduce the name field in the AzureClusterTemplate. There are still two possible ways to get to your subnets: one way is to just have them in your AzureClusterTemplate directly; the other way is to do it via variables.
D
So what you could do, or let's say with variables and patches, is write a patch which has access to some properties of your cluster, like the cluster name, and then just generate subnets based on YAML templating, and there you have access to the cluster name. The patch is essentially the part which makes all your templates, the entire shape of the ClusterClass, specific to a cluster.
D
You can't do it with inline patches, because we only allow appending and prepending to arrays, not modifying, like, the first array element or something. You could replace the entire array, but you can't take...
D
...the subnets field value of the AzureClusterTemplate as input and then modify it; that doesn't work. But what you can do is write an external patch, and then you can do everything: you get all the templates of your entire ClusterClass, essentially, and then you can generate patches to modify whatever you want.
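
As a rough illustration of what such an external patch could compute: the logic below would run inside a Runtime Extension's generate-patches hook, which, unlike an AzureClusterTemplate webhook, sees the Cluster and therefore its name. The JSON-patch path and the subnet fields are illustrative, not a definitive CAPZ recipe:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// subnetsPatch builds an RFC 6902 patch that stamps cluster-specific
// subnet names onto the generated AzureCluster.
func subnetsPatch(clusterName string) ([]byte, error) {
	subnets := []map[string]any{
		{"name": clusterName + "-subnet-control-plane", "role": "control-plane"},
		{"name": clusterName + "-subnet-node", "role": "node"},
	}
	value, err := json.Marshal(subnets)
	if err != nil {
		return nil, err
	}
	patch := fmt.Sprintf(
		`[{"op": "add", "path": "/spec/networkSpec/subnets", "value": %s}]`,
		value)
	return []byte(patch), nil
}

func main() {
	p, err := subnetsPatch("my-cluster")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(p))
}
```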
D
It would be good to talk about that one too, to maybe have a similar discussion on why it makes sense or doesn't make sense there. Yeah, just scroll down slowly, maybe... yep, further, I hope I see it... yep, that's the one, here. Okay, so what we have here is, essentially, in the AzureClusterTemplate, a slice of structs, and that struct has just a single value, and I think...
D
I think the result of that conversation was essentially that the single value in that slice's struct is not unique, so we can't use it as a map key. What I think is that we don't really need a list-type map and list map key here, because the only reason we want those map keys is so that we can have multiple controllers modifying the same field, and my impression is that that is not important for the vnet peering template spec. I'm not sure if that's true.
F
I guess, yeah, another question on top of that: ultimately, the question is whether we are allowing those to be defined as part of a ClusterClass. Because, if that's the case, then you could end up with two controllers, the two controllers fighting, especially if it's a slice that we are talking about here.
F
Because, ultimately, everything can be defined as part of a ClusterClass; whether it makes sense or not, that's up for debate, yeah.
D
The other fun thing is that in the cluster template the slice's struct has just one field, and in the AzureCluster it has two fields.
D
The problems we were getting into with subnets were that the topology controller was creating subnets with some missing fields, and then the AzureCluster controller was modifying the subnets. So the important thing that we have to know here is essentially: is that (I don't remember the struct name) vnet peering thing also modified by the AzureCluster controller, or is it essentially just taking what's written by the topology controller, and that's it? We don't necessarily have to resolve that now.
D
The only reason we want to introduce those map keys is so that we can have multiple controllers writing to the same field while having good merge behavior; not, I don't know, the topology controller writing the field, and then the AzureCluster controller overwriting the field, and then the topology controller again, and again.
D
So my main point is: I think there is no good way to set a good map key on this slice, but I wonder if that slice is even written by the AzureCluster controller.
E
So, for subnets, I guess that was only happening because we had something in the controller writing back to that slice, to write the ID, okay. But we're saying that as long as the user writes it, and it gets defaulted in the webhook, and then we don't write back to it in the controller...
F
And I think, like Stefan... I guess, as part of the changes for ClusterClass, we're documenting the specific behavior, in the sense that if you're adding something that is co-authored, then you have to have, basically, the list map key added to that field.
F
So probably another thing is, in the developer's guide, for example for CAPZ, and for CAPA, and for CAPV, we could add more language or details on specifically when you would need to add those while developing a feature for a provider.
D
Maybe a general statement: we're only having this problem now because the topology controller is consistently applying YAML files. I would say that, in general, it totally depends on your API type, so it's even totally independent of the topology controller and ClusterClass. In my opinion, if you want to have fine-granular merge behavior on fields, sorry, on slices, then you just have to set those markers, because otherwise you will always get atomic merges.
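
Concretely, "setting those markers" means the server-side-apply list annotations on the slice. A sketch using kubebuilder marker syntax, with the type shapes simplified:

```go
package v1beta1

// Minimal stand-in for the real subnet type; the marker mechanics are
// what matters here.
type SubnetSpec struct {
	Name string `json:"name"`
	Role string `json:"role,omitempty"`
}

type NetworkSpec struct {
	// Without the markers below, the generated OpenAPI schema treats
	// the slice as atomic: whichever field manager applies last
	// replaces the whole list. With listType=map keyed on `name`, the
	// API server merges per entry, so two field managers (say, the
	// topology controller and the AzureCluster controller) can each
	// own different subnets without stomping on each other. For the
	// key to be valid, `name` must be required and unique in the list.
	// +listType=map
	// +listMapKey=name
	Subnets []SubnetSpec `json:"subnets,omitempty"`
}
```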
D
So it's probably more like: whenever possible, you should set those map keys. I mean, even kubectl apply, when using server-side apply, uses that merge behavior. It probably applies to everyone going forward; it's just that with the topology controller it's more obvious, because it permanently reconciles your resource. If a user runs kubectl apply, they usually probably just do it once, so it doesn't occur that often, but it's still not great.
D
I think today, looking at the AzureCluster with that subnet-name thing, you have the same problem: the user applies it without names, the AzureCluster controller is overwriting the subnets, and when the user reapplies the same YAML it's all reset again, and then the AzureCluster controller writes it over and over again. So I could definitely double-check what we have in our documentation.
D
I'm not sure we have it documented; we definitely should. But it's probably good advice in general to look at every slice that you have and figure out whether atomic merge behavior is good enough. I hope that makes sense.
E
Why are we not adding resource group to the slice? I'm just rereading: resource group, and this is the vnet resource group, right, not the cluster resource group. So even if you had one cluster per resource group in the ClusterClass, like each ClusterClass cluster being in a different resource group, they could all still want to be peered with the same vnet that lives in the same resource group.
E
Can you say that again? Okay, yeah, sure. The struct we're looking at right now is called vnet peering. Vnet peering in Azure is if you say: I want the vnet of my cluster to be peered with this other vnet that I have pre-existing. That vnet is identified by its name and its resource group, because in Azure vnet names are scoped by resource group, so you could have two vnets with the same name in two different resource groups.
E
So you say: I want this specific vnet, in resource group "my-resource-group", called "my-vnet". Now, I think right now we're saying we don't want to add the resource group (the resource group is in the vnet peering spec, but it's not in the vnet peering class spec), I think because we were working under the assumption that every cluster in the ClusterClass is in a different resource group. But since we're not talking about the cluster resource group here, we're talking about the peer vnet's resource group, that shouldn't change between ClusterClass clusters.
E
Yeah; for example, this is a totally valid use case: I can make a ClusterClass which says I want all my clusters in this ClusterClass to be peered with two vnets. The first one is called "my-vnet" and it's in resource group "prod", and the second one is called "my-vnet" and it's in resource group "staging". And that could be completely valid.
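
In struct terms, the proposal amounts to something like the following, an illustrative sketch rather than the exact CAPZ field names:

```go
package v1beta1

// VnetPeeringClassSpec would carry the peering fields shared between
// AzureCluster and AzureClusterTemplate. The remote vnet's resource
// group identifies the peer just as much as its name does (vnet names
// are only unique within a resource group), and it does not vary per
// workload cluster, so it can safely live in the shared class spec.
type VnetPeeringClassSpec struct {
	// RemoteVnetName is the name of the pre-existing vnet to peer
	// with, e.g. "my-vnet".
	RemoteVnetName string `json:"remoteVnetName"`
	// ResourceGroup scopes RemoteVnetName: "my-vnet" in "prod" and
	// "my-vnet" in "staging" are two different peers.
	ResourceGroup string `json:"resourceGroup,omitempty"`
}
```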
D
I think that sounds good in general, but I don't think it changes anything regarding the map-key thing, because... yeah, okay, good.
C
Stefan, one question on this: let's say we go with option one. What happens to the existing templates? Let's say I had an existing cluster template and, you know, created clusters from it.
D
I think it doesn't really matter whether it's option one or two, because both options would introduce a new mandatory field called name on subnets; that's the same in both options. It's just a matter of which embedded structure we're putting it in.
D
So it's definitely the case that, if you have a ClusterClass today with an AzureClusterTemplate without a name, that won't work anymore. But, I think, that's why Cecile wrote that it's acceptable: because it's an experimental API, we can make this change to get to a good state, if I understood it correctly. And consider that, if you have a ClusterClass today with a subnet without names, you run into the infinite reconcile problem.
F
Yeah, very much what Stefan said: even if you were using this before, it wasn't working anyway. So I'm not sure how breaking we can consider that, because it was broken from the beginning.
A
Cool. Do we have any more to say about the SSA changes? Have you got all that you need?
B
Very much so. I just wanted to speak to Yasin's point really quickly, especially since we're being recorded. It is totally correct that if something is already broken, we should consider...
B
...we should consider that fixing it isn't a breaking change. However, I think we have to remind ourselves that, as software engineers, the actual stuff our software does is what's important, not the intention. Oftentimes we unintentionally make our software do stuff, and build customer bases that actually rely on stuff that's broken. I've had conversations like this in upstream Kubernetes where, if you wait long enough, your unintended behavior, working in a way you didn't mean it to, becomes in effect a user contract, so changing it becomes breaking.
B
So it's just a reminder to all of us: if we see stuff that's not working as intended, we have to fix it and address it as soon as possible. Otherwise it can actually sort of concretize into user expectations, and then it becomes really difficult to fix, even though it's broken, if that makes sense.
A
Hopefully we won't have to say that very often. Should we move on to the next topic? Anything else about SSA to say?
A
All right, let's do... Jack, this is you, about Azure...
B
...something that's just been sort of bothering me at night. I say "Azure managed cluster" there, so I'm thinking about it from the context of AKS, of the managed cluster story, but I think it really applies to anything. I think it's especially interesting with AKS, though, because we're dealing with a sort of split authority between Cluster API and the managed backend service. So what I'm thinking about is: how do we...
B
How do we make sure that, when we introduce a feature for AzureManagedCluster, we can gracefully introduce it to existing customer clusters when they upgrade to the new version of CAPZ? The way we do this in CAPZ, by convention, is that features are going to be triaged and shipped only with minor releases, but those are still supposed to be non-breaking. I mean, let's set aside the fact that AzureManagedCluster is experimental; we don't intentionally want to break anything.
B
We want to do it in such a way where you just click your button, or do a kubectl edit on the controller and update the image reference from 1.5 to 1.6; now you're running 1.6 of CAPZ, and ostensibly your new clusters can get this new functionality, and your old clusters are still going to work great with AKS. The thing is, sometimes, with features that CAPZ doesn't know about, AKS will apply its own opinionated defaults. So the thing in my mind is the thing I've been working on, which is spot priority for AKS. I think folks with experience in all the clouds are familiar with the concept of a spot VM: something that you can configure, with bidding configuration, to conditionally provision VMs if they're cheap enough for your criteria. Azure also has this functionality, and AKS supports it.
B
So, by default right now, with AKS, if you build a cluster and you don't declare any sort of spot configuration, AKS is going to give you the non-spot configuration: it's going to create a node pool for you that doesn't include any spot configuration at all, and AKS has its own API data structure to express that. So, up until this time, all the AKS clusters that CAPZ has been creating have not declared any opinion, and have sort of fallen into the AKS default.
B
So, as we introduce that feature with the next minor version of CAPZ, for all the existing clusters that used to run on 1.5 we are now going to be processing future reconciliations against an updated data model in CAPZ that includes knowledge of spot priority. And if we don't do that right, then we are going to run into a situation where we update our data model, and on the next reconciliation...
B
...after the upgrade, we send that PUT to the AKS APIs, and the AKS APIs reject the request, because they say something like: well, you can't change the spot priority configuration. I'm eliding a little bit of the details of how this might work, but you can sort of imagine a scenario where the backend managed cluster provider detects that the user doesn't configure any spot priority configuration, so it assigns its own; and then, in a follow-up operation...
B
...if the user's API says something like "spot priority equals false", or some opinionated default that doesn't agree with what AKS put there, then you're going to get an error that sort of falls under "this is a mutable property"; sorry, "this is an immutable property, you can't update this."
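
One common way to express "CAPZ has no opinion" at the API level is a pointer field whose nil value means "leave whatever the backend defaulted alone". An illustrative sketch of the shape; the real CAPZ field and markers may differ:

```go
package v1beta1

// If the new field had a non-pointer zero value, every pre-existing
// node pool would suddenly assert an opinion on its next
// reconciliation, and AKS would reject the PUT as a change to an
// immutable property. A pointer keeps "unset" distinct from any
// concrete choice.
type AzureManagedMachinePoolSpec struct {
	// ScaleSetPriority selects "Spot" or "Regular". When nil, CAPZ
	// omits the setting entirely, so node pools created before this
	// field existed keep whatever AKS defaulted for them.
	// +kubebuilder:validation:Enum=Spot;Regular
	// +optional
	ScaleSetPriority *string `json:"scaleSetPriority,omitempty"`
}
```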
B
So, really, the question I'm asking is: do we trust that our current end-to-end tests continually validate this story? Are the current upgrade tests sufficient for that? I know we don't actually have an upgrade test for AKS, but the Cappy upgrade test that CAPZ runs, does it do this for the CAPZ surface area, for the non-managed cluster? And so, do we want to simply fork that upgrade-test pattern into the AzureManagedCluster scenario, so that we can continually make sure this situation doesn't happen to us when we ship a minor release? That was a lot of language; hopefully it made sense, Cecile.
E
So, on upgrade tests: we currently have two types of upgrade tests for CAPZ self-managed clusters. We have the Kubernetes workload upgrade test, which upgrades the Kubernetes version only; it does not touch the CAPZ version, so that does not cover any of this. And then we have API version upgrade tests; those test upgrading from one API version to another.
E
That offers some coverage, in the sense that if you add a breaking change or feature, it's going to detect it, because it wasn't in v1alpha before, so when you upgrade to the latest it's going to break. The thing about that, though, is that it only detects it in the release.
E
What I think our gap is right now is an upgrade test that takes the latest release version of CAPZ and upgrades to the branch, or, like, the controller image that was made from that PR, so that we can test it in PRs too. And, yeah, Stefan just pointed out that Cappy does have that; we just never implemented it in CAPZ. So I think that's our main gap, and I think that would solve, or address, what you're pointing at.
B
Okay, cool. I'll fold that into the "graduate AzureManagedCluster out of experimental" test section, because I think we definitely want that; I mean, arguably we want this for non-AKS clusters as well. Stefan, in that upgrade testing Cappy does, does the upgrade happen after the clusters have already been created? So you test the sort of in-place upgrade scenario, to make sure that new versions of Cappy work nicely with pre-existing infrastructure?
D
The test does; roughly, at the beginning of the test we create an additional workload cluster, and we convert this workload cluster to a management cluster. Then we install the old versions of all the providers, depending on what you want, and then we create a workload cluster with that second management cluster.
D
After that workload cluster is created, we upgrade the providers on that second management cluster, and once that is done, I think we do a scale-up, like a machine deployment going from one to two, and we verify that... oh, I'm not sure if we're doing that last part, probably not. But what we definitely verify is that, after the providers have been updated, no rollouts were triggered. So just upgrading from cluster API, I don't know, 1.1 to 1.2 shouldn't trigger any machine rollouts, and that's what we're verifying in our test.
D
Yeah, and so, once we're done with checking that no rollout was triggered, then we do the scale-up to see that the new providers actually work. But that's... that's it.
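
For reference, wiring that existing Cappy test into CAPZ would look roughly like this, reusing ClusterctlUpgradeSpec from the cluster-api e2e framework. This is an outline only: the exact input fields depend on the vendored cluster-api version, and the suite-level variables (e2eConfig, clusterctlConfigPath, bootstrapClusterProxy, artifactFolder, skipCleanup) are assumed to exist as in the other CAPZ e2e specs:

```go
import (
	"context"

	. "github.com/onsi/ginkgo/v2" // ginkgo version assumed to match the suite
	capi_e2e "sigs.k8s.io/cluster-api/test/e2e"
)

// Outline: stands up a second management cluster with the latest
// released providers, creates a workload cluster, upgrades the
// providers in place, and verifies no machine rollout is triggered.
var _ = Describe("When testing clusterctl upgrades [clusterctl-upgrade]", func() {
	capi_e2e.ClusterctlUpgradeSpec(context.TODO(), func() capi_e2e.ClusterctlUpgradeSpecInput {
		return capi_e2e.ClusterctlUpgradeSpecInput{
			E2EConfig:             e2eConfig,
			ClusterctlConfigPath:  clusterctlConfigPath,
			BootstrapClusterProxy: bootstrapClusterProxy,
			ArtifactFolder:        artifactFolder,
			SkipCleanup:           skipCleanup,
		}
	})
})
```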
D
If there is more to clarify or something, just write it down somewhere, in Slack or an issue, and then you can check out the current test, or whether you can extend it, etc. It's generally pretty open to just adding ways for providers to hook into the test to do more things if necessary.
E
I was going to type in chat; instead you caught me. No, I was just going to say: can we open an issue for the specific test? Because I think we do have the one that Matt mentioned, but this isn't really adding a test from Cappy that we don't have yet; it's reusing a test we're already using, in a different way. And I think we should definitely add it for both managed and unmanaged clusters, and maybe we can do both in the same test; who knows, going to investigate that.
A
So, I mean, yeah, the idea was... well, a few meetings ago we said we wanted... we changed the policies so that we're potentially doing a release, a patch release, every week. We initially said we want to check in with all of you here at this meeting, and then we quickly thought that's probably not sufficient to include everyone who cares, so I dropped a little poll in the Slack channel. But I don't know that that was enough to get people's attention; we didn't have anybody thumbs-up or thumbs-down it.
A
So there's a plus-one from Jack. But actually, the idea is (we were also talking about this, sorry, Cecile) that we would try to get it done today.
A
If we all agree. If it slips past today, we don't want to do it on Friday, so we might wait till Monday, but ideally we go ahead and do it today. Go ahead, Cecile.
E
Okay, yeah, I'm also plus-one for a release. And, yeah, I think, given how quick the release process is now with the automation, it's worth trying to do it the day of, because we also happened to say we would do a release last Thursday, then on Monday, and then it kind of slipped. And then the other thing is, we were talking with Matt, I think it'd be good...
E
...if we can assign an owner when we say we want to release, because otherwise it's kind of like: well, we want to release, but who's going to follow up on it? And I think right now, because of how it works, it has to be one of the maintainers, just because of permissions; like, you need permission to push the tag.
E
So, ideally, if we can just get one of the maintainers, or people who have maintainer permissions, in the call to kind of sign up, and we can rotate who does it every time, I think that would probably be the best way to be accountable. And, I guess, Matt, Jack, you're the other maintainers here; so, if you're okay with that...
B
There's only one kind/bug in the PR queue. I haven't done a full audit to make sure the labels are current, but that one is actually something I'll follow up with Cecile on offline; I'm not even sure we want it. So I think that's a really good signal that we want to release: basically all the fixed bugs are in at this point, and there's nothing that folks are waiting on to get merged.
E
The other thing that we might want to check as part of this, when we say "do we want to release, yes or no", is to maybe take a quick peek at testgrid. That's the thing we also forgot to do last time, which would have told us there was a bug and that we shouldn't release right away.
B
So, minus one: we have another, different known failure for AKS, which is undiagnosed, and which has to do with delete failures; the v1beta1 test, the regular v1beta1 test. I know the naming is kind of new, but at the top, if you go to... I can't even possibly explain where to find things on this page, so I'll stop trying. There's the one right above that, actually: this one, the periodic. That one, yeah; that's 1.5, so that looks really green.
E
Do we want to try to get one green signal from AKS after reverting 1.24 and before cutting the release, or do we have that signal elsewhere somehow?
B
What I'm saying is that I'm not sure that fixing that test doesn't actually require us to change our capabilities. And so, if it does, then that would need to land in the release branches before we release; or we would potentially use that to say we're not actually going to release, we're going to wait until AKS fixes this. I don't know; we'll have to huddle on that.
A
Okay, all right. And as far as who's going to actually execute the release: I've volunteered, since Jack has done the last several. And I guess we don't want to continue talking about it here; we're just going to huddle offline, or outside of this meeting, and figure it out.
A
So we will try to do a release today, unless we find that the AKS issue is something we can wrap up but it doesn't happen today; in that case, we'll wait for that fix. Is that the general plan?
A
Oh, what time are we... oh yeah, we're cool. Does anyone have anything else that's not on the agenda that we want to talk about?
C
I was hoping to get some guidance on the status update on ClusterClass and managed clusters, but I can also wait till next week, or if it's something quick...
E
Yeah, could you share it, and then we can maybe talk about it on that issue offline, I think.
A
Sounds good. Anything else?