Description
A Kubernetes community meeting about the Azure provider for Cluster API. Cluster API brings familiar, declarative APIs to Kubernetes cluster creation, configuration, and management.
A
Alrighty, everybody see that? Okay. As I said — yep, welcome, everybody. It is, what, September 15th, and this is the Cluster API, Kubernetes SIG Cluster Lifecycle, Cluster API Provider Azure office hours for September 15th. We meet here every week at this time on Thursdays, since we're a CNCF project.
A
We just ask people to please try to use the raised-hands feature and be polite to each other — not that we ever have any issues in this area, but it's part of our creed. And if you wouldn't mind adding your name to the attendees list, it helps people identify each other and maybe network or solve problems later.
A
Let's see, is there anybody here who's here for the first time and wants to say hello? And welcome — I'll be quiet for just a second. I think everybody looks like a veteran here, so.
A
All right, just because I see milestone review here, I was thinking about it. We didn't actually open the test-infra PR yet to enable a 1.6 milestone and have that be our main, so I was just doing that. So maybe we don't have milestone review today unless we want to start from scratch, but I think the process is you're supposed to open a test-infra PR first, and then that'll create the milestone for us. That's the proper way.
A
So
we
can
talk
about
that
at
the
end
or
we
can
just
skip
it
for
this
meeting,
I
don't
care.
Let's
get
on
to
open
discussion.
Ashitosh
I
want
to
talk
about
node,
outbound,
lb
bug.
B
Yes, it is a small bug when the AzureCluster name and the cluster name are not the same. I mean, I put more details on the issue — I've linked it. It's a little PR, and I think, yeah, it's just up for review. And I'm not sure, like, do we want to put a dot release out for this, or, you know, wait? I mean, Jack, you can comment on that, because it kind of blocks provisioning of clusters for which the AzureCluster name is not equal to the cluster name.
C
Yeah, I think we should. So I see that you addressed my one review feedback, yeah — the tests are good. I think we should merge this and include this in the next cherry-pick. Cool, so I will actually go ahead right now and just... I'm not sure if folks are aware, but you can identify a PR for cherry-pick before it merges to main, which is actually, I would argue, the preferred approach, so that reviewers who are approving PRs to main can better sort of serialize things. Sure, so I'll go, I'll...
B
Yeah, I'd like to, Jack — also, regarding the SSA markers, like the server-side apply: you know, I was also following through that, and, you know, feel free to... I can help there, chime in, if there is anything I can do, actually, if it's super busy.
C
Yeah, I could talk about that — all of that — after John's agenda item, and I can kind of go through the extent to which I'm also able to accomplish that.
D
Yeah, so last week, when we talked about adding AKS preview features to Azure managed clusters — I think now would probably be a good time to settle on a priority for that. So I get the sense from the input that folks have given already that, overall, AKS preview features are definitely worth adding.
C
Cool. So my take on this is that this is epic-ish, and I would think that, to the extent that we can continue to move this forward with, say, sixty percent of the assignee's focus, and then allow a little bit of overhead for other, sort of more near-term pressing items to be worked on... So if you're going to work on this, which would be amazing, some arrangement like that would make the most sense. So, from a priority standpoint, I would say this is from a medium-term standpoint.
E
I lowered my hand, but I didn't unmute myself. I'm torn on this one: because it is preview features, I feel like it's somewhat of a lower priority, but at the same time I really want, like, bring-your-own CNI.
E
So I kind of feel selfish in saying, you know, it's a higher priority from that perspective. So I think that maybe a slightly higher priority is kind of building out the framework to allow people to do this, and that would allow, you know, people who need this functionality to bring it themselves. So, you know, I could have a teammate of mine say: hey, yeah, you know, we want this, come in.
E
Here's the framework that's in place to do a preview feature — go ahead and do it. Even if they're not kind of a core maintainer, a core part of the community, they can come in and bring in that preview feature without having to build the whole groundwork of using preview features.
E
But I don't know that it's enough of a priority to say we're building out all these preview features before all these other GA features are completed. I would think the GA stuff should come first.
C
To be clear, my sort of characterization of what's valuable for this in the medium term is the additional API itself, and not necessarily any one or more of these preview features per se.
C
What I see is that the longer we don't have a way to expose preview features in AKS, there's just a sort of baked-in risk. Our Azure managed cluster customers aren't going to need these preview features right now, but I suspect that that will change at some point, and a feature will land that's preview and that's deemed to be extremely valuable — and then we're in a position where we literally have no way to expose those features, because we haven't done the engineering work to lay that surface.
C
So that's the sort of priority work that I would like to see: to see that surface laid, and maybe just a couple of weeks of exploratory — like, practical exploratory — effort, to just make sure that our idea of the scope of it is correct.
C
You know, I think that John and I, and Cecile, and some other folks have gone through a kind of intellectual exercise of imagining how this could happen, and so we have a decent idea that it's feasible and how much work it might cost. You know, I said earlier three months or so, but I think we're at the point where those intellectual assumptions need to be tested with a little bit of just rolling up the sleeves and poking around.
C
Go ahead. So I agree with you 100 percent on that, and we can make this clear in the issue: that really this issue is to expose that preview API, which, as far as I can tell, is more or less... I mean, this needs some exploratory work as well, but as far as I can tell, it's feature-compatible with the GA API. So maybe the scope of that work could be:
C
You know, the equivalent of turning on preview — like, when you're using the CLI, just running that command that basically enables the preview API, which essentially redirects your az aks calls (I'm still thinking of the CLI here) to the preview API. But the existing interface, against the GA API, as far as I can tell — it might not be 100 percent compatible, but it's very close to 100 percent compatible. So you can use GA features against the preview APIs, as far as I understand. So that could be:
C
The scope is to just allow the preview API in CAPZ — you can declare it somewhere in your spec, with no additional features — and then the preview features can probably be implemented like we're doing now with GA features, according to customer priority and our own sense of priority. I know there's a lot to take notes on. Was that sort of clear-ish for folks? Just my thoughts.
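A minimal sketch of the shape this could take, assuming the opt-in is a single switch that swaps the targeted API version wholesale — the function name and version strings below are illustrative placeholders, not actual CAPZ code or real containerservice API versions:

```go
package main

import "fmt"

// apiVersion picks which managed-cluster API version a client would target.
// Swapping the version is wholesale: the preview API is (near-)compatible
// with GA, so GA features keep working and preview-only fields appear.
// Both version strings are invented examples.
func apiVersion(usePreview bool) string {
	if usePreview {
		return "2022-08-02-preview"
	}
	return "2022-07-01"
}

func main() {
	fmt.Println(apiVersion(false)) // version used for GA calls
	fmt.Println(apiVersion(true))  // version used once preview is enabled
}
```

The point of the sketch is only that nothing per-feature is toggled — enabling preview redirects every call to the preview API surface, mirroring what the aks-preview CLI extension does for az aks commands.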
D
But yeah, so just looking at the issue, it looks like you have to swap out the entire API — you can't just, like, enable a single feature or something; you have to completely swap out the APIs. Is that correct? That's...
D
When I — just today is the first day I saw it — I was thinking that it was going to be much simpler than that, because you just kind of make that one API call, but it looks like you have to use the whole thing under the hood, which is interesting.
D
I mean, I think using the whole API is probably going to be the simplest way to get started anyway, but yeah, I'm not sure if there are ways where we can try to isolate things further than that.
C
Yeah, the easiest test is just to literally go through the SDK and look for preview features, and you just won't see them in the GA API versions — they don't exist. So that's the matriculation process. So when, as an AKS customer, you, like, wire your client environment to enable preview features, you're essentially, you know, redirecting where your underlying API endpoint is going to go when it makes those requests.
D
Yeah, yeah, I think so. So yeah, I think I'll take on this effort for now, at least, and so, yeah, I can start kind of fleshing out what the API might look like, and I'll definitely make sure to ask for feedback on that. So stay tuned. Yeah, that's all I had. Does anybody else have any last thoughts?
C
Yeah, just really quickly — so, what's the best way to... I'm going to link some docs.
C
And what it... CAPI 1.2 enables this, which allows for more atomic operations against sets of things — I'm going to go with "things". So that subnet spec up there, that was actually a good... or that, right there, yeah. But yeah, line 140 in types.go — you can see this. So this PR hasn't been merged, because we're not... well, the set of folks who are here...
C
Basically, Cecile is still on vacation, and I'd like to get her thoughts before it merges, because it will have a change in behaviors. But that subnets type up there is the type of atomic sort of resource set that is subject to edge conditions when not using what Kubernetes calls server-side apply. And so those decorations that are added as part of this PR are essentially pragmas that instruct the server-side apply functionality how to uniquely operate against items in that set. So in this case the set is an array of subnet specs, atomically.
C
So what we're doing, essentially, is telling server-side apply what type of data set it is — so that's where the listType is map, and the key is the unique key. So this allows sort of deterministic, exclusive access to items in that set. So when there are multiple controllers operating concurrently across items in that set, they can essentially take out exclusive locks on particular items.
C
So without these pragmas, when you have CRDs that are being mutated concurrently by multiple controllers, as you can imagine, the atomic transaction is indeterminate. So, as one controller, I start operating against one item in that set, I do some work, and I then update the set with the updated item. Another controller may be operating on that same item, or a different item in that set, without any sort of way to synchronize those behaviors as discrete atomic transactions.
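A toy sketch of that race, under the assumption described above: an un-annotated list is replaced atomically (a writer's full copy wins), while a listType=map list is merged per item by its key. All type and function names here are invented for illustration, not CAPZ code:

```go
package main

import "fmt"

// Subnet is a made-up stand-in for one item in the subnets list.
type Subnet struct {
	Role string // unique key (the listMapKey)
	CIDR string
}

// replaceWhole models an atomic list field: the writer's full copy wins,
// clobbering any concurrent change made by another controller.
func replaceWhole(_, writerCopy []Subnet) []Subnet { return writerCopy }

// mergeByKey models listType=map + listMapKey=role: a writer only touches
// the items whose key it sends, so concurrent edits to other items survive.
func mergeByKey(server, patch []Subnet) []Subnet {
	out := append([]Subnet(nil), server...)
	for _, p := range patch {
		replaced := false
		for i := range out {
			if out[i].Role == p.Role {
				out[i] = p
				replaced = true
			}
		}
		if !replaced {
			out = append(out, p)
		}
	}
	return out
}

func main() {
	// Server state: another controller already set the control-plane CIDR.
	server := []Subnet{
		{Role: "control-plane", CIDR: "10.0.8.0/24"},
		{Role: "node"},
	}
	// Our controller, working from a stale read, only wanted to set the
	// node CIDR, but an atomic write must send the whole (stale) list.
	staleFullWrite := []Subnet{
		{Role: "control-plane", CIDR: "10.0.0.0/24"}, // stale value
		{Role: "node", CIDR: "10.1.0.0/16"},
	}
	keyedPatch := []Subnet{{Role: "node", CIDR: "10.1.0.0/16"}}

	fmt.Println(replaceWhole(server, staleFullWrite)) // clobbers the other controller's update
	fmt.Println(mergeByKey(server, keyedPatch))       // both updates survive
}
```

With the keyed merge, the two controllers' writes compose; with the atomic replace, whichever writer lands last silently undoes the other's work — the indeterminacy being described.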
C
I feel like I'm a poor computer science teacher at a JC or something, trying to describe this conceptually. So the TL;DR is that we don't understand — we're not confident — without some more thoughts from Cecile, maybe some folks from CAPI. This was identified by Cluster API itself, so it was very, very generous of them to actually do an audit of our code as they were implementing the server-side apply functionality in 1.2, and they identified subnets as being the only thing. There was another — as you can see, there's...
C
Actually, this was quite a stream of stuff, but there's another type addition there, underneath — peering... sample spec — and I think there may be one more. So essentially I identified the properties that are set properties, and I think, you know, some other folks in CAPI did some more auditing and identified that subnets is the only one that is subject to multiple-controller operations. So that's really the condition under which you need to worry about this.
C
You know, we've been using 1.2 — so this shipped with 1.2 of CAPI, and CAPZ (I did the work, so I can speak to it) pinned to 1.2 a couple of months ago, six weeks maybe. And I think the easiest answer is that we just don't have test signal for this particular race condition, so I don't think that we know whether or not the code is subject to it.
C
So we definitely want to test it, given that lack of knowledge, and then perhaps work with CAPI — this is the kind of thing that would probably be a combination of functional and... and testing. I mean, it's pretty low-level, but obviously fairly nasty, so we want to make sure that we're doing this in the right way.
B
I'll just want to add on to what Jack said, James. This bug — actually, we will face it if we start to use CAPI with server-side apply on ClusterClass. So this essentially happens on ClusterClass, when we have an AzureClusterTemplate, and there we can specify those subnets.
B
So what happens is, once the CAPZ reconciler works, it actually pulls in a couple of other details, like the CIDR block and stuff like that, and writes it back to that subnet slice. But once that is done, the CAPI topology controller will still say: okay, there is a diff — and it will update it back. So essentially there are two things: one is having ownership of that particular ID, or that particular field, in the subnet.
B
You know, so to do that, the SSA API should be able to identify, you know, whether I should be adding to this list, whether I should be merging it or not. So for that we need a unique ID to actually, you know, enable this — so that's how these markers helped there.
B
I have not personally actually tried it, but I think if I try it, this issue will happen. But yeah, I have not, like, tried it.
B
So I'll get back on this. I think I'll try to deploy a ClusterClass. Provisioning is not blocked, so you will end up creating a cluster, but it will just oscillate — like, this is what I think will happen, because this happened with CAPA, because, you know, the CAPI topology controller will continuously start to patch it. Yeah, but, you know, I'll get back on this — I think it should be pretty quick to test it. Yeah. That's...
C
Fantastic. And if you could comment on — so there's a... if you could go back to the change that Matt... there, there. I'm going to go a little bit deep at this point. So we essentially tried to mimic CAPA's solution to this. So I think this is solved in CAPA, at least in main — I'm not sure if they've released it — and that's what you see in the Cluster API docs as an example of what providers must do when adopting 1.2.
C
So, if you're able to comment on your confidence in how that can be used, from CAPZ's perspective, as a unique key — it's not the unique key from the perspective of the underlying data type, but we can't use the unique key from the underlying data, because we passively consume it from Azure. And so the CRD generator has rules that don't allow you to use, as the value of listMapKey, a property...
C
That's either read-only or not required. And so the way we kind of hack around that in CAPZ, when we create subnets, is we pass the subnet spec to Azure, and then, in the payload response we get with the 201, we then update our representation of the data type with the authoritative information from Azure. And we define the front-end spec of this data type — like you can see, if you look at that, you go drill down to that subnet spec.
C
You can see that the ID property is marked as read-only, so from a user-facing point of view it's read-only, and the generator code doesn't like a listMapKey value with a property that's read-only. So, sure — if we were able to do that, we would have already merged it, because that makes perfect sense. So we're trying to find a user-accessible — and, basically, a required, user-accessible — value...
C
That in fact expresses a sufficient amount of uniqueness where this would work. And we think that there's only one role per — so, in that set, there's only going to be one subnet for a given role. We're not confident about that. Go ahead.
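A hedged sketch of the marker shape under discussion — role as the listMapKey, because the Azure-assigned ID is read-only and so disallowed by the generator. The field set is simplified for illustration and is not the verbatim CAPZ type:

```go
package main

import "fmt"

// SubnetSpec is a simplified stand-in for the CAPZ subnet type.
type SubnetSpec struct {
	// Role is user-supplied and required, so it can serve as the
	// server-side-apply map key — assuming at most one subnet per role.
	Role string `json:"role"`
	// ID is filled in from Azure's response; a read-only/optional field
	// is rejected by the CRD generator as a listMapKey.
	// +optional
	ID string `json:"id,omitempty"`
}

type NetworkSpec struct {
	// These kubebuilder markers tell server-side apply to treat the
	// list as a map keyed by role, rather than as one atomic value.
	// +listType=map
	// +listMapKey=role
	Subnets []SubnetSpec `json:"subnets,omitempty"`
}

// subnetByRole expresses the uniqueness assumption: a role addresses
// exactly one item in the set.
func subnetByRole(n NetworkSpec, role string) (SubnetSpec, bool) {
	for _, s := range n.Subnets {
		if s.Role == role {
			return s, true
		}
	}
	return SubnetSpec{}, false
}

func main() {
	n := NetworkSpec{Subnets: []SubnetSpec{
		{Role: "control-plane"},
		{Role: "node"},
	}}
	s, ok := subnetByRole(n, "node")
	fmt.Println(s.Role, ok)
}
```

If the one-subnet-per-role assumption ever changes, role stops being a valid map key — which is the hard-to-correlate breakage discussed just below.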
C
Can you have — I think for the control plane there's only going to be one control-plane subnet; that's just baked into the architecture. I'm less sure about the node role type. Does it make sense, on a single cluster, to have multiple subnets, each with the role of node? I think "node" is the allowed value.
E
I'm pretty sure that we only allow you to do the one role — one role name per thing.
E
So you can't have the same role name for more than one subnet.
C
We would want to make sure that — we might be doing that, but maybe it's possible that we shouldn't be doing that, and if that were the case, then that would not be correct. So, like, maybe in the future an orthogonal work effort would be like: oh, I need multiple node subnets — and we're like: cool, we can do that, we'll just remove the web of validation that prevents that. And then we've now broken this in a way that's really hard to correlate.
A
Cool, thank you. I've got a question. So does this — so we've been on 1.2 a long time now, but we never clued into this until recently. Does that point to a gap in e2e tests? Do we not — we do have a ClusterClass test, at least through CAPI, but we never triggered this bug, as far as I can tell. Does that mean we have e2e gaps we need to fill?
C
Add it to the list of gaps — yes. Well, yeah, I think the answer is yes. There's...
C
Well, maybe I didn't describe it well, because that actually wasn't my understanding — so we want to get clarity on that. I described it in a different way, where updates to that subnets resource — concurrent updates are going to be, you know... only one will win that race, and so one or more updates in that concurrent set will be dropped, depending on how much concurrency we're talking about.
C
So if that's not true — and it's more like just thrashing, it's not able to determine that there's no diff, essentially, when it's doing its reconciliation — then that is... I mean, that's something we would want to fix, and we could write a test to detect that kind of thrashing.
A
Cool, that's the end of the agenda. What else do people want to talk about?
C
Anyone...? That's awesome. I'll be there — I don't know exactly yet, which is a problem, and also probably predicts the answer is "no". So there was a CAPI talk that was rejected and then later accepted, which means that you have to go to your management chain at the last minute and get a budget approval. So, does anyone...?
D
We're going — I hope to be there, but budget may not allow. We'll see.