Description
A Kubernetes community meeting about the Azure provider for Cluster API. Cluster API brings familiar, declarative APIs to Kubernetes cluster creation, configuration, and management.
B: Good morning everybody, good afternoon, good evening; I think we have all the time zones covered. It is May 27th. This is the Cluster API Azure provider office hours and bi-weekly meeting. We are a subproject of SIG Cluster Lifecycle and, as such, we try to follow the general Kubernetes community rules, which basically boil down to: everybody be kind, try not to talk over each other, and please use the raise-your-hand feature in Zoom so that our conversation can be more orderly.
D: Yeah, so I have a PR in flight which is bumping the CAPI nightly build to one from two days ago, and we hadn't done that in a while, since the end of March, I think. So there are a few breaking changes that have gone into CAPI that we need to adopt, and I'm working on getting those integrated.
D: So as part of that, I'm renaming the AzureMachinePool API group, following the DockerMachinePool example. It's now going to be infrastructure instead of exp.infrastructure. So just a heads up: if you're using machine pools, you're already being broken by the MachinePool change anyway, but if you're using Azure, you'll also need to change the AzureMachinePool group on your existing cluster.
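As a rough illustration of what the rename described above means for an existing manifest (the group strings follow the discussion, but the exact `v1alpha…` version suffixes here are assumptions for illustration, not taken from the release notes):

```yaml
# Before: AzureMachinePool lived in the experimental API group
apiVersion: exp.infrastructure.cluster.x-k8s.io/v1alpha3
kind: AzureMachinePool
metadata:
  name: my-machine-pool
---
# After: same kind, standard infrastructure API group
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
kind: AzureMachinePool
metadata:
  name: my-machine-pool
```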
D: But in general this is an experimental feature, and as such there are no guarantees of backwards compatibility. So I think the easiest path forward, if you are using MachinePool, is probably to build new clusters from the new version.
D: So that's the first thing I wanted to say. And then the other thing is, as part of that, I noticed we're also using this exp group in the managed cluster CRDs, so AzureManagedCluster, AzureManagedControlPlane, and AzureManagedMachinePool, and those are all CAPZ-specific.
D: And so I would propose to this group, and I can bring this up outside of the meeting as well if others aren't here, that we move those to the standard API group too, for the same reason: it's going to be a breaking change, and it's better that we do it now than later, when there are more users of it.
D: Cool. And by the way, if you are a user of the AKS feature, the experimental AKS support in CAPZ, that means you're, by inference, also a user of machine pools. So that means you're already going to be broken anyway, so we might as well break you once rather than have two different occasions where there's a breaking change.
D: We've got to rip off the Band-Aid. There are still going to be experimental features in the exp folder behind a feature gate, disabled by default, but this is changing just the API group for the group version, so that we don't have to do it later when we do want to move them out of experimental.
D: It should have been like that from the beginning, frankly. We just thought it'd be better to separate the groups and have an obvious distinction between the two, which made sense, but we also didn't realize the impact of having to move it later.
A: Yeah, I was just going to ask: I think Vince was saying in the last meeting that he expects the 0.4 release to be in the next couple of weeks or so, so I think that means we should plan our first release for a couple of weeks after that. I don't know how much work we have left there, or whether any breaking stuff is still outstanding.
C: Sure, yeah. So recently I had a PR open for reconciling Azure machine services in parallel, the ones that are not dependent on each other. This is slightly different from what David had done earlier with the machine scale sets. Basically, what I was trying to do here is reconcile the Azure resources that are not dependent on each other in parallel, and there was some feedback from David, and he also suggested a library to try out.
C: Ideally, we'd want to use both of these concepts together, but I'm not sure how, so I just want to get some feedback.
C: I haven't yet tried the library that David suggested; I'll probably do it next week. But if you have some time, please take a look at the PR and feel free to add comments.
E: Thank you. Let me try to frame it up a little bit so folks who maybe haven't looked at the PR can understand the two different ideas that are going on here. There is the idea that when you go and ask Azure to do something, it may take a period of time, and the way Azure shows this is that it creates an operation, and the operation is ongoing; and in the SDK, or if you were just using the HTTP API, you would ask the HTTP API: "Hey..."
E: Now, there's two types of concurrency that we can look at. The first type is that we go through and we run all of those serial calls concurrently.
E: So these are the two branches of concurrency. This PR had started going down the path of concurrently reconciling resources, doing that with, and correct me if I'm wrong, concurrent synchronous client requests.
C: Right, yeah: concurrent synchronous, for resources that are not dependent on each other.
E: Is the distinction between those two types of asynchrony clear? Nader, your hand's up.
A: Yeah, I wonder whether, before we add this whole complexity, we should actually measure how much this is helping. Because if five of the seven things are going to be dependent and wait for each other, are we really saving that much time to justify making the code more complex or not? So I think maybe part of using this library in this PR should be to actually measure the benefit of doing that.
E: Yeah, that was part of the concern that I had as well. And also, we can't actually do every resource concurrently, right? We actually have kind of a graph of resources, and that's kind of what the library that I was proposing does. But we don't necessarily need a library to do it; we know which resources, and we know the graph of resources. So it's really parallel batches of sibling levels of resources.
D: Yeah, I think I would like it if we applied the changes we made to AzureMachinePool to AzureMachines, maybe first, or vice versa: if we think this is the way to go, that we apply those to AzureMachinePool as well.
D: The only reason I'm saying that is that right now we tend to diverge on the implementations between AzureMachinePools and AzureMachines, and AzureMachinePool has kind of been this playground where we test new stuff, because it's experimental and because we can allow ourselves that, which is cool. But I think, if eventually we want to graduate it...
D: We need some consistency across services. And I think the whole async reconciliation, not this one, but the one that David was describing, where you don't wait for the full operation to complete before returning control to the controller, I think that's a good improvement in terms of UX, just because the feedback loop is quicker when you're a user.
D: You get to see what's going on with your machine as it's provisioning, as opposed to right now, where it doesn't even say "Creating": it just stays "Pending", or it goes to "Provisioning", but it has no status while it's creating. And if that takes a minute, that means you're not going to have status for a minute. So it'd be pretty cool if we could bring that change in there.
E: The difficulty that we have there is that in machine pool, we really only have one resource that we're building, and we track the state of that resource in the status of the AzureMachinePool object. So we can see that we have a future, like a promise, that needs to be completed, and we hold that in the status.
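A minimal sketch of what "holding the future in the status" can look like; the field and type names here are hypothetical stand-ins, not CAPZ's actual API types:

```go
package main

import "fmt"

// Future captures enough information about an in-flight long-running Azure
// operation that a later reconcile loop can rehydrate an SDK poller and
// resume checking on it, instead of blocking until the operation finishes.
type Future struct {
	Type          string // e.g. "PUT" or "DELETE"
	ResourceGroup string
	Name          string
	Data          string // opaque serialized poller state
}

// MachinePoolStatus is a stand-in for an infrastructure object's status.
type MachinePoolStatus struct {
	Ready bool
	// Non-nil while an operation is still in progress; cleared when done.
	LongRunningOperationState *Future
}

func main() {
	status := MachinePoolStatus{
		LongRunningOperationState: &Future{Type: "PUT", Name: "my-pool"},
	}
	// A reconcile that sees a stored future polls it instead of re-issuing
	// the request; once complete, it clears the field and marks Ready.
	if status.LongRunningOperationState != nil {
		fmt.Println("resuming operation on", status.LongRunningOperationState.Name)
		status.LongRunningOperationState = nil
		status.Ready = true
	}
	fmt.Println("ready:", status.Ready)
}
```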
E: If we were to combine these two concurrency mechanisms, you would end up needing to keep track of each of the resources that are currently in flight, that are currently in progress, if you will. So in the status of that object, you would probably want to track each of those futures. This can get a little bit more complex, and I think that's why we haven't really tried to do that with a logical resource, a CRD that actually represents many infrastructure resources.
D: Yeah, but for AzureMachine, the main one is the VM, right? That's the one that's going to be the long-running operation. So even if there's a bunch of smaller resources around it, there's still one main long-running one, right?
E: That's a really great point. So if we can knock the UX down to, say, responding in three seconds or five seconds, as opposed to waiting for the entire completion of the VM, that would be a huge improvement. Now, it's not going to respond in 200 milliseconds by throwing off all those requests and then updating status right away.
E: Maybe it does a few concurrently and then starts the VM build; that might be enough for us to be good. I don't know, I'm open to different ideas.
C: Or we could also, if we could somehow encode the group of resources...
C: I don't know, I'm just thinking out loud: encode the group of resources in that promise, and consider each group as one entity, so that when we reconcile that group of resources, it'll be like how we do it for the scale set, except it won't be for a single resource, it'll be for a group of resources.
E: Yeah, you could totally... what are the resources? It's a NIC, who knows what. What are the resources that you need to build before you build the VM, the machine?
E: In the AzureMachine reconciler, or is this in the... I'm sorry, remind me: is this AzureCluster that we're talking about, or AzureMachine?
D: ...already been built. So: public IP; outbound and inbound NAT rules for control planes (I'm reading); network interface; availability set, if there is an availability set; virtual machine; and then after the virtual machine we have role assignments, VM extension, tags.
E: So what if we stayed with concurrent-but-serial requests for the resources before it? Because those... well, those actually can't be built in parallel, right? Because you need the public IP, and the public IP is going to be on the NIC, right? Yeah. So the network configuration there is still going to need it. But public IPs are actually pretty fast to build; those are like one or two seconds, maybe. So what if we built that, built the NIC, and then the VM just gets kicked off and we store that promise?
E: And then after that promise succeeds, we finish it off with the last resources. That probably gets us the UX that we're desiring without a ton of extra complexity or a lot of work. And it might be just that initial investment; maybe afterwards we see that that wasn't quite enough and we do need a more robust approach to it.
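The flow being discussed here, building the fast prerequisites synchronously, kicking off the VM without waiting, and finishing the rest on a later reconcile once the stored promise completes, might be sketched like this; all the names and the poller shape are illustrative assumptions, not the actual controller code:

```go
package main

import "fmt"

// vmOperation stands in for an Azure long-running operation poller; a real
// poller would issue an HTTP GET against the operation URL.
type vmOperation struct{ pollsLeft int }

// Done simulates checking whether the operation has completed yet.
func (o *vmOperation) Done() bool {
	if o.pollsLeft > 0 {
		o.pollsLeft--
		return false
	}
	return true
}

type machineState struct {
	inFlight *vmOperation // stored "promise", as it would sit in status
	ready    bool
}

// reconcile is one pass of the controller loop: build the quick resources,
// start the VM if needed, otherwise poll the stored operation, and only run
// the post-VM resources once the VM is done.
func reconcile(s *machineState) string {
	if s.ready {
		return "ready"
	}
	if s.inFlight == nil {
		// Fast, serial prerequisites (public IP, then NIC) would be built
		// here, then the VM is kicked off without waiting for it.
		s.inFlight = &vmOperation{pollsLeft: 2}
		return "vm creation started; requeue"
	}
	if !s.inFlight.Done() {
		return "vm still provisioning; requeue"
	}
	// VM finished: run the trailing resources (extensions, tags) and finish.
	s.inFlight = nil
	s.ready = true
	return "ready"
}

func main() {
	s := &machineState{}
	for i := 0; i < 4; i++ {
		fmt.Println(reconcile(s))
	}
}
```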
D: This is also the worst-case scenario where everything happens, but that's not even possible, actually, because some of those are specific to workers and some are specific to control planes, so they won't ever happen together. If you're looking at a typical worker node coming up: the public IP won't happen, because that's only if you specifically allocate public IPs, so that's not happening; the inbound NAT rules won't either, because those are for control planes; the network interface is going to happen.
D: The availability set might not happen if you're in a zoned region, so you're not going to need that. The VM is going to happen. Role assignment's not going to happen unless you're using a system-assigned identity. And then the VM extension is going to happen, but tags are not going to happen unless you've specified additional tags. So really we're looking at three, and all of these are serial: you can't parallelize any of them, they're all dependent on each other.
G: Cool, hey, yeah. I think aligning the machine pools with the machines is a good idea, and I would also second maybe pulling off the one thing, like the VM, that's running really long, and making that asynchronous; those ideas sounded good. The other thing is, as you were listing out all the things that are dependent on each other, it would be nice to see that in a graph, almost as a design doc or something, just to visualize it and see what's happening; just throw some...
C: I was just going to try out the library that David suggested, and then measure how much time it takes and see if it's making a significant difference, and then we'll just decide the next step forward from there.
E: I think there's a lot of value in doing this to machine, "doing this" being making machine reconciliation asynchronous, and the route that would probably be most beneficial is to set up the promise, keep the promise in the status for the machine, and track the promise in a similar way to how machine pools are doing it. In cluster, if you're really interested in doing the concurrent approach with synchronous calls, cluster could benefit from that for sure.
E: So I think it really depends on which path you want to take. I think we do need to get to similar reconciliation patterns, as Cecile pointed out, in machine pool and machine, and cluster will probably be just a bit behind that, because those are probably our two biggest areas of usage and most impact for users.
C: Okay, so I'll just do the same thing that you did for the scale set for AzureMachines. Is that what we want to do going forward? Okay.
B: All right, I guess let's move on to the PSA about conformance testing.
D: Yeah, quick PSA: conformance testing broke on the Kubernetes main branch with our usage in Cluster API. There's a PR that merged in Kubernetes that basically changed the Dockerfile for the conformance image, and because of the way we were using that image in the Cluster API framework, it completely stopped conformance testing from running on the main branch. So this doesn't affect conformance testing with released versions of Kubernetes, only the runs against the main branch, but that means it affects any PR tests that we run with conformance.
D: There is a fix, merged in Cluster API by the author of the original Kubernetes PR, which updates the way we're using it to work with the new approach. That is presently not being used in CAPZ, because we need to pull in the latest CAPI framework, which I'm working on doing as part of the other PR.
E: I'm sorry, I have been pulled away into other things. I need to pick this back up and add tests, and thank you so much for the reviews, Cecile, Nader, folks who are taking a look at it. It'll also need a rebase after Cecile changes the groups. I'm going to race you, but I'm pretty sure I'm going to lose.
E: Dude, I'm going to lose. So I will have a nasty rebase, but we'll get that in, and I feel like I should be able to finish up the tests tomorrow and then just get the rebase in. I think we're pretty close there.
D: Yeah, so that one we actually need to get finished if we want alpha4. So I've been working through those. I should actually make it clearer which ones are done and which ones are not done yet, so I'll do that today and go through them, and maybe add a checklist with checked items for the ones that have already been done. But I've been going through them: the managed disk storage account type, that's done, that merged; renaming resourceGroup to resourceGroupName...
D: I don't know if that's really necessary. I guess we can discuss the ones that I'm not sure about in the issue. But yeah, and then, yeah: remove availability zones, we did.
D: There was one about subnets, changing the subnet type to a map. That PR got closed, because we decided it wasn't following the convention of not using maps in controller-tools CRDs, so we're not doing that. But yeah, I've linked all the PRs that went in for this one already. So...
D: An issue for it, yeah. So I'm doing that as part of the experimental group rename. What happened is, because I changed the group and it's now the same group as the non-experimental features, there's now a conflict between AzureMachineTemplate and AzureMachineTemplate, which are two different types with the same name defined in two different CRDs, or one of them is a CRD.
D: So, can you scroll down? I think Nicholas, or someone, was going to... they said this week that they were going to start working on it.
G: Hey, yep, sorry. Yeah, so somebody suggested that we align the naming differently, and I think it's probably a good idea. It's been pretty straightforward, but it would be a breaking change, I believe, and I don't have any plans to do that right now.
D: Yeah, yeah. I guess the reason for putting it in the milestone is that it's a breaking change, so if we don't do it for this one, we'll have to wait until the next minor release, which might be a while. So it all depends on how important or not important this is. And James, if you're unable to work on it, would it help if someone else takes it on?
D: Maybe we mark it as "help wanted" and then publicize it in the channel and see if anyone has cycles to take it. Yeah.
C: Warnings, yeah. I think we discussed this in the last office hours: we planned to port that fix and then give some kind of warning saying that they should move all the secrets to the new...
B: No? All right, I guess we'll call it good for this week. Thanks for coming, everybody; it's always great to see you. We'll catch you in June.