Description
A Kubernetes community meeting about the Azure provider for Cluster API. Cluster API brings familiar, declarative APIs to Kubernetes cluster creation, configuration, and management.
A
Cool, all right, welcome everybody. It's April 14th. This is the Cluster API for Azure office hours meeting. We do this every two weeks. We're part of the umbrella organization, SIG Cluster Lifecycle, and as such are part of the Kubernetes world. We follow the standard Kubernetes community guidelines, which you can read about in the document, but which essentially boil down to: let's not interrupt each other, everybody be kind, and try to use the raise-your-hand feature in Zoom so that we don't accidentally stomp on each other.
A
That's most of the intro stuff. If you have a second, please add your name to the attendees list here, just so people can put names to faces in the future.
A
Do we have anyone who's here for the first time who wants to say hi and why they're here? I'll just be quiet for a minute, if you want to unmute.
B
Everybody looks familiar to me, but you never know. Go ahead.

I'm somewhat new! I'm Evan Freed, I'm here with Elastic. We're exploring Cluster API for building Kubernetes regions and multi-cloud providers, so we're kind of starting with Azure, since that seems to be the one that's furthest along, at least from our testing. So we're pretty excited to explore it, and I'm just learning a lot as I drink from a fire hose here.
A
Cool, I guess that's it for new folks, so we can move on to open discussion. I guess I put the first thing in here. Megan, I'm glad you're here, because this is probably super annoying to you as well, but it looks like we went ahead and merged the API diff change (or I went ahead and merged it), and then in practice it's not working right. So I guess we need to revert it today, I mean.
A
Okay, this here, 2206. We took a lot of time, we thought, to get it correct, and then in practice it looks like it's not, once we merged it. I created this other PR. Jack had a great suggestion: we should always do something like this to make sure it's actually working. So I created a silly PR that should be listed in here somewhere. There it is.
A
And
I
ran
it
and
then
the
api,
and
it
should
have
told
us
that
it
found
a
few
files
that
I
changed,
but
it
didn't
so
something
went
wrong
and
I'm
not
entirely
sure
why.
So
I
guess
we
should
go
ahead
and
revert
this,
so
we
have
some
more
time
to
figure
out
why?
Because
this
isn't
system
critical
or
anything,
I
just
want
to
put
that
out
there.
The
other
approach
we
could
take,
perhaps
is
just
let
it
sit
and
try
and
figure
it
out
without
reverting
it.
E
Yeah, just plus one for revert, unless there's a very obvious fix that we can make quickly, that would be as quick as reverting. But I'm kind of surprised, because I had tested it locally when we were debating which pattern to use, and it was working locally. So I'm not sure what's going on.
A
Yeah, I saw the same thing. I tested it locally and it looked like it was fine, but I don't know.
A
There's obviously something different, yeah. The only output we get is on the job itself. You can look at the API diff job, and it succeeded, which it probably should have anyway, but she changed it so that this is super useful: now we know the regular expression didn't match any files, and I don't understand why not. So I guess we'll revert it. And Megan, I don't know if you even want to work on this anymore, because it's been a couple weeks, but that's where it's at.
F
Yeah, I was just going to say that. Yeah, we can revert it, and I'll test different solutions, or stick to the first thing that we added in the grep.
A
All right, unless anyone else has any comments about 2206. Cecile, do you want to talk about the managed cluster reconcile? We just need.
E
If you have time. It's a really big one, but basically what this is doing is: we've refactored most of the services over the past several months to change to an async model, where we don't block on reconcile to move on in the controller, so we can get quicker feedback to the user about what's going on. This is basically applying the same changes to the managed cluster service, which is a little bigger and has been doing things a little differently than we were in other services.
E
So I did some small refactors here and there to make it follow what we were doing in other places. And then there are two side refactors that happen in here: I split up the managed machine pool and the managed control plane scope (that's the file you're looking at here), because it used to be one big file.
E
So
I
just
created
two
different
files
and
then
I
also
moved
a
bunch
of
validation
that
was
happening
directly
in
the
controller
at
the
webhook
level,
so
that
we
were
validating
when
things
were
created
rather
than
as
they
were,
selling.
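Moving validation from the controller to a webhook means bad input is rejected at creation time instead of surfacing later as a mid-reconcile failure against the cloud API. A minimal sketch of the shape of that pattern; `ManagedClusterSpec` and its fields are hypothetical, and a real webhook would implement this via controller-runtime's admission interfaces rather than plain functions:

```go
package main

import (
	"errors"
	"fmt"
)

// ManagedClusterSpec is a hypothetical spec for a managed cluster resource.
type ManagedClusterSpec struct {
	Version string
	SSHKey  string
}

// ValidateCreate mirrors the webhook pattern: it runs when the object is
// created, so invalid specs never reach the reconcile loop at all.
func (s ManagedClusterSpec) ValidateCreate() error {
	if s.Version == "" {
		return errors.New("spec.version is required")
	}
	if s.SSHKey == "" {
		return errors.New("spec.sshKey is required")
	}
	return nil
}

func main() {
	bad := ManagedClusterSpec{Version: "v1.23.5"}
	fmt.Println(bad.ValidateCreate()) // error: spec.sshKey is required

	good := ManagedClusterSpec{Version: "v1.23.5", SSHKey: "ssh-rsa AAAA..."}
	fmt.Println(good.ValidateCreate() == nil) // true
}
```

The design trade-off being described: a webhook gives the user an immediate, synchronous rejection from `kubectl apply`, whereas controller-side validation only shows up asynchronously in conditions and events.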
D
So I have a question about that, actually. Sorry, Cecile, is that a test flake on that PR, or is there something else going on with the way you're refactoring? That's red, actually. I'm just looking at it. Oh, that one: the cluster-api-provider-azure e2e exp job, yeah.
E
That's the e2e test, so I need to look at why it failed. I think I pushed something last night that might have broken it, so I'll take a look. Thanks for noticing that. Yeah, that one should not be failing; we definitely want that one passing for the PR to merge.
A
All right, any other comments about that, other than please take a look if you have time?
A
Cool,
we
don't
have
anything
else
on
the
agenda,
but
maybe
should
mention
I
see
ashitosh
couldn't
make
it
to
the
meeting,
but
that's
because
he's
busy
with
some
presentation
about
cap
c,
which
is
great
so
I'll,
post
that
I'll
put
that
in
the
agenda
here.
Just
so
people
can
follow
up.
Does
anybody
else
have
anything
they
want
to
ask
bring
up?
This
is
open
time
for
everybody.
D
I actually have something. I thought I saw that we were removing, I think, the machine pool stuff. In an unrelated conversation I happened to have somebody asking me about that, and.
D
The machine pool stuff, yeah. I'm just wondering if that has any CAPZ-specific things happening, or is it just happening in upstream CAPI?
A
It's happening in upstream CAPI. It's moving slower than we wanted, but that's mostly because people are asking really good questions about the design, and I'm having to address those questions. But I think we're all converging on a consensus pretty quickly here. I have implementations of the proposal that work for Docker in CAPI and also for CAPZ. CAPZ kind of forged ahead, like AWS also did, and implemented Azure machine pool machine resources.
A
So
when
you
have
a
machine
pool,
we
actually
have
native
resources
that
map
to
each
instance,
even
though
there
isn't
a
representation
of
the
same
thing
at
the
cappy
level.
So
this
so
one
way
to
look
at
it
is
azure
and
aws
all
kind
of
forged
ahead
and
now
they're
going
to
glue
their
existing
implementations
back
into
the
kind
of
unified
idea
of
machine
pool
machines.
But
I
don't
think
that'll
be
a
lot
of
work.
In
fact,
I've
already
done
that
for
azure,
so
yeah
so
tldr.