Description
A Kubernetes community meeting about the Azure provider for Cluster API. Cluster API brings familiar, declarative APIs to Kubernetes cluster creation, configuration, and management.
A
I think everybody's here, so let's get going. Welcome, everybody. This is the Kubernetes Cluster API Azure office hours; it's April 15, 2021. Cluster API and the Cluster API Azure provider are subprojects of SIG Cluster Lifecycle in Kubernetes. As always, please try to use the Zoom feature to raise your hand so that we don't stomp on each other, and follow the Kubernetes code of conduct, which is essentially just: everybody, be polite.
A
I think that's... oh, and please put your name in the attendees list, if you have a minute to do that. We don't have any special recurring topics. Do we have anybody new who hasn't been here before and wants to introduce themselves?
B
Sure, I have not attended this meeting before. My name is Kenny Woodson. I actually think I've worked with, I think it's Cecile or Cecil, at Microsoft; I worked on the ARO project, the Azure Red Hat OpenShift project, in the very beginning with her. Since then I've moved into a platform position here at Red Hat, focusing on Azure, so any Azure-related topics and things I've been kind of overseeing, trying to assist our engineering team.
C
So currently we are focused on GitOps practices for cloud-native applications, and actually CAPI and CAPZ [inaudible] in conjunction with the multi-cluster setup for GitOps operations, and we are investigating deeply using CAPI and CAPZ for provisioning [inaudible] clusters following the secure access baseline.
C
So actually we are positioning that as a powerful approach to set up your multi-cluster, multi-tenant fleet with CAPI and CAPZ in a fully GitOps fashion, so you don't have any push-based CI pipelines and everything is done just declaratively, following, you know, pure pull-based GitOps. So we keep digging into this area, and actually I created an issue about some gaps between the secure access baseline and what we can do with CAPZ. So yeah, we're following this topic, and I think that we will.
A
Cool, all right, I think we're on to open discussion now. We have a couple of PSAs. Go ahead, Cecile.
E
Thanks, Matt. So yeah, just really quick ones. First of all, the 0.4.14 release is out, and that fixes a bug: basically, there was a difference in formatting between our provider ID format and the cluster autoscaler's, so we've changed ours to align with cluster autoscaler and also cloud provider.
E
So you'll notice three slashes now instead of four slashes. A very minor difference, but it makes a big difference in practice; so that fixes the bug if you're using cluster autoscaler. And then the second one I have is that Shayang, who's actually here in the meeting, is now a reviewer for CAPZ.
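The change Cecile describes can be sketched as follows. The subscription ID, resource group, and VM names are hypothetical placeholders; only the slash count is taken from the discussion above.

```shell
# Sketch of the provider ID fix in CAPZ 0.4.14: the scheme went from
# four slashes to three, matching cluster-autoscaler and the Azure
# cloud provider. All resource names below are hypothetical.
SUB="00000000-0000-0000-0000-000000000000"
RESOURCE="subscriptions/${SUB}/resourceGroups/my-rg/providers/Microsoft.Compute/virtualMachines/my-vm-0"

OLD_ID="azure:////${RESOURCE}"   # pre-0.4.14: four slashes
NEW_ID="azure:///${RESOURCE}"    # 0.4.14 and later: three slashes

echo "old: ${OLD_ID}"
echo "new: ${NEW_ID}"
```

Cluster autoscaler matches nodes to scale sets by parsing this string, which is why the one-character difference mattered in practice.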
A
The all-important slashes, crazy. Cool, all right. Next, Christian wants to talk about the e2e test for the Azure Bastion feature.
F
Go for it? Yeah. I know it's a deep technical topic, but maybe it's quicker to reach consensus if we talk about it. So I have a PR open about implementing the Azure Bastion feature, and there has been some discussion around the end-to-end test: if and where to add a test to check that this specific feature works. We didn't reach consensus on GitHub about where to put it, basically, so maybe we can have a quick discussion about this; I'm happy to answer your questions, or to continue it on GitHub.
F
The question was when this feature is most useful, in which kinds of scenarios, which kinds of clusters. And I think it's the clusters that have private networking only, so they don't expose the nodes with public IPs; I feel like that's the more natural usage of Azure Bastion. But yeah, if anybody has any idea about it, I'm open to hearing it.
G
Okay, so at some point... this was so much more than I wrote down. So I was thinking: at some point, the default should probably be a secure cluster with a bastion, and an unsecured, insecure cluster should be something you opt into instead of the default.
G
At some point, it would be nice if we could get that closer to best practices, rather than "hey, here is something that leaves you open to problems"; like, "here's what we think you should be having, and if not, here's the simplest thing." And I'm not sure if that really makes sense. Should we just have, like, a simple template, like "here's simple", instead of default, and then default is "hey, this is what you should have"? I don't know. That's not the only point of contention: you're saying put it in the default for the test flavor. I totally back that, I think that's awesome, and it achieves our goal and we don't have to make another flavor for it, so double, double cool. That was just my thought. Does that make sense?
E
Yeah, that makes sense. I agree with you, but I don't think this PR is the right place to change our best-practice default. I think, for me, what makes the most sense is to add it to the private cluster template by default and then use that to test it, since that's a use case where you'd probably want it by default. And also it allows us to test it without adding yet another test flavor, and I think that's what we want to achieve.
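For context, CAPZ template flavors are selected by name when rendering a cluster manifest, so folding the bastion into the private-cluster template would mean the existing flavor exercises it with no new test flavor. A rough sketch of selecting that flavor (the cluster name and Kubernetes version are illustrative, and flag names drift between clusterctl releases):

```shell
# Sketch only: rendering the CAPZ "private" flavor, which maps to a
# cluster-template-private.yaml asset in the CAPZ release. Generating
# it requires a configured management cluster, so guard the call.
generate_private_cluster() {
  clusterctl config cluster my-private-cluster \
      --infrastructure azure \
      --flavor private \
      --kubernetes-version v1.20.5 > my-private-cluster.yaml
}

if command -v clusterctl >/dev/null 2>&1; then
  generate_private_cluster
else
  echo "clusterctl not found; sketch only"
fi
```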
H
The challenge is, you know, the sort of hurdle to adoption: how hard is it to use that default? And if that's the default setup, you know, maybe we should have a "simple" and a "default". I kind of agree with David on that.
A
All right, I just thought of a couple of things I should mention, so I guess I'll go next.
A
Let me do these in the opposite order, because for this one there's not much to talk about. Just as always, we're trying to build the reference images for the Kubernetes patches that dropped yesterday, or actually very early this morning is when they really came out. We're running into a couple of build issues, and this image-builder thing that I worked on with Cecile previously isn't quite fixed. So, long story short, it'll probably be tomorrow, just because there's some [inaudible] we're doing, but hopefully nobody's waiting for those eagerly; it's in progress. And then, a couple of meetings ago, I mentioned that I was working on what I call the azcapi extension. Essentially, this is an extension to the az CLI that tries to simplify the CAPZ experience and give you kind of a one-liner that does all the setup and all that stuff.
A
There are some other goals there. It's not that far along, but I think it's at a point where it's sort of an MVP, so I was going to do a release next Tuesday or Monday and try to get feedback. But since we're not gonna have a meeting for a couple of weeks, I thought I'd put it in here and drop the URL in here, in case anybody's actually interested in checking it out.
E
No, I'm just gonna ask the obvious question: when should I use azcapi? When should I use clusterctl? Should I use both together? How does that work?
G
David, thank you. Sorry, I was fumbling to get to the hand button. Okay, so it's interesting to hear that you're trying to use this for all your cluster interactions. What is the benefit, like, what do you get out of it? Like, why?
A
Well, I'm doing that simply because I'm dogfooding my own code, obviously, and testing it. You get a consistent interface: if you're really used to az and the way it behaves, these commands are consistent with that; you know, they expect JSON to come back from almost every command, and then they can format it a variety of ways.
A
There are some behaviors of the az CLI you might be used to, but mostly you can just say "az capi create" with a couple of arguments, and, you know, it does everything for you and returns after the CNI has been installed and some of the nodes are actually ready and all that. Whereas, you know, you probably know, right now there are several steps in there that require a human to kind of poll and go: "Well, is it ready? Oh, did I forget to install the CNI? Oh, is it working now? Okay, now I can actually deploy a workload."
A
Yeah, that's part of it. Since clusterctl obviously has to support all providers, it's ended up, in most ways, being sort of the least common denominator of all those workflows. And so, you know, if you follow the quick start, there are several things that you just kind of have to cut and paste and do as separate commands and all that.
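The cut-and-paste steps being contrasted here look roughly like the following sketch of the Cluster API quick start of that era. The flags, versions, and CNI manifest URL are assumptions that drift between releases, and the az capi command shape is taken from the description above rather than the extension's actual surface.

```shell
# Sketch of the multi-step quick-start workflow that the azcapi
# extension aims to collapse into one command. Guarded so it is safe
# to run where the tools are not installed.
run_quickstart() {
  # 1. Install the Azure provider into the management cluster.
  clusterctl init --infrastructure azure
  # 2. Render a workload cluster manifest and apply it.
  clusterctl config cluster my-cluster \
      --kubernetes-version v1.20.5 \
      --control-plane-machine-count=3 \
      --worker-machine-count=3 > my-cluster.yaml
  kubectl apply -f my-cluster.yaml
  # 3. Fetch the kubeconfig and install a CNI; this is the step people
  # forget, and nodes stay NotReady without it.
  clusterctl get kubeconfig my-cluster > my-cluster.kubeconfig
  kubectl --kubeconfig=my-cluster.kubeconfig \
      apply -f https://docs.projectcalico.org/manifests/calico.yaml
}

# The hypothetical one-liner equivalent described above:
#   az capi create --name my-cluster --resource-group my-rg
if command -v clusterctl >/dev/null 2>&1; then
  run_quickstart
else
  echo "clusterctl not found; sketch only"
fi
```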
A
On the other hand, there's always going to be some kind of Azure-specific best practices that we know of. You know, maybe you should always use machine pools at some point and forget about machine sets, or whatever it is, and that would be something that would be appropriate, I think, to capture in the defaults in azcapi, so that people don't go wrong.
E
Oh, no worries. I was just gonna say, on top of what you just said: I think there was a discussion in clusterctl about having some sort of provider plug-ins at some point; that hasn't really moved forward. But the way I see this, this could potentially be like a POC that eventually ends up being...
E
...a plug-in for, like, the Azure provider in clusterctl. And the way that you built it, and correct me if I'm wrong, but it's mostly using clusterctl underneath, right? So it's not like it's rewriting clusterctl; it's just extending it, and...