A
Hello everyone, today is July 22nd, 2021. This is the Cluster API Provider Azure office hours. As always, please abide by the CNCF code of conduct, and if you can, add your name to the attendee list. If you have any questions or any topics you'd like to bring up, please add them to the agenda.
A
All right, so I guess first, let's just do a quick release update. Since last time, we released the v1alpha4 version, so that's really good. Congrats everyone, and thank you for all the hard work. The v1alpha4 release is v0.5.0.
A
It's been out for almost two weeks now, since Monday of last week, and it's the first provider to support v1alpha4 of Cluster API.
A
So if you can, please try it out, give us any feedback, and open issues if you find any bugs or problems. In terms of patches, we did find a couple of bugs in the managed cluster implementation; those were previously known issues. There is also one bug related to using the AzureClusterIdentity with service principal secrets for the cloud provider that has been fixed, and I'm waiting for a couple of PRs to merge to release a patch.
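As a rough sketch of the setup being discussed, an AzureClusterIdentity backed by a service principal secret looks something like the following (field names follow the CAPZ v1alpha4 API as I understand it; all names and values are placeholders):

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
kind: AzureClusterIdentity
metadata:
  name: cluster-identity
  namespace: default
spec:
  type: ServicePrincipal
  tenantID: <azure-tenant-id>
  clientID: <service-principal-app-id>
  clientSecret:
    # Reference to the Secret that holds the service principal password.
    name: cluster-identity-secret
    namespace: default
---
apiVersion: v1
kind: Secret
metadata:
  name: cluster-identity-secret
  namespace: default
type: Opaque
stringData:
  clientSecret: <service-principal-password>
```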
A
So there will be a 0.5.1 patch coming soon. I think that should cover it. Does anyone have any general questions about releases, about v1alpha4, or anything else?
A
Cool, I will write a little summary in the notes for people who are reading along. And then, yeah, I guess Matt, do you want to give us an update on images?
B
Sure. I pretty much just typed it all there, but since we had some issues, and I reported on this at the last couple of meetings, I'm just trying to be consistent. We worked out most of the issues. They were not very interesting, inside-baseball Azure kind of stuff, but I think we're good to go now, and the last round of patch releases went out without too much trouble. So hopefully that's the way forward, and I'll probably just stop reporting on these at this meeting, unless we run into anything interesting.
A
Cool, thank you. All right, does anyone here have anything they wanted to discuss, or any questions about using CAPZ or developing in CAPZ, anything like that?
B
Did we say something about the systemd event from yesterday? Because people in Azure might have heard of it; it seems to be growing still.
B
Oh okay, sure. Some of you are probably familiar with AKS Engine. Essentially, yesterday or the night before, an Ubuntu 18.04 update (I think also 20.04, or maybe just 18.04) went out for systemd that contains some backward-incompatible behavior: the way AKS Engine and Azure CNI were configuring network interfaces wasn't strictly kosher, and the update invalidated one of those configurations. So the end result was a backward-incompatible change. It was upstream in Ubuntu, so it could get applied by unattended upgrades, and your cluster could pick it up automatically. It killed kubelet pretty quickly.
B
So if this happened on your control plane nodes, the end result is your cluster just kind of dies until you reboot; after a reboot, the network interfaces get set up again and things are happy. This unfortunately had a lot of splash damage in Azure, and the workaround was mostly to reboot the nodes, and we helped out with that.
B
The AKS service was okay because they have mitigation or remediation components actively looking at nodes, and if one of them becomes NotReady, one of the default actions is to reboot it. So essentially this didn't cause that big a ripple in AKS, because of their remediation behaviors, is my understanding. And in CAPZ, we don't have unattended upgrades on by default.
A
Well, Christian, did you want to discuss the bastion VM question really quickly?
E
So, maybe some context for somebody that didn't read the PR: I have this idea to support having bastion hosts, backed by virtual machines, to get access to CAPZ cluster nodes via SSH, for example.
E
So I was thinking about generalizing the thing and implementing it in CAPZ by adding some new fields in the AzureCluster CR, fields aimed at bootstrapping this new virtual machine. But I also wanted to reuse what we have already, so all the controllers, the AzureMachines, and so on and so forth, in order not to reinvent the wheel, because that doesn't make any sense.
E
But after a few iterations on the idea, I came to the opinion that probably all the changes I was thinking about are just useless, because at the end of the day I was simply embedding a whole machine template inside the AzureCluster template. I would need to have the machine template, the AzureMachine template, all bundled inside the AzureCluster CR. So my thinking is: what's the point?
A
So my main question on that was: my understanding is that the bastion host should just be a plain virtual machine that doesn't have Kubernetes running on it. And so, if you're going to reuse the machine pools or the machine deployments, then they're going to be part of the Kubernetes cluster, and we don't want that, right? We don't want additional Kubernetes nodes; we just want a separate Ubuntu (or whatever) VM that is configured to be reachable over SSH.
E
That's correct if you use the kubeadm bootstrap controller to bootstrap the machine, but if you provide your own secret with pre-computed cloud-init content, basically you don't have any Kubernetes in there.
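To illustrate the approach Christian describes, here is a sketch of a Machine that skips the kubeadm bootstrap provider by pointing at a pre-made Secret of cloud-init data (this assumes the Cluster API convention that bootstrap data lives under the Secret's `value` key and is referenced via `spec.bootstrap.dataSecretName`; all names are placeholders):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: bastion-bootstrap-data
  namespace: default
type: Opaque
stringData:
  # Pre-computed cloud-init content: no kubeadm, no Kubernetes components.
  value: |
    #cloud-config
    users:
      - name: ops
        ssh_authorized_keys:
          - ssh-rsa AAAA... ops@example.com
---
apiVersion: cluster.x-k8s.io/v1alpha4
kind: Machine
metadata:
  name: bastion
  namespace: default
spec:
  clusterName: my-cluster
  bootstrap:
    # Point at the pre-made Secret instead of a bootstrap provider.
    dataSecretName: bastion-bootstrap-data
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
    kind: AzureMachine
    name: bastion
```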
A
Makes sense. It still feels a little overkill to me to have to redefine the whole machine deployment for the bastion hosts. I have to think more about this, but yeah. Does anyone have any thoughts or questions on this?
F
Isn't there some other tooling that we had, I forget what it's called, that's not part of the Cluster API project, but is essentially reconcilers inside the Kubernetes cluster that go and create Azure resources for us? That seems like it might be an interesting add-on for the management cluster, to be able to provision extra components like this.
E
Sorry to interrupt. I mean, if we decide to use something external, virtually anything will do; you can use Terraform, you can use whatever.
F
I mean, yeah, I think that makes sense. If you want Terraform, then you have two different ways of deploying, whereas if you added ASO as an add-on to the management cluster, you would still be able to provision these additional VMs via YAML, like Kubernetes YAML, which would make the mechanisms of deployment the same. There would be a controller running, making sure those VMs are up, so it seems to fit a little bit closer to the whole model of Cluster API.
F
It was just a thought, so yeah. I can see other customers wanting to do other things, like provisioning additional VNets and things like that, that are outside the scope of Cluster API and won't necessarily fit into the APIs. And so, if we had this additional add-on in the cluster, it might make it simpler to meet those use cases.
E
Yeah, that is a very good point. Network flexibility is currently the most limited part of my idea. So yeah, that's a totally good point.
A
Thanks, cool. All right, anyone else? Any other topics, questions, thoughts? If not, let's call it a day. Thanks everyone, and go try out the v0.5 release. Let us know what you think, and see you all on Slack and GitHub.