From YouTube: Kubernetes Cluster API Azure 2019-10-10
A: Hello everyone, welcome to the October 10th edition of the Cluster API Azure sub-project meeting. This is the kickoff meeting, so welcome to everyone who has joined the call so far. This is a meeting that is recorded and available on the internet, so please be mindful of what you say, please be sure to adhere to the Kubernetes code of conduct, and in general, just be awesome.
All right, so we've got a few things on the agenda. Since this is a kickoff meeting, I want to give some time to do some introductions.
B: I've been working on CAPZ since we first migrated it from a company-based repo and then moved it into the community SIGs. I think my number one contribution to the project has been basically serving as a rubber duck for Steven this whole time. So yeah, I'm hoping to get to meet everyone and continue working with everyone on this project. I think it's a lot of fun.
A: Welcome. Awesome, okay, intros out of the way, let's get to the agenda now. We've got a few things. We've done the welcomes. Next, I'll share my screen and we can take a quick look at the Testgrid board. We've got a primary board and we've also got a board for image pushes, so let's see what we're looking at. Okay, alright.
A: So we care about the push-images job, and the PR test, PR build, PR integration, and verify jobs. I believe the verify-boilerplate check is now folded into the verify job, so these older tests will eventually go away once they have timed out in terms of test results. Basically, there's a timeout period for test results, and now that both the v1alpha1 and v1alpha2 branches are strictly on the new tests, those will fade away soon. Nothing to worry about there.
A: Everything is passing on master, happy days. On the image-pushes board we had a few failures at the beginning. I think this was when I was still wiring up the jobs, immediately after we had cut the v0.3.0-alpha.1 release or something. So there were some initial failures there, but those have cleaned up since then.
A: So, how many people on the call are not familiar with what we're talking about when we say v1alpha2? Okay, cool, all right. For anyone who is listening to the recording later: there's a differentiation between v1alpha1 and v1alpha2. These are essentially API versions of Cluster API. This is work around aligning the downstream Azure provider with the upstream Cluster API versions, and we're currently working on the v1alpha2 work.
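For anyone following along at home, here is a minimal sketch of what the v1alpha2 split looks like in practice; the names and field values below are illustrative of that era of the API, not quoted from the meeting:

```yaml
# v1alpha2 sketch: the generic Cluster object delegates provider details
# to a provider-specific infrastructure object via infrastructureRef.
apiVersion: cluster.x-k8s.io/v1alpha2
kind: Cluster
metadata:
  name: my-cluster
spec:
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
    kind: AzureCluster
    name: my-cluster
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: AzureCluster
metadata:
  name: my-cluster
spec:
  location: eastus             # illustrative values
  resourceGroup: my-cluster-rg
```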
A: It has been going phenomenally well. Large thanks to Cecile for crunching away on that for the last, I guess, little less than a month now; we turned this around quickly. And thanks to Vince for advising us and making sure that we were doing the right things as we were transforming the repo from v1alpha1 to v1alpha2.
C: So yeah, image builder. For people who aren't familiar with what this means: there's a new requirement, since we're now using the kubeadm bootstrap provider in common with the other Cluster API providers, and that new bootstrap provider has a requirement. It expects the images that are used to provision VMs to have some things pre-installed, like CNI and kubernetes, so we basically couldn't build v1alpha2 clusters before we had those images built. I expect the image-building process will evolve in the future, and it seems pretty new from what I've seen and what I understand, but so far we just wanted to match the requirements of what's there, just so we can build clusters that are functional. Where we're at is: there's a PR that had been started and that I took over, which is merged now, which basically adds Azure to the image-builder repo. What that is is just a Packer definition, and it builds images in two different formats.
C: VHDs, and managed images which are put in a shared image gallery. Right now this is just a definition, so anyone can use it to make an image in their own subscription, and then once you have that image, you can use it to build a CAPZ cluster. So yeah, I think we need to figure out how to get those available publicly, how we want to enforce defaults for images, and how we want to have a process around publishing new images. You're muted, Steven.
A: Yeah, so last week I was playing around with this a little bit while you were coming back from the conference, and I think we made the decision to publish both the VHDs and the managed images (the managed images publishing to a shared image gallery) for the sake of people who are interested in building these images outside of the standard publishing process. And, someone please correct me if I'm wrong, this is based on my digging around: the requirement for publishing images to the marketplace is that the image is a VHD. Yes, so we need both formats, where the VHD is not necessarily useful to someone who may come in and just want something that is published behind a corporate firewall or in their own company subscription.
C: While you're finding that: I actually got in touch with the marketplace people, and they're working on adding support for publishing managed images, so we wouldn't have to use VHDs. That's not available right now, though. Okay.
A: So, the PR that I just posted in the chat, and am dropping in the agenda right now, will allow you to choose between image types: you can specify images that are posted to a shared image gallery by ID, or by specifying the publisher, the name of the image, the subscription ID, resource group, gallery, and the version. So right now in CAPZ you can specify either published images or shared image gallery images.
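As a rough illustration of the two reference styles being described, here is a sketch of an AzureMachine image block; the field names are a guess at the CAPZ API of that period, not quoted from the meeting:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: AzureMachine
metadata:
  name: my-machine
spec:
  image:
    # Option 1: reference a shared image gallery image by full resource ID.
    id: /subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Compute/galleries/<gallery>/images/<image>/versions/<version>
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: AzureMachine
metadata:
  name: my-other-machine
spec:
  image:
    # Option 2: reference the same image by its individual coordinates.
    sharedGallery:
      subscriptionID: 00000000-0000-0000-0000-000000000000
      resourceGroup: cluster-api-images
      gallery: ClusterAPI
      name: capi-ubuntu-1804
      version: 0.1.0
```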
C: So I'm actually already past that. I have a branch that I've been using to test, and I removed all the preKubeadmCommands, and the post ones that you added for the add-ons, and it works: I was able to build a cluster successfully with an image that I built with the current image-builder code. Okay.
A: Cool, so yeah, I think I might have shared this with you as well: I have a branch floating around that... okay, you actually have a PR that I think gets us the last mile of it, yeah. So, it's around removing the preKubeadmCommands. For people who are not familiar with preKubeadmCommands and postKubeadmCommands: there is now a bootstrap provider within the Cluster API ecosystem.
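For context, those hooks live on the kubeadm bootstrap provider's config object. A minimal sketch, with illustrative names and commands rather than anything quoted from the meeting:

```yaml
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2
kind: KubeadmConfig
metadata:
  name: my-controlplane-0
spec:
  preKubeadmCommands:
    # Shell commands run on the VM before kubeadm init/join.
    - echo "prepare the host here"
  postKubeadmCommands:
    # Shell commands run after kubeadm completes, e.g. applying add-ons.
    - echo "apply add-ons here"
```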
A: The reason we're applying the add-ons as a postKubeadmCommand is because our clusters are not up, or they're not recognized as a running control plane, by the time that the command is complete, if you're using make create-cluster-management; I think that's the name of the target. So it's one step of create-cluster, create-cluster-management.
A: Okay, cool. That will increase the timeout for the apply-add-ons phase of the make create-cluster-management target, which will allow us to essentially wait based on what Cecile and I saw turning up clusters: it takes a little less than five minutes for the first control plane to come up once you're using a built image.
K: It's like there's logic in CAPA that picks the image based on the kubernetes version you specify. Same for CAPG, although CAPG, I think, actually only looks at the 1.x minor version and doesn't look at the patch, and then you could just say, I want the latest of this. Okay.
A: So I'm wondering, in terms of versioning the images: I know we're kind of going back and forth on what we want to call them. I think it makes sense to encapsulate the fact that it's Cluster API, plus the OS and the OS version, and then what should our version be? I was initially thinking it should be based on the CAPZ version first, or should it be the kubernetes version?
A: What I'm worried about... all right, let me just add that.
A: What we need is something that works for the quickstart. So when we add something to the quickstart, it needs to be able to work out of the box; there should be limited guesswork for whatever user is picking it up. I think that pretty much everyone on the call that has been working on this across VMware and Microsoft already knows the intricacies of doing it if they needed to, but we can't make that assumption for anyone else.
A: From what I saw of the quickstart, it includes the infrastructure components YAML and then some instructions about how to curl and base64-encode certain things if you need to, depending on what provider you're dealing with. So we need them to not be required to use the repo: they should be able to curl the components and get it done.
A: Okay, cool, so I'm just adding this as an action item; I can add this to the board later. I know that Ace, you had your hand up.

B: That's not... sorry. Okay.
A: I think so. So, David, something like that. I'm not sure that we need to differentiate: we can call the publisher name cluster-api, or it can be kubernetes, and the offer can be cluster-api. I don't think that we need to say that it's CAPZ, because it's on Azure. Then the SKU would need to include the... it would, yeah.
C: Something else to take into account that we've run into previously building images: I think there's a hard limit on the number of versions that you can have per SKU, per publisher, so we can't have one per kubernetes version, per distro. That makes sense. Okay.
A: Yes, I think we had maybe a difference of opinion about whether the e2e tests or the unit tests are the right target. I would like the e2e tests to be the primary focus, because we can shove those into a presubmit. I know that we care about the behavior of the individual pieces of the code, but I also care about the behavior of the cluster, and I think that is probably the higher priority.
A: Okay, cool, all right. So, I see a new task here. Before we do the project board review, I'm going to drop that to the bottom of the list, because that will take the longest. And yeah, machine pool: Juan Lee, do you want to talk about that a little bit?
G: Please give me feedback there. I'm in the process of wrapping up that code, moving my POC code into the CAPZ repo, so that we can start to play with the concept, and once that's ready I think people can start looking at it and trying some things out. I think that's really about it in terms of updates. Did anybody have a look and have questions that they wanted to talk about in person?
A: So, just to frame it for people on the recording: the machine pool concept is essentially that we want to create a new type that would be provider-agnostic and provide the means to express what scaling could be across providers. You know the classic autoscaling: I have X amount of replicas, I want at least this many, I shouldn't go above this many, and I should do these things when... So: abstract as much of that as we can away into a singular type, and then have provider-specific implementations of that. An AzureMachinePool would map to the canonical autoscaling construct within your provider, whether it be VMSS in Azure, or ASGs in AWS, or MIGs in GCP. The idea being that we can basically push some of the logic of doing autoscaling to a provider-specific implementation, over to the provider, and then sync that status back. All right, so that's the proposal.
A: Take a look; we are going through the second read-through only, okay. So I am due for an edit pass. I will work on that tomorrow, and in short order after that we want to present it to the wider Cluster API group.
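As a very rough sketch of the shape being proposed (this was still a draft proposal at the time, so every group, kind, and field below is hypothetical):

```yaml
# Hypothetical machine pool manifest: one provider-agnostic object that a
# provider-specific resource (here an Azure VMSS-backed pool) implements.
apiVersion: exp.cluster.x-k8s.io/v1alpha2
kind: MachinePool
metadata:
  name: my-cluster-workers
spec:
  replicas: 3
  template:
    spec:
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
        kind: AzureMachinePool    # would map to a VMSS on Azure
        name: my-cluster-workers
```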
G: So I just refined that description a little tiny bit. I would say that the first and foremost concern that we're trying to address is being able to manage a set or a group of machines as a single configuration, and then one of the features or benefits that we get from that is autoscaling and all that. I just wanted to clarify that, because we won't actually do the autoscaling stuff ourselves; I think that can even live out of tree.
K: ...you know, we need more machines. Oh yeah, on the proposal I had one comment: the implementation pieces are not super clear, so you might want to spend a little bit more time to spec out what the implementation will actually look like. The diagram looks good, and just explaining the flow a little bit in the sentences would be great. Okay.
B: I'm a little bit curious: I saw at the bottom of the machine pool notes that on Friday there was some discussion, and it looks like people floated the idea of, why are we not just using MachineDeployment? Can someone summarize the TL;DR of why that happened? Because it seems like there was discussion, but in the notes there's not much resolution as to why. Yeah.
G: So, if I remember correctly, the idea was that it's interesting to think about combining the two, but for now we're going to go ahead with having a specific type for machine pool, and then, if we find that they end up being nearly identical, we can merge them back together.
K: A little bit more than that. I admit that I pushed for this, because I'm scared of changing things that we know work, especially because everybody today knows MachineDeployment; that's one thing, and it does it really well. So if we added machine pools to MachineDeployment as a new optional field, like you say, then when we changed logic we would branch out in this path. That could be a possibility, but what that does is...
K: Then every time you talk about a MachineDeployment and a request comes in, it's like: are you using the pool side of the MachineDeployment, or a machine-backed MachineDeployment? That will create a split in communication first, and the second thing is it will create weird code paths that we will need to take into account: so it's a machine, but with this configuration it won't...
A: Yeah, we're also getting the benefit of, you know, Juan Lee is working on this stuff and he's going to bring the changes into the machine pool branch within CAPZ. What's nice about that is we can define all of the types within CAPZ without having to work on it within CAPI, until we prove it out. Yeah.
A: All right, so let's get into the project board. I have tried to do some of the grooming over the last week or two, but there's still a bit to do, and I didn't want to do it in a vacuum by myself; I wanted to do it with all of you. So the first things that we should take a look at are: let's see if there are any new cards, or old cards that are not merged yet. Okay, no. All right.
A: So, everything below this line, basically the add-v1alpha2-types card, I'm not adding into the board. The way I'm breaking up the board is: all the stuff that was done a while ago is here in the done column, everything that was done for 2019 Q3 is here, and Q4, which we're in right now, is here. So you can see that we have done quite a bit of work, and it's really just been less than a month.
A: There is 0.3, which is v1alpha2; that's our release-0.3 milestone. And then Next is, for people who have done project management across the kubernetes projects before, Next is the stuff we're punting on, and Baseline has been: we need these things for the repo to work, like a release branching strategy. I want to move these two actually into v1alpha3; we kind of know what we're going to do here for the release branching strategy.
A
Know
myself
and
do
a
little
write-up
and
then
close
that
out
essentially
we're
using
master
as
the
essentially
the
next
release
candidate
and
right
now
we
have
a
zero
of
release:
zero
two
branch,
which
is
V
1,
alpha
1,
that
is
I,
think
I.
Think
it's
fair
for
us
to
declare
that
we're
not
going
to
do
any
anymore
of
V
1
alpha
1
releases.
A
All
right
and
sweeping
to
dues
I
also
open
this
one
I'll
make
it
a
priority
for
V
1,
alpha
2,
essentially,
I
looked
over
them
fairly
recently
and
I
think
that
what
we
have
in
there
are
kind
of
code
hints
like
there
are
places
where
there
are
two
dues
about
tagging
right
and
they're
commented
out
they're.
Basically,
the
commented
out
functions
for
tagging
or
additions
to
the
types
so
just
reminding
someone
who
does
pick
up
the
tagging
feature
to
make
sure
they
hit
those
points.
A: Let's take a look at... okay, all right, 100% complete. I'm going to close out the v0.2 milestone as well. Okay, so we're just doing 0.3 and Next now, which is great. Let's go back to the board, and we can sort by milestones, or filter by milestones, and start talking about some of the stuff. So we already talked about the image building and publishing, the release TODOs, and release engineering tooling.
A: I tagged some of the people who are working on the release notes upstream in SIG Release, and what I was curious about is whether it is actually possible today to use the tool that we use to generate release notes for the kubernetes releases on any repo, and apparently it is possible, which is pretty awesome. This is what the output looks like. There are a few things that we might want to do to clean up what the issue templates or the PR templates look like, but I think this is...
A: So I have an active PR here, and something needs to happen with it: I either need to decide if I'm closing this out or rebasing it. Essentially it does most of the work to support existing resource groups; I just need to make sure it still fits in line with what the v1alpha2 work is.
A: It's essentially making the resource group a type instead of just a pointer to a string: it becomes a ResourceGroup with a name or an ID, and then an enabled or isEnabled field or something like that, which should probably be switched to a pointer to a boolean, and then the logic needs to be rewired a little bit. But okay.
A: Okay, yeah, so it's basically an ID and then managed. And managed... none of this is true anymore now that I know more, but basically managed tells you whether or not it should be managed by Cluster API, within the lifecycle of Cluster API: so when we clean up, we don't accidentally delete your resource group that has more stuff in it. So yeah, does that sound good? Yeah.
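A rough sketch of the shape being discussed; the field names are illustrative of the direction, not the merged API:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: AzureCluster
metadata:
  name: my-cluster
spec:
  location: eastus
  resourceGroup:
    name: my-existing-rg   # expanded from a plain string into a struct
    managed: false         # pre-existing group: Cluster API must never delete it
```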
A: This is probably a longer discussion. It's about how the underlying services are implemented underneath the reconcilers, the machine and cluster reconcilers: talking about whether or not we should have a reconcile/delete pattern for each of them, and that would kind of require us to tweak some of the way the services are defined.
A: So if something was in a certain zone, it would plus-one that score, and then basically every time you were going to create a control plane, it would look at that score again and decide whether or not to increment it. But I'm not sure that we want to do that, so we can take a look at that later.
A: The sig/azure label doesn't exist in the repo; that is true now that SIG Azure has transitioned into a sub-project of SIG Cloud Provider. The reason the sig/azure label was in this repo was that it was migrated in as a sub-project of SIG Azure; since that has folded, I've recently made sure that we are now migrated under SIG Cluster Lifecycle. If you want to read the details on that one, it is issue 284 and the associated PR there.
A: Let's see, okay, alright. So, a few things to do: docs are in progress on my side. Control plane NSG is open to the internet: I'm not sure if this is actually still true. This was opened a bit ago, and it kind of dovetails with the discussion about the bastion hosts. Just for context, the bastion stuff had stalled out a little bit; we initially had a PR open to work on bastions, but they're kind of in tandem.
A: There was the release of Azure Bastion, but I'm kind of conflicted, because Azure Bastion is currently in public preview, I believe, and not available in every region. So I don't want us to create logic for something that everyone can't have. We should talk more about how we want to attack this. But if someone wants to... I know, Ace, you had mentioned something about it. Do you want to check in on this, Ace?
A: All right, all right. So if you want to pick it up, feel free to assign yourself, but no pressure. What am I doing? Okay, alright. Actually, the next one: cloud provider configuration. I need to add some details to this, but essentially the preKubeadmCommands also land a cloud provider config on each of the nodes. We should have a way of configuring that stuff, whether it be by... I think in 1.15...
A: So we have to figure out the strategy that we want; we cannot keep the cloud provider config in the preKubeadmCommands. I think this might become a larger question as well, towards how we want to handle that across all providers, because I'm not sure that there is a solution for handling cloud provider config for all providers just yet. Maybe that becomes another bootstrap provider thing, Vince.
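One hedged illustration of the kind of alternative being hinted at: delivering the cloud provider config as a file through the bootstrap config rather than generating it in a preKubeadmCommand. The path and JSON fields here are illustrative, not quoted from the meeting:

```yaml
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2
kind: KubeadmConfig
metadata:
  name: my-node-0
spec:
  files:
    # Write the Azure cloud provider config onto the node at boot.
    - path: /etc/kubernetes/azure.json
      owner: root:root
      permissions: "0644"
      content: |
        {
          "cloud": "AzurePublicCloud",
          "tenantId": "00000000-0000-0000-0000-000000000000",
          "resourceGroup": "my-cluster-rg"
        }
```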
B: The point of this is to avoid conflicts between controllers. I mean, how did AWS resolve this? This was an issue there, and I remember there were some ongoing PRs to fix it. The cloud provider can pick... well, this was an issue with KCM and CAPI conflicting with each other, right?
A: The fact that we can't see the config is a different part; that's what I'm referring to. In the AWS one, I believe they just reference the cloud provider as aws, and I believe they'll just leverage IAM to do some of the bootstrapping in terms of metadata. So we should figure out if we want to do the same. I think it would be fairly simple for us to wire up the... the IMDS, I always get the... yeah.
A: Yeah, IMDS. So maybe using that mode instead, and having things get picked up from the cloud provider config there. There's also... I think we're at the point where we can also turn on the system-assigned identities, if people are interested in doing it. But I want to give anyone an opportunity for closing comments, because we are one minute over.
A: We had multiple people not understanding what to do, working on the same stuff at the same time. All right, so we're only now in a place where those things can be properly vetted out and helped along and all that good stuff. I think we were feeling it out before then, but with the v1alpha2 work we can actually describe what needs to happen, so that is coming soon. Cool, all right. Well, thank you everyone for being part of the inaugural CAPZ meeting. CAPZ made it great.