Description
A Kubernetes community meeting about the Azure provider for Cluster API. Cluster API brings familiar, declarative APIs to Kubernetes cluster creation, configuration, and management.
A
Hello, everybody, welcome. This is the Kubernetes Cluster API Azure provider weekly meeting; it's June 15th, 2023. Thanks, everyone, for coming. We are a subproject of SIG Cluster Lifecycle, the Kubernetes special interest group, and as such we abide by their general rules of conduct, which you can read about through the link at the top of the document. Basically, it boils down to: everybody try not to interrupt each other, try to be polite, and try to raise your hand in the Zoom meeting.

A
So we don't talk over each other. At the beginning here we usually take a minute to let anyone who is new to the meeting, or who wants to introduce or reintroduce themselves, say hello. I'm going to call all of you The Usual Suspects, so I don't think we have anyone new, but if anybody wants to do that, I will be quiet for a few seconds.
A
Let me make sure I can see if people raise their hands; that would be helpful. All right, I don't think so. So let's move on. If you want to add your name to the attendees list, that can be helpful. We don't have a big agenda here, but as usual it is filling in in real time.
A
So the first thing is mine. I'm asking a probably rhetorical question: should we do CAPZ patch releases? Today we do have a fair amount of stuff in there (sorry, I should have linked to it), and from my point of view the only outstanding thing is that it would be nice to get the CAPI 1.4.3 PR in there, since it's been ready to merge for a while.
A
If anybody wants to take a look at that, I think it's ready to merge, and then we could put that in the patch. But either way, I feel like we should do a patch today. Does anybody have any...
A
Plus, that's a fix for what Dayton pointed out last week, so hopefully that'll close that. So yeah, I think we release whether or not the 1.4.3 bump is in there. So I guess I'm saying I'll wait until noonish or so and see if that merges, and then we can do a patch release; I'm happy to do it. Unless people want to pair on it; we could do it in a Zoom meeting. It's not really that complicated, but it's good for people to see how the sausage is made.
A
Yeah, and then as well, it's probably a discussion for another day, but Jack was pointing out in the chat that maybe this is the kind of change upstream that should be a minor release rather than a patch release. I guess we can talk about that at some point in CAPI, but it certainly surprised us; it was a little more work than we expected.
C
Thanks. Speaking on the lazy consensus subject: we have one implementation here, so I think we are there very soon. Thanks.
F
Yeah, so we discussed last week that we weren't going to do a lazy consensus; we were just starting the... we actually started lazy consensus last week at the office hours, and we were just waiting for English's approval, since this has been open for over two months now. It's like, if you haven't started reviewing it, it'd be good to not start reviews at the last hour.
F
So, that being said, we do want to keep it open just for one last day. So if anyone wants to, people who have been reviewing can do one final pass, or anyone can do a drive-by review; that's fine. So I just approved it and put it on hold, and I'm planning to remove the hold, or either me or Ash can remove the hold, tomorrow morning or tonight, depending on time zone.
E
Yeah, let's maybe pause for objections, and then... but I'll just say preemptively: congratulations! Thank you so much.
D
Yeah, before I open an issue on this, I just wanted to take a quick minute and ask if maybe this was intended. There was a change, I think about three months ago, to the defaulting webhook; I think it's something with setting the user-managed identity, and a couple of fields were deprecated for system-managed identity.
D
But what's interesting is that means the MachinePool needs to be created before the AzureMachinePool, and at least in our Helm flow those are created together, and it seems like Helm just happens to be building the AzureMachinePool first. I don't know if that's an alphabetical-sort thing, since it starts with an A, so it's doing that one first, but it prevents the deploy from working.
D
And my question is: is that a bug, or is that an ordering that we want to basically enforce, to ensure that the MachinePool exists first?
B
Yeah, I remember making this refactor, and I don't think the intention was to always enforce some particular order, but I'm not sure if it's just a CAPI thing to always assume that the MachinePool is created before the AzureMachinePool. I guess someone else can correct me on that, but I can definitely take a look, because I think I was working on that refactor.
B
So I'll take a look at the PR or the issue.
D
Okay, why don't I... it sounds like it's probably an issue, then. I think in general we don't enforce that kind of order.
F
Yeah, I think this is one of the few places in webhooks where we have an object whose validation depends on another object. In this case, I believe it's because we need the subscription ID: the only reason we're getting the MachinePool is to get to the AzureCluster, so that we can get the subscription ID. And yeah, I think it's not ideal, because it can lead to race conditions like you observed, where one object gets created before the other, and I think that's why the retry was added.
F
That's why there's a WithRetry there. So sometimes, when you're applying them concurrently, it will fail transiently until the other object finally gets created; that's what that retry is for. But yeah, it's not great. I don't know if we have a good alternative; I think the alternative is to just not validate this in the webhook and validate it in the controller instead.
A
All right, this rings a bell, and I'm totally missing the context, but I feel like it might have been the same area. A long time ago I implemented a webhook that needed to look up from the machine through to the AzureCluster, because we needed to check the version of Kubernetes and disallow it, and I had to figure out how to do an indirect lookup like this, and I thought...
A
I
made
it
optional
like
it
was
the
best
effort
thing,
because
it
was
a
validation
where,
if
the
version
of
kubernetes
was
too
old,
then
we
were
going
to
reject
building
the
cluster
because
vmss
Flex
was
turned
on.
That
was
it
and
it
might
still
be
in
here,
maybe
I'm,
just
mumbling
about
the
same
code,
but
I
thought
our
resolution,
for
that
was
just.
It
should
be
a
best
effort,
validation.
A
You try five times, and if we can't actually get the AzureCluster, then we don't really know the status, so we'll just let it pass and have an error later.
B
We have something... yeah, sorry, I just wanted to chime in. I don't know if a best-effort model would work here, because I think before, when the subscription ID was there, we were always setting a default scope, and we needed the subscription ID to do that. So I think it would change the default behavior if we did best effort. I kind of forgot how I was getting the subscription ID before; maybe it was doing it in the controller.
B
I'll have to take a look, but maybe we can just move this defaulting, the setting of the default, somewhere else, like in the controller. Cecile confirmed it was in the controller, so maybe we can just move it back there, and that'll just be the issue.
A
All right, next up: Benny, do you want to talk about the node OS caching proposal?
A
Yeah, sure, I just have to figure out how to make you host again, or co-host. Here we go; okay, that should do it.
G
All right, so I want to talk about this proposal that I've added in PRs for node OS image caching. I'll just get into it, actually: it's essentially this idea that, on a constant interval, we're going to cache the image of a node's OS, and then convert whatever different AzureMachines and AzureMachinePools are using, so they spin up new nodes off of that OS.
G
So I just wanted to kind of go through the proposal and see if anyone sees any glaring issues. There are a couple of outstanding things that I haven't had time to fix that are still problematic, but I'll note them as we get there.
G
There's a general glossary of what the node prototype pattern is, what warm nodes are, shared image gallery, snapshot, prototype node; that's general background. And then the summary of this is essentially just that we will be changing the current existing controllers so that we cache the node's OS image on a regular interval, and then update the model to use that image for future scale-outs; so, ideally, we'd have faster horizontal scaling.
G
We'd prevent this security issue where a newly spun-up node immediately needs various security patches, and it helps prevent people from needing to spin up lots of warm nodes and over-provision to deal with this problem. A model scenario would be someone spinning up the cluster and having this feature be toggleable.
G
If it's toggled on, then as months pass and updates accumulate, they would be cached and applied regularly, so that nodes wouldn't have to constantly keep fetching them every time they spin up; it's similar to what I said before. And then the goals are to create the solution to have this faster horizontal scale-out of applications and to prevent security issues.
G
Certain things are out of scope, like Windows features, which just might not be feasible for the initial proposal, and certain optimizations. There are also more complex metrics for selecting a candidate node that might be more optimized, and automatic bad-snapshot rollbacks might be difficult to do. Yeah, in terms of user stories, it's again this faster horizontal scaling.
G
So the current plan is to modify these controllers and allow for the feature to be toggled on or off over the entire environment, over the entire CAPZ pod as a whole, with the environment variable being set on clusterctl initialization; and then, on a per-cluster basis, it can be enabled by switching different fields in each of these resources.
G
So here's an example of turning the environment variable on. The thing I wanted to note, that's still an outstanding issue, is that a lot of these timestamp details are kind of funny right now, and there are notes in the changelog that John put in about how these should be resolved. Just as a quick...
G
Sorry; specifically, down here, these timestamps are problematic. And then, in terms of this process, I'm just going to show a figure of what happens, that's linked in this document: essentially, you choose a healthy node, you shut down the chosen node, you snapshot the chosen node, and then you restart it, create the shared gallery image from the snapshot itself, delete the snapshot, and reconfigure everything to it.
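The step sequence in that figure can be sketched as a tiny state machine. The step names below come straight from the description; the function is a stub (a real controller would call the Azure compute and gallery APIs at each step), so this only illustrates the order of operations:

```go
package main

import "fmt"

// runCachingCycle records the proposal's snapshot workflow in order.
// Each step is a stub; a real implementation would call Azure APIs here.
func runCachingCycle(log *[]string) {
	steps := []string{
		"choose a healthy node",
		"shut down the chosen node",
		"snapshot the chosen node",
		"restart the node",
		"create the shared gallery image from the snapshot",
		"delete the snapshot",
		"point the machine / machine pool model at the new image",
	}
	for _, s := range steps {
		*log = append(*log, s)
	}
}

func main() {
	var log []string
	runCachingCycle(&log)
	for i, s := range log {
		fmt.Printf("%d. %s\n", i+1, s)
	}
}
```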
G
That's the general idea of it. There are lots of little details in the YAML file examples there. And then, in terms of the security model: there are certain risks of bad snapshots being taken, or a bad thing being applied, and then the user will have to essentially look at this, or we'll have some sort of detection of the bad snapshot itself; in which case there aren't current alternatives that have been made for CAPZ.
G
Oops. We're going to make end-to-end tests. Some of the graduation criteria need to be scoped out more, but there are different fields that we'd move for things to be more integrated, though they're currently supposed to be in a different place to be less breaking. Yeah, that was definitely quick and dirty, but yeah.
H
I want to start off by saying this sounds pretty awesome; thank you for your hard work on it. I was reading through very, very briefly now, I...
H
I was just saying this looks really awesome, so thank you for putting this together. I had a quick question about when you're saying enabling it: is it enabled per node pool, or is it enabled per cluster?
H
The reason why I ask is because I could see us having node pools where we wouldn't want it to be cycled on a regular basis, perhaps along the lines of the more critical things. Yeah, Jack, I was definitely going to mention it; I added a comment on there about it. I'm just riffing here, because this is all new, but I could definitely see a special advantage of having the cached images for some of our tenants as these get applied. So that's really cool.
H
The other question I had was about the TTL that's on there: when that TTL expires, or when that time frame gets hit, it basically goes through and does the full upgrade of that node pool, right, to apply that new cached image? It doesn't just have all new nodes, or scaling at that point, use the new image; it actually replaces all the existing ones, right? Yeah.
G
Ideally, it won't do that; there'll be some sort of scale-out so that it doesn't immediately have to update every single thing every time the image gets redirected. But I did not have time to fix that in the doc, in terms of making it more specific with a rollout strategy and things like that, because...
E
Cool, yeah, this is really exciting. I actually wasn't going to speak to that last comment by Mike, but I have a thought on that as well. I think we could probably add a brief paragraph in the proposal to describe whether or not the updates to the node pool are applied immediately to existing nodes. That's a configurable thing; as far as I can tell, Cluster API already provides sufficient interfaces that we can use: rolling upgrade, update strategy, all that kind of thing.
E
So I don't think you need to go into detail in the proposal, but it's probably a good call-out. Long term, it would probably be a great doc, a nice dedicated doc, to describe why you may or may not want to do that; so, obviously, the trade-offs, which you've probably internalized, Benny: having a maybe unnecessary amount of node replacement and thrashing, and workloads getting evicted, and all that kind of thing.
E
If you used a really aggressive rolling-upgrade configuration, every time you updated the machine pool with the new OS image you'd be rolling through all your nodes. So if you have an aggressive interval, it's going to be happening a lot, and you might not want to do that; but we might want to have a doc to help folks who would want to do that, either because they're primarily consuming security updates, or maybe a doc that describes how you could do that in a one-time situation.
E
So if you've got a cluster with a thousand nodes in it, and you want to optimize rolling out a security patch to a thousand nodes, an optimization, if you've got, say, three hours to do this work, would be to do the patch on one node, take the snapshot, and then do a rolling upgrade, as opposed to letting the vanilla apt-get upgrade or unattended-upgrades process go through, which might take 12 or 24 hours to propagate across your cluster.
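The back-of-the-envelope comparison here (patch one node, snapshot it, then roll the pool, versus every node patching itself) can be made concrete. All the numbers below are illustrative assumptions, not measurements from the proposal:

```go
package main

import "fmt"

func main() {
	const (
		nodes          = 1000 // cluster size from the example above
		batch          = 50   // assumed rolling-upgrade batch size
		replaceMinutes = 8.0  // assumed minutes to replace one batch from a cached image
		patchMinutes   = 1.2  // assumed minutes per node for in-place unattended upgrades
	)

	// Rolling upgrade from a pre-patched snapshot: nodes replaced in batches.
	rollingHours := float64(nodes) / batch * replaceMinutes / 60

	// Letting every node run apt-get / unattended-upgrades on its own,
	// serialized in the worst case by disruption budgets.
	inPlaceHours := nodes * patchMinutes / 60

	fmt.Printf("rolling from snapshot: ~%.1f h\n", rollingHours)
	fmt.Printf("per-node in-place:     ~%.1f h\n", inPlaceHours)
}
```

With these assumed figures, the snapshot-then-roll approach lands in the "a few hours" range mentioned above, while per-node patching lands in the 12-24 hour range; the exact numbers depend entirely on the batch size and per-node patch time you plug in.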
E
If each node that's online is manually doing the work... I don't mean to digress too much, anyway. But it's a super interesting topic, and the key point is that it's going to be really, really configurable, so we're going to want to reinforce it with a lot of documentation to help folks figure out how to configure it. And then the thing I initially raised my hand about was clarity on Linux versus Windows: are we explicitly kicking Windows out of scope, or is that a stretch goal?
A
Cool, pretty good stuff, Benny. If you have any more questions or comments on the proposal, anything...
A
Nope, that was all we had on the agenda.
A
I'm just going to put in "Windows port is a stretch goal". If anybody has any random topics, this would be a good time for that, or we can go on to milestone review.
A
This one we know pretty well at this point, and I have started working on it; I'm still trying to find the best approach, but I think we know the way forward, so that should happen soon.
A
"Reconcile occasionally fails to find cluster identity": this is a help-wanted bug, but it's on the milestone. I thought in general we didn't do that; does that mean, Cecile, that we need to assign it to one of us, or is it just hopeful that we're going to get it done for this milestone?
F
I think this is because there is a bug and that's something we wanted to look at, but I think at this point we're late enough that if it's not assigned, then it should probably be removed. It's marked as help wanted.
A
I don't care which way we go.
A
I guess let's just leave it and be optimistic; maybe someone will jump in, and we can kick it out next week. This one is obviously in progress.
A
Please just shout out if you have any comments about something I'm rolling over, or if I'm rolling over too fast. "Moving pool UX", wow; this is mine and I still haven't looked at it.
A
All right, the proposal we just talked about should merge very soon. "User identity in CAPZ managed AKS cluster"...
E
Yeah, I can speak to that a little bit. I'm going to prioritize this in the next couple of weeks before I go on leave for July and August. And is this an issue? There's the issue, yeah, right; there's a PR that, depending on the issue... there's a PR that captures this, or there is a PR that currently doesn't capture this, but we're considering expanding it.
E
Great, so I'm committing to moving this forward in the next couple of weeks, either finishing it or handing it off. In terms of milestone 1.10, we'll know more probably next week, okay.
E
Related-ish, yeah; should I leave it? Actually, that one, I don't quite know what the scope is; I don't quite know what that is versus the cluster's managed identity.
A
Okay, but we want to still leave it on the milestone for now, because you think there's a chance... yeah, we'll know more in a...
A
Good, this is about to merge, I think, right? No? So yeah, it's about right; right on!
A
This is the PR we were just looking at. These three things, I think, still have a fighting chance of making it, but I need to buckle down and finish the SDK v2 auth changes, so I'm going to leave those on for now.
A
Okay, Cecile still has the CAPZ VM extension stuff going?
F
It's going. I was hoping to have it done by this week, but I don't think that's going to happen; I'm still hopeful of getting it rolled out in time for the release. That's July 11th, right? Yeah, yep; I think we should be okay for the release. The rollout in progress is just taking a while because of safe deployment practices, which is good, but yeah, it just requires a lot of time to get something rolled out, right.
A
"Security rules don't get cleaned up on the Azure side": I don't actually know much about this one.
D
And I think I addressed nearly all the feedback on it. There is one weird thing in there where every service must implement IsManaged, even though it's only used by one service, and so there's a question about why the IsManaged call is stubbed, which I think I answered there. But yeah, otherwise I think all the other feedback was addressed.
A
They need some more reviewing, sounds like. Right on. And then this last one, I know this is well along, right? It was... so this is...
J
I think, can we check that map? Please... I think it's more of an epic; I'd split it out. Yeah, so this one actually is targeting getting the Azure CNI v1, with one network interface per node, merged.
I
If you want to go to the PR review board, you can maybe go to the ones that need review, or the ones that are in the "no status" column. So if you go to Projects...
I
And then you have to go forward and find the PR triage.
I
I think if you go to the "needs review" category, or the ones that don't have a category at all, like there's no status, if there are any...
A
In progress; oops, I just moved it up but didn't put it on the release. Oh well. Best integration with autoscaler, codespell, provider ID: I think all of these except the Calico one probably deserve to go on the milestone.
A
So
let's
say
we
disagrees,
I'll
go
ahead
and
do
that
probably
async
after
this.
Let's
see
why
it's.
I
Confusing to me, the codespell one... yeah, so this is just so that we identify PRs and where they're at; that way, "no status" theoretically means no one's even looked at it yet.
E
I would leave it out, yeah. I think, in my mind, this is closely related to Benny's work; we can support that once we get an implementation of the prototype stuff.
A
Okay, sorry.
A
That's not a CAPZ thing. All right, anyway: does anyone have anything on this list, or in the PR list or the issue list, that we need to update?
A
All
right,
what
else
anything
else
to
talk
about.