From YouTube: SIG Cluster Lifecycle - Cluster API 21-08-11
A
Hello everyone, and welcome to the Wednesday, August 11th Cluster API office hours meeting. Cluster API is a project of SIG Cluster Lifecycle. We have a meeting etiquette: if you'd like to speak up, use the raise-hand feature under the Reactions at the bottom of your Zoom window. Feel free to add your name to the attendee list; I'll post the link in chat.
A
And yeah, let's get started. So I guess one PSA is that we're prepping for 0.4.1. There was one bug fix in controller-runtime which we just pulled in, so that should be ready. There are a few PRs that folks have expressed interest in getting in before the release.
A
I don't know if they all matter. Yeah — status.version in KCP. It's not really release-blocking; it's just a really nice-to-have before the release actually happens, and the PR is already out if you want to go look at it. It's a way to say: this is the minimum version that's running across all control plane nodes in KCP. And it also makes KCP compatible with ClusterClass later on, when ClusterClass becomes available. That's the only PSA I had for 0.4.1 — any questions on that?
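The status.version idea can be sketched roughly like this (a minimal illustration; the exact field shape is whatever the PR under review defines, so treat the names here as assumptions):

```yaml
# Sketch: KCP reporting the minimum Kubernetes version across its
# control plane machines, separate from the desired spec.version.
apiVersion: controlplane.cluster.x-k8s.io/v1alpha4
kind: KubeadmControlPlane
metadata:
  name: my-cluster-control-plane
spec:
  version: v1.22.0      # desired version for the control plane
status:
  replicas: 3
  # During a rolling upgrade some machines may still run the old
  # version; status.version reports the lowest version actually
  # running across all control plane nodes.
  version: v1.21.2
```

The point made in the meeting is that a consumer (such as ClusterClass reconciliation) can rely on the status field to know what is really running, rather than trusting spec.version mid-upgrade.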
A
All right — let's cover the open proposal readout. Let's see: ClusterClass is merged.
B
Go ahead — yeah, I'm in the process of rewriting the scale-from-zero stuff right now, based on the comments we had before, and I'm running into some issues. So I've got a topic later to kind of talk about this, but that's where scale from zero is right now.
A
Cool, thanks Mike. So let's proceed to the discussion topics. It says here — go ahead.
C
Yeah, I added this, but it's not really my discussion topic; I just remembered that we should talk about it since we mentioned it two weeks ago. We had this discussion thread in the beta discussion — if you can scroll down — actually, yeah. So basically, we started this discussion in March about how we should think about the roadmap to beta and v1 after we released v1alpha4. And now v1alpha4 is behind us.
C
We should start thinking about what our plan for beta is, and whether we need to do another alpha. I know Vince mentioned maybe doing an alpha 5, at the meeting three weeks ago, to be able to do a breaking change. I don't remember exactly which breaking change that was, but there was some breaking change that we needed. So yeah, I just thought we should touch base on this.
A
Yeah — for the breaking changes, I think the load balancer provider was the biggest one, and potentially external etcd as well. For the load balancer provider, we might have to change how the APIs work together to support different load balancers, and then there are the contracts that we have with the infrastructure providers. For external etcd...
A
We'll need to make sure there is some support for that on the Machine objects, given that they won't necessarily become Kubernetes nodes — that's probably the only exception you'd have to make there. And then, after alpha 5 — every time we cut an alpha there are a lot of chores that need to be done: the conversions, the support for the older version while we develop a new version that is backwards compatible, the bug fixes, etc.
A
So I kind of want to delay this until maybe the ClusterClass work has finished a little bit more and the feature gate is actually there and functional.
A
But in terms of timeline, I don't have strong opinions, apart from what you said: beta is probably not going to come this year, given that if we have to cut another alpha, it's probably going to be early next year. Maybe — I don't know.
C
So I guess my question is: how do we know this list of priorities is not going to expand? Because those are our priorities right now — the things on our radar right now — but where does this stop? Where's the line? Because if we add some new proposal, maybe next month, that we want to get in, then that gets added to the list, and it's kind of never-ending.
D
Hey — so I have, hopefully, a simple question. I think, industry-wide, one of the things you are telling potential users when you label something alpha is "don't use this in production." And I think that in the past 20 years, with the help of our friends at Google, beta has become sort of understood as meaning "you may use this in production." So — are we telling people not to use this in production?
B
Do you want to respond? I mean, I don't necessarily want to respond to Jack's question, but I will just kind of back it up.
B
You know, internally at Red Hat we've had discussions about increased Cluster API usage, and that topic has come up several times, usually from people who aren't necessarily in the development cycle. They first become very nervous about the alpha status of it, and then they ask us: is this ready for production use now, or should we continue to delay on it? So I have the same question that Jack does, and I'd be curious to hear other people's responses.
A
Yeah — so I don't think we want to discourage folks from using this in production. In fact, there are lots of folks that I know are using this in production, and our support has been even more than what the Kubernetes contribution guidelines for APIs require, especially for an alpha.
A
The reason the APIs are still alpha is probably because we're still not sure we want to commit to this long term. At the same time — and this is how this all started — we need to make that leap of faith at some point, and there are more betas that we can cut along the way. And when we get to GA, if we're not happy with v1, we could cut v2, right? There is always a new version that we can cut over time.
A
The thing is, there is quite a bit of stuff that we have discussed right now which — to Cecile's point — do we have to block beta on these features or not? And how do we not keep adding to it? That's the other...
A
...worry, I guess. And I'll add to that: as we were discussing, the project is growing, but we also need lots more reviewers, and more approvers, in certain areas of the code base. As the code base grows — as I was joking, there are like 10 repositories inside the one main cluster-api repo, which is probably close to true — we also need about as many owners in those parts of the code base. Jack?
D
Cool, thanks for all that. I'll just say one last thing, which is that I don't think whether we're in alpha or beta — or, as you say, Vince, whether we're actually at a one-dot or a two-dot — materially influences our ability to move new features into the project. There are sort of surface areas well enough understood that we can continue to do that, whether we're in alpha or whether we're in beta.
D
And so, if we're comfortable with that, then I think we should stay at alpha. But if we actually don't want to communicate that, then I think we should strongly consider disentangling all this stuff in this conversation, because we can continue to cut a million beta versions.
D
I think we want to decompose all the features that we want to do from the alpha/beta question — this is my view — and focus on whether or not we want to be communicating to potential customers that this is ready for production use. And if we do, I think we should just make a beta date and meet it, based on the features that are currently in the project, and then line up all these additional features for the next beta, beta.2, or whatever.
C
Yeah — in my opinion, from a code perspective and a code-stability perspective, we're already operating at beta, and we're already keeping the support matrix as if it was a beta API. And I think a number of us have had to tell people: don't worry that it's alpha, people are already using this in production. So I don't think keeping it as "officially it's alpha, but really it's more like a beta project" is fair to our users, and it's not very transparent.
C
So I think we should just call it what it is. If anything, I think what's keeping us from beta, like Vince said, might be more of a process and people problem: we don't have enough maintainers and reviewers, and we're maybe a bit short on that side. But I don't think code stability is the issue here. I don't know if that's a controversial opinion, but yeah.
B
So yeah, I was just wondering if we could take another look at this too, because I can totally get what Jack and Cecile are talking about from the perception side of this. But I'm kind of in the middle, because I can also totally respect what you're saying, Vince, about wanting to get these things out before we get out of alpha. I'm just curious — on our Cluster API book site...
B
...could we maybe address some of this with a little bit of marketing? Could we reach out to our partners who are using this and get people to step up and display their status on that page or something? So at least as users come to the project, they would have a clear impression of: okay, I know that these people are using this in production now. Just a thought.
A
Yeah, I think that's a good idea — to maybe showcase what folks have done. And maybe we can also start linking all the talks that we have done, or that other people have done, on Cluster API, and all the news coverage there is. When we had that Deutsche Telekom demo, that was great, right? So we could maybe link to that as well.
A
So what I'm hearing is that we should be less afraid to get to beta. At the same time, we still have a people problem, where we don't have enough hands on deck to maintain the whole set of APIs and code.
A
We can also think about moving types at a different pace — so, for example, having more groups of types inside of Cluster API that are not all at beta. It will complicate the mental model, because right now it's really simple: everything is alpha4 and that's it. And it will make it a little bit more complicated to support multiple API versions over time, especially when you receive bug fixes — then you have to ask: what API version are you on, and what semver version are you on, at the same time?
A
There are things we need to put in place. I'm personally in favor of saying that we want to move to beta instead of doing another alpha, and if there are breaking changes, we can either make another beta or introduce breaking changes with minor releases. Because right now a major blocker is that we keep publishing patch releases and we can never make a breaking change, since every minor release also has to be, for example, another alpha release. So, by publishing 1.0...
A
...I think with v1beta1 types we could solidify the types that we have today, and we can keep making more breaking changes, but spread out over time — not these big releases that have so many breaking changes that our migration guide is like 10 pages long, for example. So I see a lot of benefits there as well.
C
Yeah — it would also be great if we're able to get to a model where we do patch releases that are actually patch releases — just bug fixes — and then have more stable minor releases that carry the features. But we can have two minor releases that are the same API version; a minor just signifies more changes.
A
Yeah — or, to complicate things, you could have a minor release that introduces new APIs, or upgrades an API from beta1 to beta2, for example. That's also allowed; it's how Kubernetes does it. I think the only worrisome thing is that we may need to diverge a little bit on what the support matrix is going to be for APIs.
A
As an example, Kubernetes says beta APIs have to be supported for at least one year, if I remember correctly. If we start cutting a lot of betas over time, like we have done for alphas, that's going to be a lot to support — so maybe we need to diverge a little bit, at least in the beginning...
A
...while we get new maintainers who can support the deprecation policy that we put in place. Because personally, I'm mostly in favor of keeping the current model, where we only support the stable version of the APIs and the previous one; supporting more than that is going to be a lot.
A
Talking about the timeline: if we go to beta, what do we need from, for example, this list of priorities that we absolutely have to get done, and how do we timebox these things as well?
B
So, just looking at that list you're highlighting there: the load balancer and the machine pool ones seemed pretty — I mean, load balancer seemed high priority, and machine pool seemed to have a lot of interest. I don't know that it's necessarily a blocker for going to beta, but it seems like it'd be a nice-to-have. The other ones I'm less certain about; they seem like they could be lower priority. The kubelet secure authentication one sounds a little scary to me.
A
This one — the proposal has already been merged; we just need to work on it. And this is one of those things that can actually live in its own API group. It doesn't have to live in the main cluster-api group.
A
The ClusterClass work is already underway, and we also have feature gates in place for that, so we can promote those over time in new minor releases. The operator — I see there's a lot of work going on right now; I don't know if this is a requirement for beta, though.
C
Yeah, I think we should look at it not as how complex it would be to get done, or how far we are from getting it done, but more as: which one of these is fixing a problem that is right now making us not beta-eligible? If we were to go beta without any of these, what would be the consequence? And so I think maybe kubelet secure authentication is one we should consider as blocking, just because of its implications. The other ones, I'm not sure...
C
...either. I mean, maybe if we consider UX to be a beta-blocking feature — I'm not sure if there are any documented requirements for UX as far as beta graduation policies go, but that's something to look into.
A
Then the load balancer provider — honestly, I'm on the fence about whether it has to be in there. Yeah.
A
It's a lot. Nadir, do you want to just...?
E
Yeah, sure. On the load balancer: I've been taking a bit more of an interest in Azure recently, and it seems like the load balancer proposal — at least in the way that Joel proposed it, with the separate control plane endpoints — potentially resolves some UX issues with Azure private clusters, because of some of the dance we have to do around network hairpinning, as I understand it. So it removes the customers having to try to set up resolution towards that private DNS zone that we use in Azure.
A
Makes sense. How do we feel about the Cluster API operator, though? It's the only one in this list that we haven't touched yet.
B
Yeah — I mean, I think the operator is cool and it's going to add a lot of functionality, or a lot of ease for users, basically. But I don't see any reason why we should hold off on going to beta because of that. The more we dig into these things, I just start to wonder how much we're spinning our wheels here before we just decide to go to beta.
F
So, first of all, I'm trying to keep notes, because this is an important discussion — it is really difficult for me, but please take a look at the notes later. Two comments in terms of what we need. I think that, according to the discussion and also from what I think, the move to beta is mostly to give a signal to the user, because we are already acting as a production-ready project. So I think Cluster API as a project is already production-ready.
F
Our API is not yet graduated, so we have to give a signal. In my opinion, the most important thing — given that it is a signal for the user — is to address UX, which means ClusterClass. And the nice thing about this is that ClusterClass basically hides from most users all the internals — of, I don't know, load balancers, stuff like that — so those will become more or less non-user-facing, and this will give us some room for doing the load balancer and authentication work behind the scenes in follow-up iterations.
A
Awesome, thank you — thanks for the great discussion. Are there any other things that we should talk about? Does anybody actually have any timeline in mind?
A
I'm kind of thinking Q2 next year, just so... because — Jack?
C
I'd love to hear if there's anyone on the call who has any dissenting opinions — anyone who thinks we should not go right now, that we're not ready, that from their experience using it this is definitely alpha and needs more work. Do we have anyone like that? Please come forward — or you can come forward async too, if that makes you feel more comfortable — but we definitely need to hear those opinions too.
G
I think the most important part which is currently alpha is our policies around support. What do we test? How long do we support which release, and stuff like that? I think we should write that down somewhere and get consensus, because every time I have to update the book with which versions we actually support or test, we just don't know.
A
Yeah, I think that's also a good point on support. I was also thinking: right now we have actually never put a ceiling on the Kubernetes versions that we support or test, and some folks have asked — if you haven't tested, for example, the new version... I know that we have intentions for it, but should we put a ceiling on the version that we support? That's something to think about — for v1alpha3, for example.
A
I don't think we have time right now to cut a new API version based on the current types. We could try to see how we're going to do by the end of the year, if we are to do this: so, pretty much, take the current types...
A
...the current work that's in there, fix whatever can be fixed in terms of the new features that we want to add and the breaking changes that we want to make, then cut v1beta1 types and publish the 1.0 release. Go ahead.
D
Yeah, cool, that makes sense. I mean, I think we can set aside any differences in timeline expectations or desires. It sounds like, based on this conversation, the biggest gap we have for going to beta is the realization that it's going to require us to help our users more — and maybe the observation that we don't have enough resources to do that successfully.
D
So maybe we can accelerate the beta process by figuring out ways to basically take on more support overhead in the project. If some of the folks in the project who want to cut to beta more quickly than others can do the hard work of getting more support resources into the project, then everybody feels better about the fact that we can support folks.
C
Yeah, that makes sense to me. I was just going to say: maybe we don't need to decide on an artificial date right now, because it would be a little approximate. But what I think we should do is have a roadmap of things that we want to get in for beta, have that documented publicly so it's available to our users, and also agree on a target date — maybe soon — just so it's not a never-ending release cycle.
A
That's good — thanks for bringing this up. I think it's a good awakening that we want to do this. So if we do have the support of multiple folks, and the power to proceed, I'm all supportive — and that's why the discussion was started in the first place: this is long overdue. I think the project is mature enough; maybe our bar is a little bit too high at times. So why don't...
A
...we do this: as an action item for next week, I'll try to put a table together of all these items — what they mean, and the pros and cons of blocking beta on them or not — and then folks can just add to it. I'd just add it to the meeting notes. In terms of a freeze for beta...
A
...I think it's probably fair — I don't necessarily want to put too many dates, but by KubeCon, maybe, we don't accept anything new. I mean, even right now it's a little bit too much, to be honest, but I don't want to be excluding folks that might want to come in and contribute, if that makes sense. And we should also email the mailing list.
A
Cool — anything else? Also, thanks for taking notes; this was a lot of discussion.
G
Yes — so we've been working the last few weeks on the topology implementation, and we recently moved it into a separate package. I want to bring up the point of whether it makes sense to define separate reviewers and maintainers for it — and we would also have two candidates — just as part of scaling up the code base and the reviewer pool, because it's unrelated to being a reviewer for the whole repo.
B
Yeah, so this is kind of fallout from the scale-from-zero rewrite that I've been trying to do. We had been having a discussion about how we wanted to expose the resource, or capacity, information for machines. And in that discussion, it sounded like what we'd agreed on was that, ultimately, that information should be created by the cloud providers, because they'll have the authority to know what machine type matches up to what specifications — assuming that even exists for the cloud. And so the natural result of that was that we might put a status field onto the infrastructure machine templates, which would then be reconciled somehow by the individual infrastructure provider actuators, or something, to add that information.
B
But in looking through the various implementations we have, that seems like an anti-pattern in Cluster API: we haven't, before, reconciled those resources specifically, as far as I can tell, and there isn't currently a status on those objects. So I just wanted to talk to the group again to see...
B
...is there any consensus about this? Would we be okay pushing into the territory of allowing those infrastructure templates to somehow be reconciled and have a status that could be added to them by the infrastructure providers, or should I just avoid this methodology? And, you know, maybe we need to rethink this again.
A
So, just to clarify: this was a way for an infrastructure template for machines — which you can attach to a MachineDeployment or whatever else — to pretty much say: this infrastructure template, which is from a cloud provider (so like an AWSMachineTemplate or an AzureMachineTemplate), will give out information about CPUs and things like that.
B
Right
correct
so,
like
the
usage
from
the
auto
scaler
side
would
be,
you
know
it
sees
a
machine
set
or
a
machine
deployment.
It
tries
to
get
the
infrastructure
reference
from
that
record
and
then
looks
inside
the
status
on
that
to
see
what
the
specs
of
a
machine
there
is
so
yeah
like,
yes,
you're,
absolutely
correct.
The
infrastructure
provider
would
be
updating
that
template
to
contain
information
for
the
auto
scaler
to
use
for
the
scaling
operation.
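As a rough sketch of the shape being discussed (the field names here are illustrative — at this point in the meeting the final shape had not been agreed on):

```yaml
# Sketch: an infrastructure provider reconciles its machine template and
# records the resource capacity of the instance type in status, so the
# cluster autoscaler can size a node group that currently has zero
# machines (it cannot learn the capacity from a running node).
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
kind: AWSMachineTemplate
metadata:
  name: worker-template
spec:
  template:
    spec:
      instanceType: m5.large
status:
  # Populated by the infrastructure provider controller; read by the
  # autoscaler via the MachineDeployment's infrastructureRef.
  capacity:
    cpu: "2"
    memory: 8Gi
```

The autoscaler would follow MachineDeployment → infrastructureRef → template status, instead of requiring the user to annotate node groups by hand.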
A
So, I mean — go ahead. Sorry. No, I mean, personally I think that's actually a good way to go about solving this problem. It would require work on the infrastructure provider side; at the same time, I don't see a better way to do it that's composable like this, because we already have these types in place — we just need to add the status field and the RBAC rules. Yeah, and as he was pointing out, we already said it was okay a while back.
A
So this goes back into things like lifecycle hooks — APIs that you could put in place. But yeah, I want to hear from infrastructure provider folks.
B
Yeah — and I just want to say, the whole condition thing: it's interesting you mention that, because I was just talking with Joel today, internally. We were discussing the notion of putting a status on these templates, and how previously there was no status, so the assumption was they're not really being reconciled in the normal way. And I was saying, yeah, but in the future you might want to put some sort of condition on there to indicate something about it. So it's just interesting.
E
Yeah — I think it's just that we've never written a reconciler for those template resources. I mean, it's fine. I think the only difficulty that would come up is if the status of a template relates back to — sorry, one more thing — if the state of the template relates back to doing some sort of query against a cloud provider, and then how that would tie in with the multi-tenancy stuff: relating that back to a cluster resource, etc.
E
But if I'm thinking correctly, I think kOps and the cluster autoscaler have, like, a statically compiled library — for AWS at least — which actually has all the information around the resources. And, back in Santa Barbara, we kept — I mean, we've been nagging about moving that to the cloud providers, as a library that you could import anyway, like as a package.
B
Yeah, that sounds nice. And I know this isn't the same for everything — so on AWS you just have some identifier for the machine, but on vSphere you actually do know the CPU and the memory and stuff, so there it would just be a matter of copying them into the status or something. Okay.
A
Yeah, I think that makes sense. The only thing is, we need to make sure that this is optional — like, if we don't find that information...
A
...that's it, right? Yeah.
B
And we have that in CA — because I have a patch for the autoscaler where I'm already kind of doing this scale-from-zero stuff, so the mechanics I already have in place do just that: if they don't find this information, they just assume that that node group cannot scale from zero, so it won't allow you to take it down to zero or bring it back up from zero.
zero.
A
The only problem I might see, though, with having optional reconciliation of it, is that you do have a race condition: when the cluster autoscaler goes and looks at this unstructured object and it doesn't have any status field, you don't know whether the status is not there because it hasn't been populated yet, or because the object doesn't actually have a status with a reconciler behind it. So there is some sort of reconciliation that needs to happen if that information comes in later, for example.
B
Right — and that's just a gap that would have to... you know, I'm not sure how we could tighten that up, because from the cluster autoscaler side, it will continue to try to do an expansion. It would keep trying to do this on average once every 15 seconds, unless the user changed the update interval. So, from the autoscaler side...
A
Thanks, Mike. Any other questions on the status field for infrastructure templates?
A
Yeah, I don't think we've actually even discussed this. There are quite a few things that rely on the provider ID — the identifier — and I know that they can be different.
A
The thing is, the provider ID structure says the identifier is at the end, and as Nadir pointed out, that's not necessarily true for machines in multiple regions, which is definitely not ideal. So, I don't know — has anybody else thought about this? I don't think we've talked about it enough before, but we can improve things around it.
I
Yeah — so I think the big thing is that it doesn't match what the cloud provider actually sets right now: our comparison and what the cloud provider sees are not the same. So what our package returns as a matching node is not what the cloud provider sees as a matching node. That's kind of the bug here.
I
From my perspective, yeah, it's due to exactly what you were just saying: the uniqueness that we're relying on is not a generic contract that Kubernetes enforces, or that cloud providers enforce — it just happens to be what AWS supports, I think.
A
Yeah — so I think the biggest challenge here is probably that this is an unstructured string, right? You don't know that...
A
I guess you do know, but at the same time you don't, because the mechanics of the string could change between one version of the cloud provider and the other. And what I'm interested in is: what happens if there is the same number of path segments — which we can split on — but the meaning of each segment changes over time?
A
I can clarify the bug, if it helps. Yeah — the actual provider IDs on the nodes, on the machine objects, and on all the CRDs are correct. The way that we're doing provider ID comparison is not accurate to what the cloud provider sees — so we will return true for two instances which are not actually the same; they represent two different nodes.
I
So we do, I believe — I think what Vince is saying is that the issue is, for AWS for example, there can be provider IDs that are region-specific and can have different shapes, and we're kind of trying to do a structured comparison that accounts for that. But in doing that sort of structured comparison, we're relying on a contract that is not reliable across cloud providers — it's sort of AWS-specific. But AWS kind of does need that logic for the comparison to actually work. I do think there is a path where maybe we just use exactly whatever the cloud provider uses. I'm sure there are other components in the ecosystem — add-ons, PVs, stuff like that — that have this problem, so I'm sure there's some prior art. I think the Cluster API-specific issue is that no one has the context from all the providers. So maybe we could just tag someone who knows it for each one, and we could discuss it on the thread there.
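A minimal illustration of the kind of collision being described (the instance IDs and regions here are made up):

```yaml
# Two distinct EC2 instances whose providerID happens to share the same
# final path segment. A comparison that only looks at the last segment
# (the "identifier at the end" assumption) would wrongly treat these two
# nodes as the same instance, while an exact string comparison — which is
# effectively what the cloud provider does — keeps them distinct.
apiVersion: v1
kind: Node
metadata:
  name: node-a
spec:
  providerID: aws:///us-east-1a/i-0123456789abcdef0
---
apiVersion: v1
kind: Node
metadata:
  name: node-b
spec:
  providerID: aws:///eu-west-1b/i-0123456789abcdef0
```

This is why matching only on the suffix is not a safe generic contract across regions or providers.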
A
Yeah — for what it's worth, the provider ID is synced back up from the infrastructure object's spec.providerID, so it's whatever that sets. Yeah, we can...
A
I guess we could fall back, but we need to make sure that either we compare the exact strings — and then they always have to be the same, otherwise the provider IDs will never match — or... yeah. This is a behavioral change, though; this needs to wait for beta, like 100%, once you think through all the implications of doing this change.
A
Yeah, I can double-check, but if I remember correctly there are some behaviors relying on this. Maybe they're internal behaviors and maybe that's fine, but there are definitely indexes internally that rely on this exact comparison. Alright — why don't we add more... so maybe, Ace, can we add more documentation here on what the current state of things is and what we want out of it?
A
What is the preferred solution for this issue? And then we can think about whether it's actually a breaking change or not, because we also need to think about the current clusters that are using this.