A: Hello, everyone. Today is Wednesday, July 28th, and this is the Cluster API office hours. Thank you for joining us; we'll go ahead and get started. As a reminder, please follow the CNCF code of conduct. If you'd like to participate, raise your hand and I'll make sure you get your chance to talk. Be nice to everyone, please be respectful, et cetera. All right, let's get started.
A: If you haven't already, go ahead and add your name to the attendee list, and let's get started. I don't think we have any PSAs. Vince, do we have an update on releases? I know there was a 0.3.21 patch that was released — and are we planning for 0.4.1?
B: It was 0.3.22, actually — we did a couple of patch releases one after another, because a bug fix came in late. And yeah, for 0.4.1 I think we're almost there; the only thing left is actually bumping controller-runtime, which fixes a CVE in one of its dependencies, around JWT.
B: It doesn't actually affect us, but we got pinged to just remove that dependency, so that's what we're waiting on right now — we're blocked on that.
A: Got it, thanks. Any questions about current releases from anyone?
A: Okay, and do we have any other PSAs?
A: Okay, if not, I guess — release-blocking issues. We shouldn't have anything in there, although we should probably track that controller-runtime bump there. Okay, I guess we do have one. Stefan, are you here? Are you able to tell us a bit more about this?
C: Yes, I'm here, and there's already a PR open for it. It's a bug in clusterctl: when somebody sets --target-namespace, it's only applied to the normal resources and the webhook configurations, and not to the CRDs. There's a PR open to fix it and it already has reviews, so I assume it won't take long to get merged.
A: Okay, great, thanks. Sounds like we should wait for that as well for 0.4.1, then.
A: All right, open proposals — I don't know if we have any major updates on any of these.
D: [inaudible]
A: All right, thanks — please take a look. And this is the link to the PR, correct? Yes, okay, perfect. Do we have any updates on the bouncer provider? I know this had been taken over by some people.
E: Oh yeah, what Joel said: there are no updates yet, but we are starting on it next week. We've been having some sideband discussions in Slack, so do reach out if you've got anything you want to add.
A: Sounds good, thanks. I think I'm going to move this — or I guess this can still be 0.4.x if it's not breaking, yeah. Okay, I'll leave it in here for now. But we should circle back on the spot instances proposal; it's been marked as needs-review for a while, so if anyone has time, please take a look. All right, let's get going on the discussion topics, then. Stefan, you have the first one — go ahead.
C: Yes — I already posted something about it on Friday, I think. So, the issue I opened: we currently have KubeadmConfig and KubeadmConfigTemplate as mutable — or let's say the webhooks allow them to be mutated — but we treat them as immutable, so we never roll out changes to those. The issue is asking whether we should make them immutable now, or whether we should wait until the next release.
A: Okay, so let me just rephrase to make sure I understand. The issue is that right now, if you update a KubeadmConfigTemplate, it will not actually trigger rolling updates, right? The update is basically ignored and only applied to new machines that come up. So the question is: should we mark it as immutable, so that it's not a UX bug — where you expect your machines to be updated but they don't actually get updated — even though that's a breaking change, because it changes the API's immutability?
C: Yeah, yeah. We don't need to fix it now — we don't have any hard dependency on that. It's just the question of how we want to handle it.
B: Yeah, it's definitely a breaking change — it's a big behavioral change — because, as you mentioned, if you change a template, a new machine that comes up will actually come up with the new template. So technically we'd be breaking some users that might be relying on this behavior.
F: So I have a fundamental question: whenever we deal with things like this, do we also make the other control plane configurations immutable, or do we do it only for kubeadm? Perhaps I'm not understanding this right, but I'm just wondering: whenever we make a change like this — when we take one control plane provider and make that manifest immutable — do we also spread it to the other control plane providers?
A: My understanding is that it doesn't need to apply to every provider, as long as it's not part of the contract. If it were a change to the bootstrap provider contract saying that bootstrap configs should not be mutable, that would be different. But I think in this case — for example, we have the same thing with infrastructure providers, where machine templates can behave in different ways, and some infrastructure providers enforce this in webhooks. For example, Azure has a webhook validation that makes the AzureMachineTemplate immutable.
G: Hey, just curious: if we were to do this, would the proper way to update be, I guess, to have your KCP reference a new KubeadmConfig — if you needed to update that KubeadmConfig — and then that would trigger an upgrade?
A: Yeah, so the current experience is: you have to create a new config template, make the changes you want in that new one, and then change the reference in whatever is using that template to point to the new one, and that will trigger rolling upgrades. Yeah — Vince?
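A minimal sketch of that rotation workflow, with hypothetical resource names and a hypothetical changed field (fields not relevant to the rotation are omitted):

```yaml
# 1. Create a copy of the existing KubeadmConfigTemplate carrying the desired change.
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha4
kind: KubeadmConfigTemplate
metadata:
  name: md-0-bootstrap-v2            # new name; the old template is left untouched
spec:
  template:
    spec:
      joinConfiguration:
        nodeRegistration:
          kubeletExtraArgs:
            node-labels: "example.com/pool=workers"   # the change to roll out
---
# 2. Point the MachineDeployment's bootstrap reference at the new template.
#    Changing this reference is what triggers the rolling upgrade.
apiVersion: cluster.x-k8s.io/v1alpha4
kind: MachineDeployment
metadata:
  name: md-0
spec:
  clusterName: my-cluster
  template:
    spec:
      clusterName: my-cluster
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1alpha4
          kind: KubeadmConfigTemplate
          name: md-0-bootstrap-v2    # was md-0-bootstrap-v1
      # infrastructureRef, version, selector, etc. omitted for brevity
```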
B: Yeah, so from a more practical point of view, we need to think about this for the next version. I think the question right now is what we want to make mutable in these templates, if anything, because some folks have actually asked for some mutability from the infrastructure providers — for example, adding a security group — and I do agree it's probably not ideal to roll out an entire machine deployment for that.
B: The tricky part here is that, with the current API capabilities we have, we don't have a way to mark a field as mutable and, at the same time, give a signal to the MachineDeployment and Cluster API controllers — which know nothing about those CRDs — that it was a change meant to be applied in place rather than through a rolling upgrade.
B: So we'll need to think about how we deal with something like that. Maybe it's a contract change between Cluster API and providers, or maybe it's something more complex.
A: So, in the meantime — I know we have some documentation about how to update an infrastructure machine template. Do we have an equivalent for KubeadmConfig? If not, that might be good, so that this behavior is at least documented and not completely hidden.
H: And this one is about changing a machine template.
A: That this applies to — okay, so yeah. If someone can just take note of this in the issue — I'll keep going — but we should document this, ideally, and then revisit it for the next milestone.
A: All right. Mike, you have the next one: autoscaling.
J: We're back for another round of autoscaling from zero. First of all, thanks to everybody who added comments there — there was some really good discussion going on. One of the questions that's come up, that we're going back and forth on now, is:
J: Should we have a proper API field for these resource hints, or should we maybe just use annotations? Let me give a little background here for everyone who's just getting into this. We're talking about how the cluster autoscaler could do scale-from-zero using Cluster API; this is a feature that exists in the upstream autoscaler for any provider who wants to implement it.
J: Not all providers do, and we need to surface information about the nodes that get created — specifically their CPU, their memory, and any special resources like GPUs or whatever — and this is currently difficult for us to do because of the way that we embed infrastructure templates and whatnot. So the idea has been to add a clear field where we could just say: this is the CPU.
J: This is the memory, etc., for any given machine template. In OpenShift we've implemented this by having annotations on our machine sets — we only use machine sets in OpenShift — so we just add an annotation there that has this information, and that inspired some of the override mechanism that I put into the enhancement, in hopes that we could contribute that back now.
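A rough sketch of what annotation-based capacity hints on a MachineDeployment could look like. The min/max-size annotations follow the existing Cluster API autoscaler provider convention; the capacity keys below are hypothetical placeholders, not names agreed in the enhancement:

```yaml
apiVersion: cluster.x-k8s.io/v1alpha4
kind: MachineDeployment
metadata:
  name: md-gpu
  annotations:
    # Opt this group into the autoscaler and allow it to sit at zero replicas.
    cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size: "0"
    cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size: "6"
    # Hints the autoscaler would use to simulate a node while replicas == 0.
    # These key names are illustrative only.
    capacity.example.cluster.x-k8s.io/cpu: "8"
    capacity.example.cluster.x-k8s.io/memory: "32Gi"
    capacity.example.cluster.x-k8s.io/gpu-count: "1"
spec:
  replicas: 0
  # clusterName, selector, template, etc. omitted for brevity
```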
J: The discussion has kind of come full circle, and Stefan presented an interesting point, which is: could we just do the annotations, and maybe put them on MachineDeployments, MachineSets, and possibly infrastructure templates — any one of those places — and then the autoscaler could use that to inform its simulation of how to scale up.
J: Perhaps in something like the infrastructure machine template, because that's closer to the provider, and you could put all the specific information there. So anyway, with that background, I'm curious to hear — maybe Fabrizio and Stefan — if there's any more thought about whether we could just use annotations. If we decided to just use annotations on machine sets or machine deployments as the first implementation of this feature, then I could almost just contribute the code that we have now in OpenShift and propose a PR to the autoscaler.
J: That would be the easiest way to do it. Oh yeah, Cecil — unfortunately machine pools are not yet included in the cluster autoscaler integration. I know this comes up every time we talk about this, but we need a little more work to get machine pools fully integrated into the autoscaler, mainly around the mappings to the nodes and whatnot.
J: So for machine pools, we would have to come up with a way to surface that information. And I think if we could create a singular autoscaling resource, that would allow us to subsume MachineSets, MachineDeployments, and MachinePools, in addition to whatever we come up with in the future. I think that's kind of a side topic — tangential to this — but it could help.
A: Okay, Fabrizio had his hand up first — go ahead, Fabrizio.
H: Yeah, just answering the question on annotations versus an explicit API: I don't have a strong preference. My only comment with regard to this is that I would avoid suggesting the user put an annotation on a MachineSet or a Machine, because those are basically managed — they can go away whenever you do a rollout or something like that. It just means the user has to continuously re-add this kind of information.
H: The autoscaler can look at the instances that are there and make its own calculation about how many nodes exist; so if an instance is there, the autoscaler works. The problem is scaling from zero, because when you're at zero you cannot do a proper calculation of the number of required nodes. My suggestion is: why do we need to do that calculation at all? Whenever we are at zero machines and we are required to scale, let's just scale to one, without any calculation.
H: As soon as the machine deployment gets to one, then we are done. That means we could even make it work without adding information to the types. I don't know the autoscaler internals, so I may be totally wrong, but the idea is just to get out of the corner case and then let everything work as it does today.
J: Yeah, I mean, it's a great question, Fabrizio, and I know we're really getting into the internals of the autoscaler here, but essentially you're right: the autoscaler should be able to just say "increase this node group by one", and that node group knows what size its machines are, so it just increases itself, right? But what happens before that is that the autoscaler has a simulator that it runs, where it tries to simulate:
J: If I could add a node from any of the node groups that I know about, would this pod fit on that node? So when a group is at zero, it uses that hinting resource information to basically do that calculation. There's an interface inside the cluster autoscaler that each provider implements, where they have to return that information back, and that allows the cluster autoscaler to do these types of calculations from zero. I mean, it's possible that we could have our autoscaler —
J: So part of this is trying to make it so that we have a common abstraction for how we expose this information to the autoscaler. But without changing the way the autoscaler works in its core, we wouldn't be able to have it skip that simulation phase — the autoscaler needs to run this simulation and figure out whether it can place the pod that it sees as unschedulable; it'll do something different if it can't place it. Does that help at all, I guess?
J: I mean, yeah, that would be the tough part: we would have to add an extra processing conditional, and I doubt the upstream folks want to do that just for our provider. I could dig deeper on this, but it seemed like if we could just do annotations or something, we could probably solve this for ourselves, at least for now.
K: Yeah, I'll just give a real quick, kind of related update. We have been doing some work on a MachinePool Machines proposal and should probably be able to open it up before we meet next. This should at least give us a place to start the conversation about how we delete an instance out of a machine pool, and how we bridge the difference between machine deployments and machine pools. We'll open up a Google doc before we meet next time.
B: So, on the fields versus the annotations — if I understood this correctly, this is something that the user has to set correctly before creating the machine?
J: Yeah — ideally, yeah, the user would need to set this up before creating it. Well, they could add it to a machine set after they've already created it, but yeah, the user would have to set this up, essentially.
B: Yeah, honestly, I find that a little bit error-prone — especially if you, you know, change the underlying machine, then those values won't be updated, and things like that. So the idea Fabrizio had, to just scale up to one — I don't know if we could make that fit, but it could be a good start.
B: The other thing I was thinking is: maybe each infrastructure machine template from a provider should give out that information, if it has it — and if we don't have it, we just don't support scale-from-zero. Or, I don't know, maybe there's some fallback that we could fall into.
B: ...if it doesn't have the information. Because you can always change a machine template from one machine type to another — maybe you don't want GPUs on there anymore, and someone forgets to change the annotation or the field — because this is actually keeping state in two places, and usually we try really hard to avoid doing things like that, because it confuses most users.
J: Right. And some of this kind of depends on where your infrastructure templates come from, in terms of usability — if you had all of those created for you, then yeah, that information presumably would already be in there. I also agree with you about not duplicating this information.
J: That's why — and there are a number of issues kind of bundled in here, and we're kind of at a stopgap to get to the next point — because if we're doing this just with annotations, then yeah, it's going to be very error-prone for users and the user experience may not be great. And again, the next step would be to create some automation to put these annotations on for us, which is something that we do in OpenShift.
B: What I was thinking is not that the user would still put them on the infrastructure template — I was thinking more that maybe we could have the infrastructure template be reconciled somehow, and then its status would be updated with what the provider thinks: how many GPUs, how much memory and CPU that machine has.
J: Yeah — one option we talked about earlier on in this was having this exposed in the status part of the infrastructure object, and then allowing each provider to populate that information as they see fit. I'm fine to go back in that direction; that's going to put more responsibility on the providers, I think.
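A hypothetical sketch of that status-based idea — the kind and field names are invented for illustration; the point is that the provider's controller, rather than the user, fills in the capacity:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
kind: ExampleMachineTemplate          # stand-in for a real provider template kind
metadata:
  name: workers-template
spec:
  template:
    spec:
      instanceType: gpu-large         # provider-specific field, illustrative
status:
  # Populated by the provider controller from what it knows about the instance type;
  # the autoscaler could read this for scale-from-zero instead of user-set annotations.
  capacity:
    cpu: "8"
    memory: 32Gi
    nvidia.com/gpu: "1"
```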
F: Yeah, so this is somewhat similar to Vince's question. So far we have used the machine template to describe everything except the compute. So if I have the same kind of machine but different compute, do I need to replicate the machine template repeatedly, just with the compute changed, or can we have a set of policies? Because right now there's a clear distinction between the machine and, essentially, the non-hardware template — so can the hardware profile be provided separately? Because it's a matrix, right?
F: Ultimately, you have different machine templates, and each could provide a set of compute policies. In our infra provider, essentially, the OS and other things are distinct from the hardware, so you can take any hardware and attach any OS to it, potentially. So it's like: take any compute profile you want and patch a machine template, and it should work — but in this case we'd have to provide the whole matrix.
J: Well, if I understand the question, this is kind of about heterogeneous versus homogeneous node groups, and the way the autoscaler currently looks at things is that it wants to see homogeneous groups of nodes. So when we have a machine set or a machine deployment that gets exposed to the autoscaler, all the nodes in that machine set are assumed to be the same type, right?
J: ...those kinds of groups. And I realize this is looking at it in a very serial way, almost — it doesn't get into some of these new abstractions we're talking about, like machine pools or heterogeneous compute clusters and stuff like that — but this is the way the autoscaler works now, so we have to kind of match what it's doing at the moment, I guess.
J: I mean, if we could somehow surface this information on a machine set or machine deployment, that would ultimately be the easiest to implement in the autoscaler, because we're already reconciling those objects in the autoscaler, right? So we would already have that information there. But the problem is that machine sets and machine deployments aren't really controlled by the individual providers, so there's no way, really, without adding another step where we reconcile from a provider-specific object — like an infrastructure template — out to the machine set template, and that starts to, you know...
F: And one more small follow-up question — we can follow up in Slack if this doesn't work here. The cluster autoscaler for CAPI and for a workload cluster on its own — they are identical, binary-wise, correct? Correct, yeah. So in that case this functionality will be added only for the CAPI world and not for a plain workload cluster. Was there a reason why the more fundamental approach was not taken, or...?
J: I remember this question — you asked it before. So yeah, the feature we're adding is CAPI-specific, but that's because every provider in the cluster autoscaler has to provide scale-from-zero on their own; everybody has to provide this themselves, and we're providing it through this mechanism for Cluster API. Now, in terms of workload versus management clusters, it'll work on any cluster autoscaler that you use the CAPI backend with — whether you're using a combined cluster or a separate management and workload cluster.
J: ...I don't want to take up too much more time on this at this point. I'm not sure I have a resolution, but I'm happy to move on and continue discussing this on the PR.
A: Thanks, sounds good — let's do that. Thanks for all the great discussion. Fabrizio, you have the next one, go ahead.
H: So this is about ClusterClass, and especially its impact on the providers, and the ongoing PR for implementing the new template types for ClusterClass — more specifically, the infrastructure cluster template and eventually the control plane template. There was a lot of discussion, and to try to address it and bring more clarity around this point, I opened a PR which amends the proposal. The amendment basically documents what the impact on providers is...
H: ...and what the conventions are that we are following — and have to continue to follow going forward — and it also provides some clarification about how ClusterClass is going to reconcile templates with the generated objects. The TL;DR is that:
H: Validation rules must be consistent, because when I clone a template to create an object, the operation should succeed. More detail is in the PR, and I will be happy to answer any questions.
H: The other interesting part that we are documenting is how templates are reconciled. What we are planning to do is, basically, to enforce only the fields that are explicitly defined in the template object. That means, if someone does not define something in the template, the user or another controller will be free to set it on the generated object.
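A small invented example of that rule: only the fields spelled out in the template are enforced on the generated object, while anything the template leaves unset stays free for users or other controllers to manage.

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
kind: ExampleClusterTemplate          # stand-in kind for illustration
metadata:
  name: example-cluster-template
spec:
  template:
    spec:
      region: eu-west-1               # explicitly defined -> continuously enforced
                                      # on the generated ExampleCluster
      # networkCIDR is not set here -> if a user or another controller sets
      # spec.networkCIDR on the generated object, reconciliation leaves it alone
```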
H: There are some — let me say — corner-case details about omitempty, how JSON serialization works, and API conversion; they are all in the PR. Please take a look — I'm looking for feedback.
H: Exactly — this is a clarification, adding more detail. Speaking of the details we are adding: we are also making it really clear that ClusterClass will work only with control plane providers that support the version field. This is the only hard requirement, and that restriction is now, hopefully, super clear in the proposal.
H: It was also clarified that ClusterClass can work with control plane providers that don't support replicas, or control plane providers not backed by machines — which is a follow-up to last week's discussion. So I tried, let me say, to collect the feedback and condense it into the proposal, so we don't lose track, and hopefully we unblock the pending PR.
A: Yeah, that sounds great, thank you. All right — and, by the way, if you're newish to this community and you don't know what ClusterClass is, or you have no idea what we're talking about, I highly encourage you to check out the proposal. It's already merged; it's the same one that Fabrizio is making amendments to. This is basically a new way of provisioning clusters — a simplified way to have, like, cluster stamps. So check it out, and if you have any questions, reach out on Slack.
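For anyone reading along, a minimal sketch of what a ClusterClass-based Cluster looks like — names and values are illustrative, and the referenced ClusterClass is assumed to be defined elsewhere:

```yaml
apiVersion: cluster.x-k8s.io/v1alpha4
kind: Cluster
metadata:
  name: my-cluster
spec:
  topology:                        # managed topology: the "cluster stamp" in action
    class: example-class           # name of a ClusterClass defined separately
    version: v1.21.2
    controlPlane:
      replicas: 3
    workers:
      machineDeployments:
        - class: default-worker    # worker class defined in the ClusterClass
          name: md-0
          replicas: 3
```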
A: All right — if not, let's move on. Shukong, I'm very sorry if I mispronounce your name.
L: Yeah, your pronunciation was actually pretty good, thanks — my name is very similar to what you just said. Okay, so I just want to say a very quick hello from Databricks, because we have a group of engineers who will join and start contributing to this community. I'll give a very quick introduction about what Databricks is and what our team is, and then maybe have...
L: ...our team members do very quick self-introductions, and then maybe we'll try to get some of your guidance or suggestions about how we can get started in this awesome community.
L: Does that sound good? Cool, okay. A little bit of introduction: you'll probably see a bunch of names on the attendee list from Databricks — we are all from the multi-cloud platform organization at Databricks, and Databricks is a company focusing on data and AI.
L: Our co-founder was the original creator of Apache Spark, and we are also very heavy in the open source community — we've open-sourced a lot of projects, like MLflow and Delta Lake, a bunch of things. We are now running on all three clouds — AWS, Azure, and GCP — and we use Kubernetes every day. Databricks is also growing fast.
L: We're currently a 28-billion-dollar-valuation company, but we are still growing, and one of the pain points we hit as we grow is that we're managing more and more Kubernetes clusters, and we expect that to scale up even further.
L: So we started thinking about how we can manage Kubernetes in our future architecture, and we evaluated a bunch of solutions in the market, and it turns out Cluster API seems to be the best fit for us.
L: That's why we want to join the community, work together with you, and make the Cluster API project more successful — and hopefully also bring in our use cases, if possible, so they can be supported natively by the community. That's where we are coming from. Internally, we have already funded a big team working on this project, and it's becoming one of the top priorities in our organization.
L: So that's why you'll see a bunch of people who will be working close to full time on this project in the next few months — so hopefully you can expect a lot of PRs coming from us over that period. We'd like to do some introductions to make you familiar with these names, so that when you see them it's not strangers submitting random PRs. So that's a quick...
L: ...introduction of our team. We can do some very quick self-introductions as well — let me stop, since I've been talking for a while, and we can go through them. I'll do maybe a ten-second introduction: my name is Sotong; I joined Databricks a little over one year ago. I'm the tech lead of the cloud infrastructure team at Databricks, and I'm also driving this new, second generation of our infrastructure management, which will work a lot with Kubernetes. Maybe we can just go through the names from here.
I: Hey guys, I'm Anders. I joined Databricks around March, so only a couple of months here. Previously I worked on Azure Kubernetes Service, and I've worked with some of you before — so yeah, super excited to work with Cluster API; I like it a lot.
M: Hi, my name is David. I've been at Databricks for a few months. Before that I worked at Google on Kubernetes — I worked on the Kubernetes scheduler in the very early days, and on a number of other pieces of Kubernetes while at Google, and then more recently I had started working on Anthos at Google. I'm looking forward to working with the Cluster API community.
N: Eric? Hi — okay, yep. Hi everyone, my name is Eric. I'm one of the managers working with the teams focused on this new initiative around scalable management of our Kubernetes clusters. I've been with Databricks for a long time, and yeah, really excited to work with everyone.
O: Yeah, hi everyone, my name is Meishing. I've been at Databricks for almost three years. I'll be on the new team driving this effort — we'll use Cluster API to manage our clusters, and meanwhile we'll try to contribute back to the Cluster API community. We'll be working closely together in the future; very excited.
P: Hey everyone, my name is Phillip. I've been at Databricks for a little over two years — same team as Eric and Meishing and Chochan and Richard — so yeah, super excited to work with you all.
N: Hey everyone, my name is Richard. I've been at Databricks for a few months. I'll be working on the same initiative, on the same team as Phillip, Meishing, etc. Very excited to get to know the community, and I hope to contribute to it in the future. Thanks.
L: Cool, thank you very much for the introductions. We're really excited — I feel like this is a very important project that can help the community a lot, and we also want to contribute back and work with you together. So, a few quick questions — because we are relatively new to this project, please forgive us if we don't understand some of the processes or policies...
L: ...please teach us. We just want to understand: we've already read your documentation and we understand the design of Cluster API — some of the basic design, for sure not every piece of it — but if we want to start contributing, what would you recommend? Is there any kind of process we need to follow?
L: Also, we'd like to hear a little bit of your advice about how we should manage our code. On our side we have a relatively tight timeline, so internally we'll want to use this for some of our project deliveries; we will likely start with some internal forking, but we also want to eventually merge these PRs as much as possible into upstream, so we don't have to deal with this forking-and-rebasing business as maintenance. So if other people have gone through a similar process, I'm very eager to hear your suggestions on how we can become productive.
A: So, if you haven't gone through it yet, this is probably the best place to go. You'll probably want to check out our proposal process, which is pretty similar to the KEP process in Kubernetes, if you're familiar with that, but we have our own little twist on it. And then, in terms of the PR process and the release process, it should all be documented in here — the breaking-changes doc is also worth checking out. So I'd recommend looking at this; that's the first part. And then — Vince?
B: Yeah — thanks for the question. First of all, welcome; we greet more contributors with open arms, this is awesome. In addition to what was just said — I'm not sure how much experience you have with Cluster API, but if you're starting out, I'd definitely suggest starting from 0.4. 0.3 is our current stable, supported release, but it's also in maintenance mode at this point, so we only backport bug fixes to it.

B: 0.4 just got released, so we're planning a patch version soon, and we have extensive end-to-end tests throughout the code base. In terms of contributing, we usually don't accept breaking changes until the next minor release — that's 0.5, which is planned; I think we're probably going to do 0.5 with v1alpha5 types, but we're not planning —
B: Well, we haven't put it on the calendar yet. We usually do a couple of alpha releases per year — so far we've done a couple per year — but I'm not sure if we're going to do v1alpha5 by the end of the year or early next year. Thanks for pointing that out: we're also talking about the beta versions — how we get to more stable APIs and what the next version of the APIs would look like — and we're focusing a lot right now on the user experience.

B: So definitely get familiar with ClusterClass. If you're managing a lot of clusters, ClusterClass — in its current form and in the form it's going to get into by the end of the year — will be super beneficial for you folks, and on that one you could probably give us tons of feedback as well. In terms of future additions, we usually discuss them in Slack and then open issues.
B: Issues are usually really well maintained — we do grooming sessions every once in a while — so usually an issue has a milestone attached if we want that feature to go into that milestone. For backward-compatible changes, we would accept new additions — for example, to the APIs — after discussing them. And yeah, communication is, I think, one of our greatest strengths in this community.
B: We do communicate a lot through GitHub issues. I would say our biggest blocker or slowdown right now is actually reviewers.
B: There's a handful of us that actually review all the PRs that come through, or most of them, so there's also an opportunity for you folks to help us out and get into the reviewer group, if you're interested in that. You know, with great power comes great responsibility, so there are some expectations when you're a reviewer — to be active on PR reviews and so on.
B: I'm also happy to chat offline a little bit more, and if you folks need another set of eyes on an architecture diagram, I'm sure we can find some time with a smaller group of folks.
A: Yeah, plus one on the PR reviews — we definitely need help there. And also, as David pointed out in the chat, nightly builds are something I would suggest you think about and consider, instead of forking and having to maintain a fork and rebases and all of that. We do have nightly builds, so that might help you keep moving fast at the beginning without losing the benefit of upstream.
A: Yeah, and the other thing is: I don't think anyone from Giant Swarm is on the call, but they also began their journey with Cluster API pretty recently, and they had a bunch of blog posts about their experience. It might be good to talk to them and see if they have any tips or anything they learned as new contributors.
A: So yeah, thanks for introducing yourselves, and we hope we can help you get active on the project quickly. Let us know if you have any other questions — we're all active in Slack, so the cluster-api channel is the best place to reach out.
L: Thank you very much — thank you for the very warm welcome and the suggestions. I have one last question. As I mentioned a moment ago, we run on all three clouds, so we also have a GCP business need. I know there's pretty good support for AWS and Azure in Cluster API, so I'm just wondering: if we also want to work on GCP or GKE support in Cluster API, what's the current state there?
L: Because we think that might be one area where we can contribute a lot of effort at the beginning.
B: Yeah — a huge ten-thousand-percent yes. We don't actually have any active maintainer on GCP except for Carlos, and I don't know — he's listed as a maintainer, but I'm not sure if it's an ongoing commitment or not. If I remember correctly, the GCP provider still needs to be updated to 0.4, or, you know...
B: But it wasn't released, right? Yeah, it wasn't released — the main branch is actually on 0.4, but there's no 0.4 release yet. In terms of the state it's in, I want to put a huge warning sign on it: it initially started as some sort of weekend project which turned into a provider.
B: The other thing it probably needs — you'll probably find that it's limiting in how it's set up; it sets up the cloud provider itself, so you might need new features and things like that. And I don't know if folks are using it extensively, so there's probably some leeway to make lots of changes, especially right now that the main branch is accepting breaking changes, given it hasn't released yet.
H: First of all, welcome — really looking forward to working with you. Talking about the GCP provider: it definitely needs more hands and help there, and I would only like to highlight the opportunity that, if we can get it into good shape, it will be easier to get it into the upstream Kubernetes test grid, so we could have GCE-based Kubernetes jobs as release-blocking for Kubernetes, which would definitely be a win-win for this community.
L: Okay, so basically my understanding here is that you are open to contributions on the GCP provider; currently it requires a bit more work, and the user base is relatively small, so we would need to build some of our own tests — we cannot just rely on users to battle-test it at this moment. Is that a reasonable understanding?
L: Yeah, okay. On our side, our next milestone is to have Azure and GCP working, so probably these are the two areas we will start with first, and probably a quarter later we will also start working on AWS — that's the plan on our side. Give us some time to ramp up, and then you will start seeing PRs from us. Sounds good — all right, that's all from my side. Thank you.
A: Thank you — and this was our last topic. So unless anyone has any last-minute additions, this will conclude our meeting. Thanks all for coming, and see you all next week.