From YouTube: 20191113 - Cluster API Office Hours

Notes

A
…document, which I encourage you to open up, and we have meeting etiquette: please use the raise-hand feature of Zoom, and I will do my best to call on you, assuming I can see that your hand is up. We also have, in the agenda document, a place to add discussion topics, so please do so if you've got anything, and please make sure that you add your name to the attendee list as well.

A
If you are planning on attending KubeCon and you don't have anything to do Thursday morning, we are coordinating a Cluster API meet-and-greet breakfast. The link is in the agenda doc; feel free to show up, eat some food, and chat with other Cluster API users and contributors. We are not doing any type of planning or any real work there; it is just a forum for us to be able to see each other face to face, since we're mostly interacting online.

C
You know, mainly to get buy-in from others to hold off on adopting the e2e framework, or the stuff that Chuck is doing, for the AWS provider until later in the cycle, so that we don't interrupt those folks that are working on the e2e expansion right now. Then, prior to cutting the v1alpha3 release, we can refactor things to consume the e2e framework.

D
Yeah, that sounds totally good. I just wanted to say thank you for letting me know that you wanted to hold off on that work. There's a lot of great work going on in the AWS e2e tests that we can definitely use in the framework, so I'm looking forward to integrating that. Awesome, thanks.

C
I don't think so. I just wanted to raise it for the AWS provider, because we have active work going on right there right now, and because we're getting ready to start introducing the breaking changes for v1alpha3 as soon as we kick off the control plane work and things like that. I didn't want to block the e2e work that we have on the AWS side, but it shouldn't impact other providers unless they also have active e2e work. In GCP in particular, I'm not aware of anything. Chuck, back to you.

D
Yes, so one thing to note is that it's still changing. So if you would like to work on the e2e framework in GCP, that would be awesome, and I would love to get your feedback from an actual implementer's standpoint; that can help shape the direction that the framework is going. So if you were looking to do that work, I would love to hear about your experience using it.

A
Somewhat related to that, since it does involve a Go module dependency for CAPG pulling in Cluster API: I'm wondering if we want to consider more frequent alpha tags of the Cluster API repository, not to indicate that it's close to release, but just so downstream providers could have a Go module dependency on a tag instead of just some random commit from master.
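
For illustration, here is a minimal sketch of the difference this makes in a downstream provider's go.mod (the module being edited and the version strings are hypothetical; `sigs.k8s.io/cluster-api` is the real module path):

```
// Hypothetical go.mod fragment for a downstream provider.
module sigs.k8s.io/cluster-api-provider-example

go 1.13

require (
    // Without a tag, Go records an opaque pseudo-version derived from
    // a commit on master, e.g.:
    //   sigs.k8s.io/cluster-api v0.2.7-0.20191113010203-abcdef123456
    // With a pre-release tag, the dependency is self-describing:
    sigs.k8s.io/cluster-api v0.3.0-alpha.1 // hypothetical alpha tag
)
```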

F
I like that idea, and I think it was also something I brought up at the face-to-face: we should remove the pre-release flag on the current releases, at least the v0.2 ones, and use it for these kinds of releases instead, so that there is a clear tag in GitHub that says "hey, this is a pre-release, don't use it."

G
I was gonna agree with that, and just comment that this is sort of something that Matt had brought up in regards to Go modules, and I think Chuck and I have discussed it before. It just drives you to a 1.0 faster, one that may not actually technically be a 1.0, but all of the tooling seems to exist around post-1.0. Yeah, I mean, it almost feels like…

A
There's a distinction, or it's not necessarily correlated: a GitHub release being marked pre-release or not, versus the version of the code base. It kind of seems like they should go together, but there's no strict requirement. We also did have a lot of discussion around whether we should take Cluster API to 1.0.0 and say we're now stable-ish, and if we want to make breaking changes we'll go to v2; we essentially decided that until we're ready to move the API to maybe beta, definitely GA, we're gonna stay below 1.0. Jason?

C
So I think one of the challenges that we're gonna have, and this is going to be related to a few of the different changes that we've made, is that our GitHub releases tab is going to become pretty much useless, and we're gonna have to figure out a different way to promote releases, whether it's in the GitBook or whatever we do.

A
All right, I think I've got everybody muted. Lubomir, you wrote that this is the side effect of having the providers as part of the same repo. Yeah, so if we had multiple things that we were releasing from the same GitHub repository, then yes, you would be seeing multiple tags, like one for Cluster API and one for the bootstrap provider, were we to do those as separate releases. But at this point, as far as I know and remember, we're just treating CAPD, the Docker provider, as potentially having a separate version than Cluster API itself.

C
Yeah, so we've started getting multiple approvals on it. I haven't seen very much feedback. There was one comment from Moshe asking about changing an annotation to a label, and I think that's something we can likely address pretty quickly in a follow-up. I don't see him on the call, but I'll reach out to him to see if he's okay with us addressing that post-merge. Other than that, we're just waiting on final approvals or the timeout for lazy consensus.

A
I don't have this here; I meant to add it to the agenda. In terms of the other CAEPs, we've merged the clusterctl v2 and e2e testing ones, but we still have machine remediation and machine pool open, and we have a new one from Moshe for load balancing. So, just looking at the attendees here, do we have representatives for either machine pool or remediation here today to talk about the status? I don't think so…

B
I can talk about the machine remediation one. So yeah, thanks to everyone who has shared feedback so far; I've been trying to address all the comments. So yeah, I'm just happy to see more feedback coming and to keep the discussion going. Just let me know how I can help to move this forward.

F
I did, yes. We were doing a review of the machine pool proposal, and there are a few action items: one is to put the goals better in place, and to spell out the changes to the bootstrap provider in a little more detail. I think after that we can probably go ahead and seek approvers to merge the proposal relatively soon, and then we'd be able to follow up with issues or PRs against the proposal, and yeah, we'll get it to the finish line.

A
…that we went out with the process, and whether there are things that we can do better going forward. So it's probably better to announce this in advance and then make sure people are available to discuss it in the future. Does that seem like the best course of action, or should we try and have that conversation now?

A
Okay, next up is a request to add fields to the KubeadmConfig type for the kube-proxy config and the kubelet config. There's a copy-and-paste typo here, but it is two different types. I did see that kubeadm does allow you to specify the kubelet config and a kube-proxy config, so it seems reasonable to me, but I'm not a kubeadm expert, so I will defer to others. Andres, you've got your hand up.

G
I'm gonna quickly defer to Chuck here, but this got raised from a CAPV perspective a little bit ago. Somebody said, "hey, why don't we support the new v1beta2?" Or was it v1beta1? I don't want to get it wrong. There was an issue that I pointed them to. Chuck, it looks like maybe we didn't actually talk about it; it looks like over the summer you tried that, but there were unforeseen consequences.

D
The issue that you're talking about with those types has been fixed, and the reason we were using v1beta1 was to support Kubernetes 1.13, I believe. But it seems like, with the changes coming into v1alpha3, our minimum management cluster is going to be 1.16, which means we can probably move the kubeadm version forward.

C
So, to address yours: they are specified separately in the config that you pass to kubeadm. More importantly, I think we'll be okay moving to v1beta2 for the kubeadm config, given the minimum version support requirements that we have. The bigger concern I would have is that we would have to handle conversion within our types between the v1beta1 and v1beta2 kubeadm config types, because there's no conversion logic that we can consume for that today.
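
For context, here is a minimal sketch of the kind of file kubeadm consumes, with the kubelet and kube-proxy component configs supplied as separate YAML documents alongside kubeadm's own config (the API versions reflect the v1beta1/v1beta2 era under discussion; the field values are illustrative only):

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.16.3
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 110
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
```

Writing a document like this to disk and handing it to the kubeadm binary is essentially what the raw-string idea raised next would amount to.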

H
So we looked at a similar ticket in the kubeadm tracker about this, and there I made a proposal that we should probably not pin any API objects inside the Cluster API config, because currently we are pinning versions. How about just feeding in a string with all the different config files that kubeadm supports, writing this to disk, and passing it to the kubeadm binary?

C
I'm just going to say, beyond the validation, we also have places where we have to plumb values into the config based on the current state of the cluster, or on the configuration of the other resources, as well. So while we could validate blobs, in a sense, it's more insidious than that, because we also have to manipulate the resources before we inject them.

G
I don't remember it off the top of my head; I need to go look for it. But 1584 is the one that Andy is linking. We should link 1594 to the defaults issue that we filed over the summer, because I think part of one of the suggested solutions was support for the component configs. And there are two different issues now; the other one is maybe even in the image-builder repo, or was in the old kubeadm repo, I think, is where it is.

G
It was not closed; it's still open. Oh, I'm not the author of it, that's what it is. I'm not the author of it in the new repo, am I? I was keying on my name, hold on. So it's not finding it. It's this one; let me paste it into the chat. I found it in the original location; maybe you can see it in the new location. I just searched for the subject and didn't find it, but there's a long discussion there.

J
Yeah, I wanted to know, basically, since you're talking about 1584: I have a couple of people who are working on the bare metal operator in OpenShift, and we were told that we should look into Cluster API or MetalKube; I don't know where it belongs. So I'm wondering, is this the right place?

A
Somebody, you know, left you a comment. Okay, so this one is in the milestone. I think it needs some TLC in terms of figuring out what to do, but we've at least already gotten it into the milestone. It is a long-term issue, though, so it does have the possibility of not making 0.3, but I think we can move on to the next issue at this point, unless there's any more discussion here.

A
All right: reset control plane initialized to false when all control plane machines are removed. I know that there has been some discussion in the related pull requests here. I had suggested that we maybe do this for standalone control plane machines, but not when we have the kubeadm control plane that we'll be working on soon; Jason indicated maybe, maybe not. So I'm not really sure what to do about this one.

D
I guess I've been a little torn on this one, but I guess it comes down to whether this is a use case that we want to support, because I'm pretty sure that when we were building it, like you said, we weren't supporting a situation where we spin up a cluster, delete machines, and then reuse the same cluster infrastructure, yeah.

F
I'm a person who would like not to change this, given that we consciously decided, as you said, not to make this value flip back, and maybe we can update the documentation to make that clear. Also, another thing that I actually meant was that we actually don't support control plane nodes being in a MachineSet, so yeah, I'd rather not do this, yeah.

A
I mean, even if it's not a MachineSet, if it's just a single machine and you go from one to zero and back up to one just by deleting and creating, the second machine will fail, because it'll attempt to do a join, based on the logic that we have in the kubeadm bootstrapper. So maybe at the very least we need to put the cluster into a terminal error state if they delete the last control plane machine, or something, I don't know. Jason, and then Andrew.

C
Yeah, I think the tricky part is that Cluster API really has no concept of a control plane right now, and that was part of the impetus for using "initialized": we kind of hacked around that to say that we at least did an init and the initial machine came up. Now, the control plane proposal that we have out there does add a ready field, but that's because we can actually, you know, do some status checking on the control plane after it's up, whereas we don't really have that facility today. So I'm in favor of basically improving the documentation in this case, and we'll improve the actual story around it longer-term.

G
Does Kubernetes even support bringing up a new control plane node against an existing etcd backend if there are no existing control plane nodes? Like, you've gone down to zero and you basically have to bring up a new control plane node and say, you know, attach to this existing data store, or, you know, data backend. Because if it doesn't, then this is a non-issue.

A
If you add a new control plane machine, you're starting with a new data store. If you have an external etcd, there very well could be a lot of cruft from your first incantation of the control plane, and then, if you delete everything but keep the external etcd and then come up with a new control plane member, I don't see why it wouldn't work, as long as you can get anything that needs to be reinitialized, if there is anything, done in etcd. But I'm…

A
Okay, I will do that later. Next up is: configure the CAPI manager and controllers with a ConfigMap, so that we don't have to do flag after flag after flag. I think this is a nice thing to do; I did comment that it needs to be a versioned type so that we have versioning there. In terms of priority and milestone, I would probably want to say long-term and next, given how much work we're going to undertake for alpha 3, unless you feel strongly otherwise. Chuck before Jason; I see your hands up.

C
So the first question that I have is: if we expect this to be something that the controllers watch and react to, I wonder if we should just add a new data type that can be created and treated like a singleton, because then we wouldn't have to do, you know, serialization back and forth into the ConfigMap. The alternative would be mounting the ConfigMap and treating it as a config file, but then we'd also have to deal with reloading if that ConfigMap changes.

A
So, in terms of reloading: my experience doing what you said, with a singleton configuration API type in Velero, was that we punted completely on trying to handle a reload on the fly. What we did instead was watch the singleton resource, and on any change we just shut down the process and expected that Kubernetes would restart the pod, which was then able to pick up the changes.
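
As a minimal sketch of that restart-on-change pattern (using a recent controller-runtime signature; the ConfigMap-as-singleton choice and all names here are illustrative, not Velero's or Cluster API's actual code):

```go
package main

import (
	"context"
	"os"

	corev1 "k8s.io/api/core/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// configWatcher treats one well-known ConfigMap as the manager's
// configuration. Rather than re-applying settings on the fly, it exits
// when the object changes; Kubernetes restarts the pod, and the process
// re-reads the configuration at startup.
type configWatcher struct {
	client.Client
	startupVersion string // resourceVersion observed when the process started
}

func (w *configWatcher) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	var cm corev1.ConfigMap
	if err := w.Get(ctx, req.NamespacedName, &cm); err != nil {
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}
	if w.startupVersion != "" && cm.ResourceVersion != w.startupVersion {
		os.Exit(0) // exit cleanly; the restarted pod picks up the new config
	}
	return ctrl.Result{}, nil
}
```

The same approach works with a dedicated singleton CRD instead of a ConfigMap; the point is that deliberately exiting is often simpler and safer than live reload.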

D
Yeah, I can just talk about this briefly. So there's a fun dependency problem right now, because we've got the e2e framework and we've got CAPD, and if CAPD wants to use the e2e framework, it's going to have to pull in a particular version of kind. So the framework is depending on a version of kind which, of course, is not the version of kind that CAPD is using, and the API has completely changed in kind. So I would like to move CAPD away from the old kind code and move it to the new API version of kind. It's just a lot of work, because it's changed dramatically, and some of the nice functions that existed before, for such things as creating a load balancer, no longer exist. So there's a lot of work to be done to pull that out. Basically, this is all in service of removing exec commands from the e2e framework.
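
To make the conflict concrete, here is a hedged sketch of the competing requirements (the version numbers are purely illustrative; `sigs.k8s.io/kind` is the real module path). Go's minimal version selection forces any module that imports both onto the higher version, even though CAPD was written against the older API:

```
// Illustrative go.mod fragments, not the repos' actual contents.

// e2e framework side:
require sigs.k8s.io/kind v0.6.0 // new kind API

// CAPD side:
require sigs.k8s.io/kind v0.4.0 // old kind API that CAPD was written against
```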

K
Yeah, I mean, not related to this, just another one: I'm trying to figure out if there is proper documentation for using Cluster API with GCP. So does that work? Is there any representation from GCP folks to confirm it is working, and is there a link on how to replicate it with Cluster API? I'm sure…

F
The answer is yes, and the answer is no. We don't have, like, any docs; there are some examples that you can use, and there's one open issue about adding it to the quick start, but that's still work in progress. Does it work? Right, it works. The longer answer is that we do have end-to-end conformance tests running on it, which is a good sign that it actually works, and it's actually tracking v1alpha3 and not v1alpha2, so it might break, that's for sure, yeah.

A
There is mostly a tab here for GCP; we just don't have the documentation yet. But in general, it's going to be very similar to what you see here, where instead of an AWSCluster you'd have a GCPCluster, and you'd have GCP-specific information, and the same thing for the machines. All of that does work today.

A
Next up, we have: ability to disable rolling updates. This is a request to optionally turn off the code in machine deployments that does the actual updating, and the suggestion was to allow maxUnavailable and maxSurge to both be 0 and then have something else, I guess, manipulate the MachineSets, although I'm not quite sure how this would work exactly.
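
For reference, the request amounts to something like the following MachineDeployment strategy fragment (a sketch only; whether validation would even accept both values being 0 today is exactly what's in question):

```yaml
# Hypothetical fragment of a MachineDeployment spec.
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 0        # never create a replacement machine ahead of time
      maxUnavailable: 0  # never remove a machine either; some external
                         # controller would manipulate the MachineSets instead
```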

A
Yeah, I think we probably need some more information from John here. I feel like with the machine deployment reconciler, there's a lot of logic in there, and so it's very complex, but at the end of the day it's manipulating MachineSets, and that's pretty much it. If you wanted to do the same sort of thing, you could probably copy and paste the code to have as a starting point and use a different reconciler. So maybe the request is the ability to turn off the machine deployment reconciler in the CAPI manager and have something else process them, but I think we probably need some more information.

C
So the biggest thing I was trying to figure out was clarification on the behavior that they were expecting: whether it was just that we're not properly keeping the status in sync as the machines are deleted in the background, or whether they expected the actual scaling events of deleting individual machines to be serialized and to block further operations until the machine being deleted is fully removed. I think the former behavior is what I would expect, rather than the latter.

G
I just added a comment to this one: there is a related request on CAPV. We used to have this thing called maintenance mode that disabled the controller loop for a machine if there was an annotation set on it. It just seems to me like it's sort of related; maybe there's some annotation that we might promote to CAPI to imply some broader maintenance mode, like, don't make any changes, we need things to stay the way they are. While we were looking at this, Jason just made a comment there.

J
That doesn't quite answer my question of where do I look; I'm grasping in the dark about whether to go to MetalKube or to go to Cluster API, so I get pushed from one group to the other group. I had come to this group earlier, three months back; at that time I was asked to go to MetalKube, and there was nobody in the MetalKube channel. So I'm trying to figure out: where is the place to ask, and where do I engage with the other people here?

C
So I think you've mentioned a few concerns across a few different layers of the Cluster API. Some of the stuff, as far as "how do I generically plumb some of this information to an individual machine as part of the bootstrapping," could potentially fall into kind of the core Cluster API and the kubeadm bootstrapper that we have.

C
Basically, the implementation of the individual machines and the cluster resources was deferred down to the actual provider implementations themselves. With v1alpha2 and beyond, we've created a model where Cluster API handles the generic resources, and then there are provider-specific resources that the providers themselves reconcile. So when MetalKube actually adopts a newer version of Cluster API, they would have the model where you would only interact with their provider-specific resources; they would only really reconcile and act on their provider-specific resources, and the generic resources would be handled by Cluster API. The more difficult part is how to engage with the MetalKube community. Unfortunately, I can't really provide any additional information there, because they're their own kind of project, and they're not under the Kubernetes SIGs and kind of the SIG Cluster Lifecycle umbrella that the AWS provider, GCP provider, and so on are part of. That's really something that that community would have to handle, I guess, yeah.

A
So we are two minutes over. Prakash, I would recommend that you follow up with either the MetalKube or the Cluster API folks in the community Slack, and we'd be happy to have another discussion on this in a future office hours. But we are out of time today, and I know that at least several…