From YouTube: SIG Cluster Lifecycle - CAPV Office Hours - 2023-07-20
A: Okay, here we go. Before I come to today's topics: the office hours on 25th July, I don't know. I got the feeling last time that we're not going through all the usual overhead that we usually do in CAPV anyway. In case someone wants to introduce themselves, feel free to go ahead, but otherwise I'll just start with the agenda.
A: So, Christian, I'll take this one as well. Okay, yeah, so first up: I wasn't there before, but if I got it right, it has already been brought up two or three times in office hours, the release date for CAPV 1.8, which is then based on CAPI 1.5. Basically, what we would want to do is release roughly two weeks after the core CAPI release. The core CAPI release is on 25th July, and the question is basically: does anyone have blockers or major concerns about a date like that, etc.?
A: The blockers that we already know about: of course, we have to wait for the core CAPI release, and the Go 1.20 bumps should be fine. I think we have to double-check if we... no, I think it's in. Sorry, Nadir, I think you know, right? I mean, we run the tests in Prow, and there should be nothing else; circling back to Christian.
A: Yeah, so basically just a question for the group: does anyone have a problem with the date? Otherwise we will just basically put it in the channel, and let's assume that's the plan, unless something unfortunate happens.
A: I'll look; I think Tuesday is probably the day for the CAPI releases, and I would just take the same, but two weeks later. Let's look up what the exact date is. Okay, and then the next topic.
So, following on: we had a PR to bump the end-to-end tests to test against Kubernetes 1.27. It is already merged on main, and the basic question is if we should backport it to the release-1.7 branch. I think the main reason is basically just that 1.7 should already support 1.27, and I think that's also what core CAPI is doing, but we did this a bit later in CAPV. Those are only test changes; we wouldn't modify the code in any way. So yeah, just a question; I slightly lean towards backporting. I don't know if anyone has opinions or objections.
A: Good, then next up: release process automation. So yeah, I'll just give a quick summary of what the proposal is. The current state of release automation in CAPV is basically the following; I mean, it's not really automated, it's partially automated.
A: For some reason we have a step where we push documents, which we actually shouldn't do; then there is a post-submit job running after you push the tag; and then we have to publish the release manually. So basically the proposal is to get rid of this step in the first phase of implementation, so that the only thing someone has to do is: first, push a tag; then wait; and then click on a publish button somewhere in the GitHub UI. That would be the state after the first phase. The second phase is then basically to align with what the other providers are doing. The problem is that in the step we already have today, this post-submit job is publishing images directly to the production registry of Kubernetes, and what we should do instead...
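The "phase one" flow described above (push a tag, wait, then click publish) could be sketched as a tag-triggered workflow that only drafts the release. This is a hedged illustration, not CAPV's actual setup; the action versions, Makefile target, and artifact path are assumptions:

```yaml
# Hypothetical sketch: pushing a tag builds artifacts and drafts a release,
# so a maintainer only has to press "Publish" in the GitHub UI.
name: release
on:
  push:
    tags:
      - "v*"
jobs:
  draft-release:
    runs-on: ubuntu-latest
    permissions:
      contents: write
    steps:
      - uses: actions/checkout@v3
      - name: Build release artifacts
        run: make release          # assumed Makefile target
      - name: Create draft release
        uses: softprops/action-gh-release@v1
        with:
          draft: true              # nothing is published automatically
          files: out/*             # assumed artifact directory
```

With `draft: true`, this only removes the manual assembly of the release page; the image-promotion concern from the second phase is untouched.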
D: Makes sense, makes sense. That one was already there; was it filtered the right way in...?
A: Yeah, let's see what we get back there. So, definitely, between me, Christian and Korean, I don't know when we will get to it, but we definitely want to get an overview: what are the features we're supporting? What are we testing already? Do some sort of comparison and start a discussion around it.
A: I'm not sure; I mean, I think they were just asking. Basically, someone was not on the reviewer list; that's why Max is showing up here, but I can try to...
B: I would like to vote for that feature as well. I don't know how I can do that, but supporting multiple vCenters in...
B: It is possible, with the caveat that there is a single credential that works across the multiple vCenters, and that's very limiting. I mean, that's an assumption, and from what I can tell the assumption comes from the fact that we save a reference to those credentials, or there is one vCenter that is kind of the lead vCenter, in the VSphereCluster object.
A: Yeah, the global credentials of the controller, then what we have in the VSphereCluster, and I think then the identityRef; I mean, at least those three places. And the one that is missing is probably, I don't know, roughly on a machine-template level or something, yeah.
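A minimal sketch of the credential locations just listed, using the CAPV API as I understand it (names are illustrative; the per-machine-template level mentioned as missing has no field here):

```yaml
# Controller-level global credentials aside, a VSphereCluster points at its
# credentials via spec.identityRef, either a Secret or a VSphereClusterIdentity.
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: VSphereClusterIdentity
metadata:
  name: vcenter-a-identity
spec:
  secretName: vcenter-a-credentials   # Secret in the CAPV manager namespace
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: VSphereCluster
metadata:
  name: my-cluster
spec:
  server: vcenter-a.example.com       # the single "lead" vCenter
  identityRef:
    kind: VSphereClusterIdentity      # or "Secret" for a namespace-local secret
    name: vcenter-a-identity
```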
D: Yeah, that's okay, but documentation... yeah, we need to... well, I think that will be an exclamation-mark breaking change in terms of security impact.
B: It does. From what I've tried so far (I did try this in my environment with the same credentials working across different vCenters), everything works, except for that one limitation: it assumes that the same credentials would work. For CSI, as long as you are able to set up topologies, it has clear... actually, I would not call it clear, but it has somewhat documented how you could have multiple credentials for CSI, and I was able to use it without issues. So...
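The multi-credential CSI setup mentioned here is, as I understand the CSI driver's configuration format, done with one `[VirtualCenter]` section per vCenter in the driver's config Secret. A rough sketch (hostnames, users, and namespace are placeholders):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: vsphere-config-secret
  namespace: vmware-system-csi
stringData:
  csi-vsphere.conf: |
    [Global]
    cluster-id = "my-cluster"

    # One section, with its own credentials, per vCenter
    [VirtualCenter "vcenter-a.example.com"]
    user = "user-a@vsphere.local"
    password = "..."
    datacenters = "dc-a"

    [VirtualCenter "vcenter-b.example.com"]
    user = "user-b@vsphere.local"
    password = "..."
    datacenters = "dc-b"
```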
B: Because, to be honest, those CSI docs themselves were not good; I had to go through their code to figure out where to provide the credentials. But yeah, I'm happy to provide that. Okay.
A: It makes the e2e faster, okay. Yeah, I mean, basically we should take a look; I'll try, in the next few days, to just go over our PRs and look at the ones like this that need a quick review.
A: With the cleanup of secrets it's completely different: you can change the default behavior of secret deletion, but by default it behaves as if it were deleting the secret referenced from the VSphereCluster.
A
Which
I
do
with
the
link
problem?
Yeah
cheers
should
I
support
it
when
you
reference
the
secret
directly
that
it's
not
deleted
and
can
be
reused
across
clusters,
while
I
think
to
design
this
message
that
you
can
only
do
that
if
you
have
a
research,
cluster
identity.
A: I mean, either time out or give feedback. Yeah, let's see; I'll definitely skip forward for now, I think, unless someone else has some opinions.
A: What we said we were doing, yeah. So the good thing is definitely: I was reading over our entire docs, and by the way, I opened a few PRs to just fix minor stuff. But someone summarized it relatively nicely in this document here, yeah.
A: I think it was on the core CAPI meeting notes for a while, like, hey, if someone wants to pick it up; but I'm not sure if it's still there. All right, so I would say: basically, if I got it right, we had some sort of consensus at some point... oh, I think that's good, that's gone... yeah, some consensus at some point about what it should look like. It was then implemented, I think in OpenStack as well, and I...
A: Yeah, I think, if you still want to pick it up; I mean, it was basically your PR, and I had a bunch of nits which are basically just minor stuff. We could probably get it merged, but I'm not sure if at this point it's more like a recommendation versus an actual contract. I mean, I don't know if we can... no.
D: They're all a bit different, so it will be a recommendation. I will assign this issue to myself then, that's 1803, and then...
A: Okay, good. I mean, we still have to decide if we want to change what we're doing in CAPV, right, or if people would want to merge this PR independent of what we just mentioned. But okay: thumbprint.
A: Yeah, by the way, Christian is not here, so greetings for when you're looking at this later. We were hitting this issue like one or two weeks ago, when we used CAPV for some other stuff, and because I had no idea about anything, I was just like: hey, do the scripted edit and just set insecure for you, instead of trying to explain.
A: While you're doing... oh no, it's fine, okay: provide clarifications. Oh yeah, I think that's straightforward; I just didn't review it yet because I didn't have time to.
A: It was changing the GoDoc; yeah, yeah, that sounds fine as well. Should that trigger... usually, yep, okay, fine. But what you should be able to do is just click here on re-run; but that didn't work for you?
A: That makes sense; I also have the same on my fork. Basically, if I don't have any run of a periodic GitHub Action yet, then I can't trigger it; but as soon as I have one single run, I can re-run it as often as I want. I saw it on the Trivy scan that we did, so probably the same problem, yeah; but yeah, re-run it if something pops up. Okay, but that's basically it; it wasn't very interesting. Let's just go there, and then... oh yeah, so I'm not sure...
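The trigger limitation described above can be sidestepped by declaring `workflow_dispatch` alongside the schedule, which enables the manual "Run workflow" button as soon as the workflow file is on the default branch. A minimal sketch (the workflow name and make target are made up):

```yaml
name: trivy-scan
on:
  schedule:
    - cron: "0 4 * * *"    # daily at 04:00 UTC
  workflow_dispatch: {}    # manual trigger, independent of any previous run
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Run Trivy scan
        run: make verify-container-images   # assumed target
```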
D: Yeah, so, well, in the sense that... so it's used in the VM-operator mode, that's for sure. Nadir had some comments, so you...
A: Oh yeah, yeah, so those are both from VMware; yeah, both are part of the org, okay.
A: Yeah, so autoConfigure, I think, is basically that when you have those tags, it just adds the tags if they are not there, right? So if you set autoConfigure to true, it just creates the tags everywhere, and if you don't, they have to be there beforehand. All right, but I don't really know the background behind, like, why this is not needed anymore or not useful anymore. No.
A: Okay, taking resources out of the plan as a departure from this behavior... they don't want to expose it.
A: Okay, but anyway, we have to take a look at the state of the PR and potentially maybe close it and open it ourselves.
A: Fine; I need a few hours, but I'll look over all those small PRs, that's not a problem. It should be fine, but we should check that we're not, like, changing an edge case or something. Let's maybe circle back to that one, because this one might be a bigger one: vGPU. Maybe let's start with: does anyone know the current state? So, if I got it right, there was a proposal for vGPU, for GPUs in general, and we have three modes, which are PCI passthrough, vGPU, and, I think, GPU Direct. And I got that we definitely support PCI passthrough, and...
B: There's already a proposal and a PR; I will try to link that, PR 1579. This one is mine, but 1579 was something that was opened earlier, so it's in the subject, yeah, right there, 1579. So what I did was: I tried to make it work with the version that I was working with, and it had some issues, obviously, and some things were not implemented yet. So I just completed the implementation based on the proposal that I saw, here's the PR, and I've tested it.
D: Yeah, so you need to reopen this on the main branch; we won't be merging stuff into a release branch prior to it being on main.

B: Okay, got it.

D: So yeah, and then, as far as the end-to-end tests: we don't actually have any hardware at the moment to do the end-to-end test, so I think it's fine as-is; I guess we just need to make it optional for now, until we get some infrastructure to run it on.
B: So it is completely optional: if you don't specify any of those fields, nothing happens. It's completely optional.
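For context, the optional fields being discussed sit roughly where CAPV's existing PCI-passthrough support lives on the machine template. The sketch below uses the documented `pciDevices` fields; the exact vGPU fields are whatever the PR under discussion adds, and the template name and device IDs are illustrative:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: VSphereMachineTemplate
metadata:
  name: gpu-workers
spec:
  template:
    spec:
      template: ubuntu-2204-kube-v1.27.3   # illustrative VM template name
      pciDevices:                          # omit entirely and nothing happens
        - deviceId: 0x1EB8                 # example GPU device ID
          vendorId: 0x10DE                 # NVIDIA vendor ID
```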
B: Okay, and what was the other question? So yeah, I have tested it well enough in my environment, and if you're looking for some details on that, I can share them offline or however you would like, just to kind of get some confidence in the PR, if that's needed.
D: I mean, if you can, put it in the comments or in the description; but yeah, if it contains private data, then sure, VMware Slack is fine. But yeah, please rebase this, like, reopen the PR on the main branch.
B: Okay, so then do I leave this one open and create a separate one on main, or just close this one and then...?
A: Yeah, so in general it's just easier with GitHub, I think. Theoretically there's an edit button where you can change the base branch, but that never really worked out for me. And what we definitely can't do is merge another PR on main and then merge this one right away; we would basically merge the one on main first, and then... I don't really know about the backport policies in CAPV at this point, so I don't have an opinion on backporting. Okay, but we definitely have to merge on main first; that's the important point.
B: Okay, sounds good. And one additional question: if I wanted to provide some documentation on this, how would you recommend I go about doing that?
D: Yeah, Markdown in docs, and then at some point we will probably do something like we are doing with the other providers, where we publish a book. So if you want, look at either the Cluster API main book, or the AWS one, or OpenStack, yeah.
A: But I think, for the purpose of your PR, you don't have to go as far as the book; no, because we just migrated... then, when we have to, yeah. But just because I was reading all of this documentation: I really liked what we have for GPU PCI passthrough. We do have documentation for PCI passthrough, so maybe...
B: No; so, I was not able to test the GPU-operator part in my environment due to some driver-mismatch issues, but I was able to verify that the GPU device was added to the VMs that were created using CAPV, so the device was usable. I could test it using the device plugin that the GPU operator creates, and verify that... but the GPU operator is bringing some challenges for me right now.
D: Maybe a quick, easy way to do that: if we use our upstream Ubuntu 22.04 images, there shouldn't be any driver issues.
B: The NVIDIA drivers that I'm referring to are for vGPU, so the NVIDIA AI Enterprise on the ESXi; and the same version not being available for the guest, which means I have to reinstall the NVIDIA AI Enterprise on my ESXi. I mean, those kinds of challenges.
B: Okay, okay, so I'll give it a shot. I'll probably open a separate PR for documentation, but for now I'll try to take this on. Thank...
A: ...you. Yep, sounds good. So, just so that I got it right: basically the PRs; this one we already have, this one as well, and we don't have this one yet, right? Just because I don't know about the second case; I know it has nothing to do with your PR, yeah.
D: GPU Direct is using remote DMA between... so, I guess the closest example would be Amazon Elastic GPU. Okay, so we would need another set of hardware to test.
A: Okay, that's good to know. Next one; yeah, that's not one of the ones, so... Okay, then we have this one; I think that's fine. Let's just see that we got... yeah.
A
I
mean
that
all
sounds
good
to
you
right,
except
I,
mean
the
last
one
is
something
that
we
should
clarify:
yeah
I
have
a
bunch
of
other
questions
that
I
want
to
ask
Sagar
and
just
you
know,
put
it
on
this
list
as
well,
and
once
I
get
an
answer
there
I'll
update
here,
and
then
we
can
move
ahead,
but
let's
keep
it
open
until
but
I
mean
what
definitely
makes
sense
is
just
drop
this
outdated
document
right.
It
doesn't
make
sense
for
historical
purposes
or
something
to
keep
it.
A: Yeah, that's a new one; that's what you're talking about, what we were talking about before. Just for you, Nadir: I think I can probably answer that a bit later. I'm pretty sure that my core CAPI stuff will be merged first and the other one will get merged later, and then I'll remove the replace directives. Okay, yeah. And I don't know if you tried to take a look, but basically that's how the generated stuff would look like; so yeah, slightly nicer than the one or two before, but the same thing, yeah.
A: So it should be okay. What we don't have anymore is this table where you can see what the CAPV image version is, but I think it's not really necessary, and you can look in the YAML if you want to. But yeah, that's just a small one; we don't have to talk about it. It's just IntelliJ showing me a warning, yeah.
A: Okay, so we have this one left. Did we look at the PR? But I don't know; have you been in that meeting?... No. Like, yes; especially since we have the first round, let's take a look at the Google Doc before we circle back to this. Yeah.
D: But okay, so watch this space. Well, for anyone who watches this and isn't from VMware: we will get...
A: ...a doc out. It's like, yeah; definitely. I'm pretty sure it's her doc, yeah. I mean, do you know anything about it, or is it just roughly the title?
D: Context of what it's for, in terms of... I mean, I get the use case; it seems reasonable.
A: Yeah, let's see where we get there, and then, once we have a bit of a better state, we'll bring this up here. Okay, so those are all the PRs.
A: Yeah, I think it's okay. Also, I was positively surprised that we don't really have that many issues, and I think like 10 or 20 of them were just created in the last week.
B: And, if possible, I'd like to spend some time asking a couple of general questions about CAPV and Cluster API in general. Is that okay?

A: Fine for me.

B: Okay, so one of the questions I had was: the nodes that are created using CAPV and using linked clone, they don't seem to be vMotioned. Is that expected?
D: I don't know; I would say, if any of those restrictions are DRS-based, then it's nothing to do with CAPV, yeah.
B: At least in my environment, I was able to move all the rest of the VMs except the ones that I created using CAPV, and I wasn't quite sure what the cause was; and these are also non-GPU VMs. So...
D: I would open an issue for this, because we don't do a lot of testing with linked clones downstream either. We probably need to look at the vpxd logs, or at the VMX, whether there's any difference in the VMX settings between them. Okay, maybe there's some slight difference, yeah; but I don't know of a specific blocker, so we're probably missing some VMX setting; something needs to be added that we're missing.
B: Okay, sounds good, so that was one. The other question I had was with respect to the nodes, so I... I keep hearing about VM Service, yes; and is that something that CAPV is in general moving towards, or are we going to continue to support the way we're doing stuff now, directly using govmomi?
D: So I wouldn't see the govmomi mode going away anytime soon. My only ask is that, given we have both VM-operator APIs and non-VM-operator APIs, we try and keep them in sync as much as possible. So when we add something, like the PR for the guest shutdown that was added to the govmomi mode, we're also adding it to the VM-operator API, so it's consistent on both sides. So no, the govmomi mode is not going to go away anytime soon, just...
A: A question, just because we're curious: what is the state of the GPU support there? I mean, I guess the current implementations we have are probably only for the govmomi side, I guess.
D: I think vGPU is supported in VM operator already, I believe. I assume it's already there; if it isn't...
A: So it would be a matter of also, I mean, taking a look, of course, at the supervisor VSphereMachine code, and maybe opening an issue to also pass through the configuration.
D: I think we already support it in VM Service; I think that was my understanding, which is why I haven't commented on the vGPU PR saying, oh, can we have this; but I could be wrong. What I think there isn't: PCI passthrough is not yet supported in VM Service.
B: Yeah, VM operator must reconcile that, right; the vGPU device addition, yeah. And...
B: And so I think the last question I have is: is there a way to bring your own node and make it part of a Cluster API cluster? And this is outside of CAPV now; so, Cluster API in general: is there a way to bring a node and add it to a Cluster API cluster in an automated fashion, versus doing stuff manually?
A: There were definitely, like, a bunch of projects, and I think at some point they talked about just converging a few of them; there are probably a bunch more on top of that. But I think we're also getting into the part where you basically say: I want to have a cluster which uses multiple providers...
A
At
the
same
time,
potentially
I
mean
if
you
want
to
do
it
all
in
copy,
and
then
you
have
okay,
it
really
depends
so
I
think
what
you
can
do
is,
if
you
have
another
way
to
create
notes
basically-
and
you
just
join
them
into
a
cluster
API
cluster.
But
you
don't
want
cluster
API
to
have
anything
to
do
with
your
other
nodes.
A
That
probably
just
works
because
I
mean
I'm
guessing
of
it,
but
I
think
cluster
will
just
ignore
them.
It
won't
delete
them
or
anything
they're,
probably
just
tolerated.
But
if
you
want
to
bring
those
other
nodes
into
your
class
API
cluster
with
some
cluster
API
automation
around
it,
then
you
probably
need
some
sort
of
provider
for
it
and
I'm
not
really
sure.
A
If
we
support
it
today
that
you
create
a
bunch
of
nodes
with
the
copy
provider
and
then
a
bunch
of
other
notes
with
some
other
provider
and
put
them
all
in
the
same
classroom,
there
was
some
talk
around
it.
B
I
bought
that
up
for
I
was
thinking
about
in
that
direction
was
cap,
VM
specs
are
limited
in
what
they
can
do.
B
Vgpu
was
just
one
example,
but,
and
the
The
Logical
approach
is
to
typically
just
start
the
API
to
support
that,
but
that's
the
slower
process
if
there
is
a
way
to
use
like
VM
operator,
to
get
a
node,
because
it
probably
supports
more
more
customization
of
the
VM
then
and
attach
it
to
a
cluster
API
cluster
I,
don't
know,
maybe
that's
just
trying
to
circumvent
the
problem
or
work
around
it.
B: Yeah, I don't know; I haven't played around with that very much, and I don't think...
A: I mean, we mainly use the supervisor mode as part of TKG with a supervisor, but it doesn't actually have to run on one. Shouldn't...
A: We should do some exploration around it, yeah.
A: Yeah, and another thought I had was: what would be kind of funny is if you could have a cluster which basically uses CAPV in both modes at the same time, so like a mixed provider, but it's actually just CAPV all the time.
D: I mean, that's also interesting, right? Yeah, so that could be interesting; I'm not against it. Now...
B: To clarify: what were you saying about documenting that approach, or were you referring to how to use CAPV with the supervisor enabled?
B: Right, yeah; I didn't find... I haven't tried that, so it'd be interesting to see how that works.
D: Yeah, maybe not all of it; so maybe things like paravirtualization might be there. Yeah, the workload clusters will probably still look more like the normal, non-supervisor ones. You don't need everything, because I think some of the dependencies get a bit annoying around, like, NSX and stuff, which you're not going to have in your local dev environment, yeah.