A
Okay, so this is the Cluster API Provider vSphere meeting of the 15th of October. Please note that we're running under the CNCF code of conduct: be excellent to each other, and please use the raised-hand feature if you want to make a point. We'll get started. I'm covering for Yaseen today; I have not been massively involved with Cluster API Provider vSphere until recently, so I'm mostly going to be facilitating.
A
I don't think we have anything in terms of news and updates, so I guess we'll go straight to the first discussion topic. June, do you want to take that away with the CAPV failure domain discussion?
B
Yeah, so this is the continued discussion on the control plane failure domain proposal from the last meeting. First, thanks a lot to Ben, Tasha and the other folks for all the very valuable input. I have put together a summary of all the comments and suggestions we had and put it under this topic for further discussion.
B
I think we can go ahead with the summary of the further discussion topics. For other folks who haven't taken a look at the proposal, a quick summary: we're trying to bring in failure domain support for the control plane nodes in CAPV.
B
Failure domain support for the control plane through KCP has already been added for other public clouds such as AWS and Azure, but for CAPV there are specific challenges, because we need to provide the user-defined topology, together with all the placement constraints, to CAPV so that we know where to actually place the machines.
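For context, this is roughly the contract the proposal builds on: the infrastructure cluster advertises the failure domains it knows about in its status, KCP stamps each control plane Machine with one of those names, and the provider has to translate that name into a concrete placement. Below is a minimal sketch of that shape, with the types mirrored locally rather than imported from Cluster API; the zone names and attributes are purely illustrative, not part of the proposal.

```go
package main

import "fmt"

// FailureDomainSpec mirrors the Cluster API notion of a failure domain:
// whether control plane machines may use it, plus free-form attributes.
type FailureDomainSpec struct {
	ControlPlane bool
	Attributes   map[string]string
}

// FailureDomains is keyed by the name that ends up on Machine.Spec.FailureDomain.
type FailureDomains map[string]FailureDomainSpec

// pickFailureDomain spreads control plane machines by choosing the least
// used eligible failure domain, similar in spirit to how KCP distributes
// machines across the domains the infrastructure cluster reports.
func pickFailureDomain(fds FailureDomains, used map[string]int) string {
	best := ""
	for name, fd := range fds {
		if !fd.ControlPlane {
			continue
		}
		if best == "" || used[name] < used[best] {
			best = name
		}
	}
	return best
}

func main() {
	fds := FailureDomains{
		"zone-a": {ControlPlane: true, Attributes: map[string]string{"region": "dc-east"}},
		"zone-b": {ControlPlane: true, Attributes: map[string]string{"region": "dc-east"}},
	}
	used := map[string]int{"zone-a": 1}
	fmt.Println(pickFailureDomain(fds, used)) // prints: zone-b
}
```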
B
The discussion that we had last week, and I think over the full two weeks, mainly focused on a couple of items.
B
First of all, Ben had a very good idea that we should clearly separate the failure domain and the placement constraint and have a clear separation between those two, and we have now all agreed on that part. The failure domains should be something the vSphere admin configures on the infrastructure, and that isn't something CAPV should touch; it is just an input into CAPV. In other words, the failure domain is configured within the vSphere infrastructure, and we use that already-configured information to place the VMs.
B
So we have defined the glossary and we have a clear separation between those, and in the rest of the proposal we try to maintain this clear separation between the different concepts.
B
Then there are a couple of questions we haven't reached a conclusion on. The first one is: is it a hard requirement that the region and the zone we are using for the failure domains must align with the CPI and CSI?
B
There's no issue for that use case, but there are folks trying to cover other use cases, for example mapping a compute cluster to a region but different host groups to zones. With that use case, one of the challenges we think we might have is that there's no way to tag a host group: you can put tags on your compute cluster, and you can put tags on each individual host machine, but there's no way to tag the host group itself.
B
So one proposal we were thinking of is: we can use the host group as a zone, the vSphere admin can tag each individual host within that host group, and we can then summarize those tags and create the host groups dynamically. That way we also achieve the goal that the failure domain comes from a single source, the tags, but at the same time it brings other concerns.
B
So this is the first question.
C
I would say that my desire, primarily, is to have one way of doing things if we possibly can, and to try to exhaust all possible ways of achieving it with that one way before we start looking at more ways. And I want to clarify my understanding of a couple of things, because I didn't fully understand the comment that Tasha posted.
C
I want this to be plumbed all the way through, so that nothing gets confused and everything knows where it stands, particularly DRS, because we really want DRS to be able to work with this as well and make sure that if it has to move VMs between hosts, it moves them to places that are appropriate for the original zone they're tagged with. So, I may be showing my ignorance here: is a host group actually a vSphere construct? It sounds like it is.
C
That's weird, then, that it doesn't show up in the inventory. Okay, Scott's helping me out here: it's a folder that ESXi hosts live in, okay. So it's like a convention that you can use to imply that hosts are grouped, just by putting them in a folder. Am I reading that right?
C
So yeah, you're right: if a VC admin were to group hosts in a folder, then they would have to tag those hosts in order to provide an explicit association rather than just the implied association. So, June, can you tell me then: why does CAPV need to create a host group? What purpose does that serve?
B
The host group is mainly used for the VM-to-host affinity rules that we want to achieve. For example, you create the host group and you create the VM group, and the VM group is tied to the host group. Then, when we create the control plane nodes, we can spread them across different VM groups, and DRS will take each VM and place it onto the hosts in the corresponding host group.
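To make the mechanism concrete, here is a rough sketch, using govmomi's vim25/types, of the DRS objects involved: a host group, a VM group, and a VM-host affinity rule binding the two. All names are illustrative, the function only builds the spec (applying it to the compute cluster is a separate reconfigure call against vCenter), and this is not code from the proposal.

```go
package main

import (
	"fmt"

	"github.com/vmware/govmomi/vim25/types"
)

// affinitySpec builds a DRS reconfiguration spec of the shape June described:
// a host group, a VM group, and a VM-host rule tying the VM group to the
// host group, so DRS keeps those VMs on those hosts.
func affinitySpec(hosts []types.ManagedObjectReference) *types.ClusterConfigSpecEx {
	return &types.ClusterConfigSpecEx{
		GroupSpec: []types.ClusterGroupSpec{
			{
				ArrayUpdateSpec: types.ArrayUpdateSpec{Operation: types.ArrayUpdateOperationAdd},
				Info: &types.ClusterHostGroup{
					ClusterGroupInfo: types.ClusterGroupInfo{Name: "zone-a-hosts"},
					Host:             hosts,
				},
			},
			{
				ArrayUpdateSpec: types.ArrayUpdateSpec{Operation: types.ArrayUpdateOperationAdd},
				Info: &types.ClusterVmGroup{
					ClusterGroupInfo: types.ClusterGroupInfo{Name: "cluster1-zone-a-vms"},
				},
			},
		},
		RulesSpec: []types.ClusterRuleSpec{
			{
				ArrayUpdateSpec: types.ArrayUpdateSpec{Operation: types.ArrayUpdateOperationAdd},
				Info: &types.ClusterVmHostRuleInfo{
					ClusterRuleInfo:     types.ClusterRuleInfo{Name: "cluster1-zone-a-affinity", Enabled: types.NewBool(true)},
					VmGroupName:         "cluster1-zone-a-vms",
					AffineHostGroupName: "zone-a-hosts",
				},
			},
		},
	}
}

func main() {
	// In practice the spec would be passed to the cluster's
	// ReconfigureComputeResource_Task operation; here we just print it.
	fmt.Printf("%+v\n", affinitySpec(nil))
}
```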
C
I
see
okay,
okay,
so
it
I
mean
this
problem
seems
to
be
really
the
fact
that
we
have
this
inability
to
tag
a
host
group,
because
if
we
could
tag
a
host
group,
it
would
solve
everything
okay.
So
if
we
can
tag
hosts,
let's
say
we
accept
the
annoyance
of
having
to
manually
tag
hosts
in
a
host
group
and
and
provide
that
tagging
mechanism.
C
If we were to do that, and let's say we have rack one and we tag a bunch of hosts with rack one, would we still need to create the host group for CAPV to be able to create the DRS rules, or would we still need a host group in order for DRS to understand that?
E
I would like to add something there. I think there are two ways. Prior to 7.0 there was no compute policy, and after 7.0 there is a compute policy. With a compute policy, you tag some hosts, you tag some VMs, and you define a compute policy that says these tags are affine, and that's how it works. But prior to 7.0, you can create a host group.
E
You can create a VM group and then create an affinity or anti-affinity, must or should, rule that puts all the VMs in that VM group onto the hosts in one of the host groups. So there could be different ways.
E
Yes. And I don't know whether compute policy was introduced in 6.7 U3; that can be found out, I'm not sure, but I definitely know it is in 7.0.
C
Okay, my unease with the idea of having CAPV manipulate vSphere is that, for a vSphere admin to be giving CAPV... well, that's not true, actually, let me step back on that: CAPV is going to be creating VMs, so it's already manipulating vSphere and it's already going to need vSphere credentials in order to be able to do that.
C
Well then, how about another approach: we say that if you want to take advantage of the host group scenario, the VC admin needs to set up a host group ahead of time and then pass a reference to that host group to CAPV. That would mean the host group would be an additional, explicit fault domain that we provide, rather than being a placement constraint.
F
What I see here is: anything that is internal to vSphere, why do we need to expose it outside? Can't we do it internally, like we do for pod affinity and that kind of thing in Kubernetes, just against the host? You can add labels to the hosts, that is, the nodes.
C
That would be my preference, Prakash, but June was saying that he believes that when it comes to creating affinity rules for where the VMs get deployed, that's not enough and we do need host groups as well. I don't fully understand that point, June, so maybe you can expand on why tags aren't sufficient to be able to create those affinity rules: why can't I tag a bunch of hosts with rack one and then create affinity rules that say all these control plane VMs must be deployed onto any host tagged with rack one?
B
I can't say for certain, but based on the experiments I have done so far and also the documentation I've gone through so far, the affinity rules are only applied to host groups and VM groups; they're not tied to a specific host.
C
I think we should get someone from the DRS team to double-check that, because the reason I wanted to dig in on that is that it really feels like an important point.
C
If there were a way that we could make tags on hosts work with this, then it would solve all our problems and we would be able to move on, because I take Prakash's point: my preference is always for loose coupling rather than tight coupling. Any time we're exposing specific knowledge of vSphere inventory items or vSphere organization, we inherently make the thing fragile, because the moment something changes, our configurations are broken.
C
Tags give us this nice loose coupling from one to the other. So maybe we could take that as an offline thing: I can reach out to some folks I know on the DRS team and just double-check with them on that point. Would that be okay?
C
Yeah, looking into it, I think you have to take all three things into consideration, right: you've got to have shared storage, you've got to have the networking, and you've got to have compute in the same place. And I guess a failure domain, by definition, should have all of those three things.
D
Unfortunately, because of that, the host groups would be necessary. The other option, though, is that if you want to use failure domains, the user used for CAPV has to have the permission to create host groups; if you're not going to use failure domains, then it doesn't need that. But in any case it's going to need permissions on DRS in order to create the VM groups anyway.
D
So with that being the case, I personally don't see the real difference between giving the ability to create a VM group or a host group, because a host can be part of multiple host groups. It's not like we then limit the ability of the user, or the vSphere admin, to change other configurations or run other workloads on that same cluster as they see fit.
A
Thanks. I can't raise hands, so I'll use my host rights. We've got some similar issues in the AWS provider around least-privilege permissions. You see it more in the AWS environment because there's a strong emphasis on very tight IAM least-privilege permissions across the board. We're thinking of moving to a model where you've got to be absolutely explicit about everything: we're going to have separate CRDs for pretty much every construct, and in that way users can fine-tune the permissions that they grant to Cluster API AWS. I think there's a similar case here: if we are going to do anything around dynamically creating host groups, I don't think it should be something that's done implicitly.
C
I think that's a great observation, Nadir, and it actually ties in with a suggestion I made on the document: that we have a CRD that binds the fault domain with any placement constraints that you have, and that that be defined as its own CRD, something that you can configure and reuse.
C
The nice thing about that being its own CRD is that we can apply distinct validation to it and surface that validation in the status, to say: this deployment zone that you've defined is valid or isn't valid, you have permissions or you don't have permissions, or whatever else we might want to put in the status.
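As a rough illustration of the kind of CRD being proposed, the sketch below shows one possible shape for a deployment zone object that binds an admin-defined failure domain to placement constraints and carries validation status. The type and field names are hypothetical, mirroring what was said in the discussion rather than any agreed API.

```go
package sketch

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// VSphereDeploymentZone is a hypothetical sketch of the CRD discussed above:
// it binds one failure domain, as configured by the vSphere admin, to the
// placement constraints CAPV should use inside it.
type VSphereDeploymentZone struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   VSphereDeploymentZoneSpec   `json:"spec,omitempty"`
	Status VSphereDeploymentZoneStatus `json:"status,omitempty"`
}

type VSphereDeploymentZoneSpec struct {
	// FailureDomain names the admin-defined failure domain, for example a
	// tagged host group used as a zone.
	FailureDomain string `json:"failureDomain"`

	// ControlPlane marks whether control plane machines may be placed here.
	ControlPlane *bool `json:"controlPlane,omitempty"`

	// PlacementConstraints carries the vSphere placement inputs
	// (resource pool, datastore, folder) associated with this zone.
	PlacementConstraints []PlacementConstraint `json:"placementConstraints,omitempty"`
}

type PlacementConstraint struct {
	ResourcePool string `json:"resourcePool,omitempty"`
	Datastore    string `json:"datastore,omitempty"`
	Folder       string `json:"folder,omitempty"`
}

type VSphereDeploymentZoneStatus struct {
	// Ready reports whether the referenced failure domain was validated
	// against the vCenter inventory and the credentials in use.
	Ready  *bool  `json:"ready,omitempty"`
	Reason string `json:"reason,omitempty"`
}
```

A separate controller (or the CAPV manager, per the later discussion) could reconcile these objects, validate them against vCenter, and set the status fields that the validation idea above calls for.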
C
So I really like that idea, actually: defining a host group as a CRD, which would itself be a fault domain, and being able to validate that and use specific permissions around being able to create and maintain it. I like that a lot. And the other reason I really like it is because it opens the door to paravirtualization, right. It opens the door to the VC admin creating a host group themselves and then the host group CRD just popping up and appearing in the cluster, if we get to a point where we can do that as part of the vSphere admin's tasks. I mean, this is what we do in Pacific already, right: the things you do in vSphere just appear in the supervisor cluster. I think that would actually be great, so I'm a big thumbs up for that idea. Thank you.
A
Cool. Tasha?
E
Yeah, I also like that idea. The only thing I want to ask is: who is going to reconcile those CRDs? If there's a CRD for a host group, who validates or creates it? Is it the CAPV manager, or will we have a separate controller, something like a vSphere configurator or a failure domain configurator?
A
I'm not sure; it could be either, really. Right now in Cluster API AWS everything's in one... well, all the EC2 bits are in one controller, and we do have separate controllers for Elastic Kubernetes Service.
A
It comes down to what people are allowed to do on that Kubernetes cluster in terms of resource creation, or we can enable things via feature flags as well: we have a feature flag mechanism, which Cluster API models on the Kubernetes feature flags, that allows us to turn certain things off and on.
F
So there will be a separate control loop to manage it. As far as how it interacts with the vSphere controller, I have no idea, but I think the answer to what Agrawal asked about how the watch loop will work is that it will be a separate control loop based on the host group API extension. So, in part, I agree with Ben that this is the best way to go about it, because there are proven mechanisms to apply it. It's just a question of whether the control loops are mutually exclusive; sure, they are mutually exclusive as far as controllers are concerned, and you can have a controller of controllers manage those controllers, so I don't see any reason this is not deployable. But how does it deal with DRS? That is something we have to look at.
C
Yeah, I think we're going to be going more and more down this path over time. The VM operator, for example, is a classic example of a controller that runs in the management cluster, knows how to interact with vSphere, and knows how to deploy VMs on vSphere. What we're describing right now, something that actually knows how to interact with and manage some of the way in which vSphere assets are organized, just feels like a very natural extension of what we've already discussed.
C
In fact, it might make sense to aggregate it with the VM operator at some point, but regardless, I think this model, where we disaggregate the vSphere topology from the CAPV configuration and are able to separately validate the CRDs that represent vSphere assets, is a nice model.
C
I do have one question, though. June, you mentioned VM groups; we've talked about host groups. Can you talk briefly about VM groups and whether you think we might need those to be a similar construct?
B
The main point of the VM groups is the DRS affinity and anti-affinity rules that are applied to the VM group. What we plan to do is have a VM group which is tied to a host group; that means all the VMs that you put into this group should be deployed onto the hosts within that host group. Then, when we launch the control plane nodes, we just need to separate our three control plane machines into three different VM groups, so that DRS can spread them out onto different hosts.
C
Okay, yeah, that does make sense. But it sounds to me like it doesn't make any sense for that to be something we define ahead of time, because, frankly, we don't know which VMs are going to be part of the VM group ahead of time. So that's something it makes sense for CAPV to be able to do dynamically.
E
I have done this before in Cloud Foundry, in the vSphere CPI. What we did was we got the host groups from the admin, and then we created VM groups named host group plus cluster name, and those were lifecycled with the cluster.
E
So let's say there are three host groups and all three are being used as fault domains; then we'll have three VM groups named host group plus cluster name, all the VMs for the relevant fault domain will be added to the corresponding VM group, and we'll create a VM affinity and anti-affinity rule. The VM groups and the rules are lifecycled with the cluster, but the host groups stay on. That's how it was.
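As a tiny illustration of that lifecycle and naming convention, here is a sketch of deriving the per-cluster VM group for each admin-owned host group and spreading the control plane machines across them. The naming scheme, host group names, and cluster name are purely illustrative, not the proposal's final choice.

```go
package main

import "fmt"

// vmGroupName derives the per-cluster VM group for a fault domain, following
// the "host group plus cluster name" convention described above. These VM
// groups and their affinity rules live and die with the workload cluster,
// while the admin-owned host groups stay behind.
func vmGroupName(hostGroup, clusterName string) string {
	return fmt.Sprintf("%s-%s", hostGroup, clusterName)
}

func main() {
	hostGroups := []string{"rack-1", "rack-2", "rack-3"} // illustrative fault domains
	// Spread three control plane machines across the fault domains,
	// one VM group per host group.
	for i := 0; i < 3; i++ {
		hg := hostGroups[i%len(hostGroups)]
		fmt.Printf("control-plane-%d -> host group %s, VM group %s\n",
			i, hg, vmGroupName(hg, "cluster1"))
	}
}
```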
E
So CAPV should be able to create VM groups, and it should be able to create the VM group to host group affinity rules, but the host group creation permissions are not needed. I think that's something which can be fine-tuned in the VMware permission and privilege model; it can even be fine-tuned at the datacenter or cluster level. So actually a very fine-tuned user can be created by the VI admin that restricts CAPV carefully to what it needs to do.
C
Yeah, it feels intuitive to me that CAPV would be able to create VM groups and create rules around the VM groups, in a way that isn't intuitive for host groups, just because that seems to be a little bit off limits. So, to me, it makes sense that CAPV should be able to do that.
F
When the scheduler does its ranking of where it wants to place the VM, it should be able to connect the taints and the tolerations to deliver what it needs. The two are separate: node affinities is one thing and tolerations is the other, so the node part of it and the VM part of it are the two keys. How you group them together, and where you group them together, is what you have to tell the scheduler so it can identify that and then deliver. I think that's my understanding.
D
Yeah, I think the permissions for the VM groups are also a key thing for CAPV, possibly in the future, whether it's with a MachineDeployment or KCP or anything of the sort, to be able to set up anti-affinity rules regardless of failure domains, simply to have VMs not run on the same host. In and of itself, being able to create that VM group allows us to create anti-affinity rules for the VMs.
B
Yeah, I think we're on the same page for this now. So the next topic: Ben, you have suggested this vSphere deployment zone concept, where we combine each failure domain with one placement constraint.
B
That model works well for one CAPV instance managing one Kubernetes cluster, because for a single Kubernetes cluster you want to have all the nodes spread across the different resource groups. But from a different perspective, right now the CAPV model is that one CAPV instance can manage multiple different Kubernetes clusters, and it is possible that the different Kubernetes clusters could...
C
Just to be clear, June, what I suggested is that you can associate as many placement constraints as you want with a fault domain. Would that solve the problem that you're describing? I'm not 100% sure.
B
Well, that's another option. That means all the placement constraints need to be predefined, right? From one CAPV instance you need to predefine all the placement constraints and failure domains, and at the vSphere cluster level you just do the mapping; the mapping is not to a specific failure domain but to another level of abstraction, right? This is the actual failure domain, and this is the placement constraint.
C
You can tell me where the disconnect is, because I'm not 100% sure I'm understanding. My thought was: we have this CRD that's a binding between the concept of the fault domain and potentially multiple placement constraints, and then, once we have that, we can specify a reference to the CRD when we deploy a cluster in CAPV. The reference to the CRD would basically just be a string when we deploy the cluster; this is going to be v1alpha4.
C
So we can make changes in terms of being able to pass context with a region. I don't remember 100% of the types in Cluster API, but basically, as it stands at the moment, you can pass a region, but you can't pass any context with the region. So if, in v1alpha4, we were able to pass some context with a region, then we could have the machine part of CAPV unpack that CRD and get all the placement constraints out.
C
In addition to the fault domain, that is; that's my understanding. So can you tell me, in the context where we want multiple clusters: if we have multiple of these deployment zone CRDs, then one cluster and another cluster can each pick a different deployment zone CRD. Where is the disconnect there?
B
I think I understand your proposal. The only point I'm thinking about is the user's input perspective. For example, right now, if a user wants to create one cluster within failure domain one with placement constraint one, the input they provide on the vSphere cluster is no longer the resource pool and datastore that users are usually familiar with, right?
C
Because, as it stands, my observation is that we're already mixing up fault domains with placement constraints in the way our API is designed, and I would prefer that if we're going to be explicit about separating those, which I think makes a lot of sense, then let's commit to that.
B
Yeah, that's a good point. I think we just need agreement from the community, because this will be kind of a big change from the API perspective; it's quite different from the previous behavior.
C
I mean, we can continue to support the old behavior if you're not using regions and zones for a period of releases, and then make it clear when eventually it'll be deprecated. Yeah, sure.
A
Yeah, to your point, we'll need to add a section to this about upgrades, basically: for people who are upgrading from one version of CAPV to this new version, what do we do around adoption from v1alpha3?
B
Okay,
I
think
that's
all
the
two
two
of
the
major
questions
that
I
I
want
to
bring
up
those
the
following
are
ones
are
just
the
implementation
in
details,
so
I
don't
want
to
take
much
time
in
the
community
thanks
a
lot
guys.
I
think
you
know.
C
I just want to say I'm really delighted with how we've come together on this, because I think it's a difficult problem and I really feel like we're getting close to a well-defined solution. So thank you, June, for all the work that you've put into defining this. I'm going to set aside some time to continue to work with you to help crisp it up, and then hopefully we'll have something that we can formally review. Thank you.
A
Yeah, thanks a lot. The next one is from me; it's actually on behalf of Yaseen. We're going to start adding a roadmap for CAPV. Obviously, what we do around failure domains is going to be an important part of it. There are also some other deprecations coming: right now CAPV directly deploys the CSI and CPI, and that's going to be taken out in favor of using ClusterResourceSets, or whatever Cluster API does around add-ons management.
A
So please take a look, and if you've got any major concerns, leave a comment on that PR. The other important thing is that the HAProxy load balancer is going away completely in v1alpha4 as we standardize on the kube-vip approach. So yeah, just take a look. And that's it for the agenda. Does anyone have anything else they want to raise, or shall we call it?