From YouTube: 20201029 - Cluster API Provider vSphere Office Hours
A
Let me share my screen. Okay, hi everyone. Today is the 29th of October. This is the CAPV (Cluster API Provider vSphere) bi-weekly meeting. This is a Kubernetes subproject and it follows the Kubernetes code of conduct, which, in short, asks everyone to be nice to each other. I've sent the agenda link in the chat if you want to add your name to the agenda.
A
One of the recent changes is the ability to give kube-vip DNS entries: kube-vip is going to resolve the entry for you and use the IP for ARP broadcasting, and it's also going to detect if there's a DNS update, so it's going to pick up the new IP and use it for ARP.
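The DNS support described here might be wired up along these lines; this is only a sketch, and the hostname and resource names are illustrative, not taken from the meeting:

```yaml
# Sketch: point the control plane endpoint at a DNS name instead of a
# literal IP; kube-vip resolves it and re-resolves on DNS updates.
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: VSphereCluster
metadata:
  name: example-cluster
spec:
  controlPlaneEndpoint:
    host: cp.example.com   # DNS entry resolved by kube-vip
    port: 6443
```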
Another interesting bug fix is a PR made by John which updates the GARP (gratuitous ARP) packet format. Jun, do you want to speak about it?
B
Yeah, we discussed this a couple of weeks ago. During testing in one of the network environments, what we saw is that the GARP message was not received by the other node, and eventually we figured out that the GARP reply message was malformed: some of the fields were not aligned with the RFC definition. So we made a fix for that.
B
The other thing is that, at the same time, we added the alternative of sending both an ARP request and a reply, because some of the documentation we have been reading says a device may support either one of them; both are valid ways to send out this GARP broadcast, so either a reply or a request can be used by some other device. That's what we did.
A
Yeah, so this is already released: we cut a release and pushed the image upstream, so you're able to bump your template, and we're also going to bump the default versions on the flavors that we provide through GitHub for the next release of CAPV.
A
Okay, so for topics of discussion, I added the first item before we go on to iterating on the failure domain proposal. I started some work to remove deploying the CPI, the cloud provider integration, from CAPV. Ideally, now that we have the CSI driver deployed as a ClusterResourceSet add-on, we would want to move the CPI to use the same mechanism.
A
This would allow us to reduce the surface of CAPV and avoid having to update CAPV whenever there is an issue with the CPI. It also enables customization of the CPI if you have some specific configuration that you want to use. In terms of API, the APIs that enable you to deploy the cloud provider and the CSI storage driver are going to stay for v1alpha3, but in v1alpha4 we're planning on removing them.
A
So this basically means that in v1alpha4 you would need to migrate all of your existing templates to use ClusterResourceSets. Obviously the default templates are going to be updated, so you should be able to sync and diff them pretty quickly to apply the changes.
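The add-on mechanism discussed here could look roughly like this; a sketch only, with illustrative names and labels:

```yaml
# Sketch: deploying the CPI as a ClusterResourceSet add-on instead of
# baking it into CAPV. The manifests live in a ConfigMap that gets
# applied to every matching workload cluster.
apiVersion: addons.cluster.x-k8s.io/v1alpha3
kind: ClusterResourceSet
metadata:
  name: vsphere-cpi
spec:
  clusterSelector:
    matchLabels:
      cpi: vsphere                    # Clusters opt in via this label
  resources:
    - kind: ConfigMap
      name: vsphere-cpi-manifests     # holds the cloud-provider YAML
```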
C
Yeah, so I have a question about the recent kube-vip and external load balancer functionality for CAPV. I don't see any place where how to use or set them up is documented: not in the getting started guides or in any of the documentation whatsoever. Is that an acknowledged thing? Is it coming later, and are you just supposed to keep using HAProxy?
C
For
now,
like
a
section
that
says
like
when
you
pick
an
external
load,
balancer
flavor
like
it
doesn't
there's
no
reference
to
where
you
can
even
see
what
flavors
are
supported
or
allowed.
I
had
to
go
through
go
code
to
even
figure
out
its
coop
vip
and
external
whatever
that
even
is
so.
A
So there are actually two flavors at play here. There is the default one: if you don't specify a flavor, by default CAPV is going to create a kube-vip based control plane endpoint for you. The default template has, inside the KCP resource, a files entry with a static pod manifest which then gets deployed into /etc/kubernetes/manifests, so there you'll have your kube-vip static pod that is going to deploy the VIP for you. The other flavor, the external load balancer, is a flavor that enables users to have some specific controller that they wrote against CAPV. Basically, what happens there is that you deploy a normal CAPV cluster, but nothing is going to happen until your custom controller kicks in, lists the machines, and creates the load balancer for you. So this is just an escape hatch for anyone writing an integration against CAPV, and I agree, I think we can make this much clearer.
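The kube-vip wiring of the default flavor described here might look roughly like this; this is heavily trimmed and illustrative (image tag and args are assumptions), so see the generated default template for the real manifest:

```yaml
# Sketch: the KCP resource carries a static pod manifest that kubeadm
# writes to /etc/kubernetes/manifests on each control plane node, so
# the kubelet runs kube-vip before the API server VIP exists.
apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
kind: KubeadmControlPlane
metadata:
  name: example-cluster
spec:
  kubeadmConfigSpec:
    files:
      - path: /etc/kubernetes/manifests/kube-vip.yaml
        owner: root:root
        content: |
          apiVersion: v1
          kind: Pod
          metadata:
            name: kube-vip
            namespace: kube-system
          spec:
            containers:
              - name: kube-vip
                image: plndr/kube-vip:0.1.9   # version is illustrative
                args: ["start"]
```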
D
Are my headphones working? Yeah, it's not just that we don't have instructions for using this at all with the current templates; we still talk about HAProxy templates, etc., in the getting started guide. So that's all out there, and it needs to be updated.
A
Okay,
so
I
thought
that
I
at
some
point
I
removed
the
hc
proxy
bits
in
there,
so
there's
so.
If
you,
if
you
check
the
getting
started
on
on
master,
I
think
the
only
so
there
is
explicit
call
out
at
least
that
hp
proxy
is
deprecated
and,
like
you,
I
also
removed
like
references
from
to
the
aha
proxy
ova
from
there,
so
that
we're
at
least.
D
Yeah, so if you just go to cluster-api.sigs.k8s.io right now, it's still got the HAProxy template. And we're not saying things like "you need to configure a static IP for the VIP". So we need to improve our documentation game here, and I think, along those lines, we should create a website: the AWS and Azure providers now have their own websites, so I think we should do the right thing and do that for CAPV also.
A
That
sounds
good,
so
one
question
nadir.
So
are
you
talking
about
the
cluster
api
book
specifically
or.
D
Just because that quick start is the one people use; it's got all the information for all the different providers there.
A
Okay
can
can
like.
Can
you
chris
for
an
idea
file,
an
issue?
I
can
start
working
on
this
if
anyone
wants
to
help,
you
welcome.
A
Oh, by the way, since we're still talking about HAProxy: if anyone is still using HAProxy as a load balancer, I'm working on documentation to help folks migrate existing clusters that have HAProxy as a load balancer to a setup using kube-vip. I'm currently working on this, and I'd expect to have something in the middle of next week.
B
Yeah, I have another question, about DHCP for kube-vip. I know you previously discussed whether it's possible to get an IP from DHCP to use as the VIP for kube-vip. Would you be able to do that eventually for the infrastructure testing?
A
So
so
for
the
infrared
and
the
deer,
are
you
do
you
have
your
hands
up,
yeah,
okay,
so
so
for
the
for
the
in
for
the
for
this
before
the
ci,
we're
planning
on
using
a
like
the
metal
free
ipad.
Probably
there
we
have,
we
do
have
we
have.
We
do
have
ongoing
thinking
of
on
using
something
that
we
have
internally
also,
but
we
still
need
to
discuss
some
of
the
details
there
as
for
dhcp
id.
A
So
ideally
you
can
do
that
if
you're,
if
you
have
a
dummy
mac,
address
to
to
do
a
static,
dhcp
reservation,
but
yeah
that
would
require
at
least
a
manual
enter
on
the
dhcp
server
to
say,
okay
for
this
specific
mac
address,
I
want
you
to
assign
this
specific
id.
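The manual entry described here would look something like this on an ISC dhcpd server; the MAC and IP are placeholders:

```conf
# Static reservation keyed on a dummy MAC address, so the VIP is
# always handed the same IP by the DHCP server.
host capv-vip {
  hardware ethernet 00:50:56:00:00:01;
  fixed-address 192.168.1.100;
}
```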
B
Okay,
cool
yeah
in
terms
of
the
medic
medicine
ipam.
Actually
we
have
a
controller
which
integrated
it
with
a
medical
item,
so
we
are
using
that
to
do
the
like
static
id
allocation
for
our
own
system
and
we
just
opened
source
that
last
week.
So,
if
you
guys
are
interested,
probably
you
can
use
that
for
the
ci
as
well.
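For reference, a Metal3 IPAM pool of the kind such a controller allocates from looks roughly like this; the field names follow the ipam.metal3.io v1alpha1 API as best I can tell, and the ranges are examples:

```yaml
# Sketch: a pool of addresses the IPAM controller can hand out to
# management clusters.
apiVersion: ipam.metal3.io/v1alpha1
kind: IPPool
metadata:
  name: management-vips
spec:
  pools:
    - start: 192.168.1.100
      end: 192.168.1.120
  prefix: 24
  gateway: 192.168.1.1
```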
A
Yeah,
thank
you
like
it's.
It's
on
my
to-do
list
to
take
a
look
at
the
interesting
challenge
that
we're
gonna
see
is
that
we
for
ci.
We
need
a
standalone
ipad
that
is
not
going
to
like
it's.
We
need
an
item
that
is
going
to
run
on
in
on
an
infra
cluster
and
that
it's
not
going
to
be
specific
to
a
management
cluster,
so
it
needs
to
assign,
like
ips
to
all
of
the
management
clusters
that
we
we
create
out
of
prs,
for
example.
A
So
that's
that
that,
like
I
still
need
to
check
the
controller
and
do
some
testing
around
it
and
see
how
it's
gonna
go:
okay,
okay,
chris,
do
you
have
your
hands
up?
A
Okay,
I
think
we
can
move
to
the
failure
domain
proposal
now.
Okay,
so
last
last
meeting
I
wasn't
there,
so
I
I
I
took
a
look
at
the
recording
plus
the
proposal.
It
seems
like
we're
starting
to
converge
on
on
something.
So
I'm
simply
I'm
super
pleased
here
to
to
see
that
we're
we're
making
progress
so
june.
Do
you
want
to
share
the
latest
on
this.
B
Yeah,
maybe
you
can
scroll
down
to
the
eps
section
that
I
think
the
api
already
captured
most
of
the
discussion
that
we
have
so
the
the
discussion
initially
at
least
for
last
week.
What
we
have
is.
We
will
have
two
at
least
two
crds
one
in
the
four-figure
domain
and
one
is
for
the
deployment
zone.
B
Then,
in
the
end
like,
how
does
the
cluster,
like
figure
out
which
deployment
zone
to
use
that
is
within
the
cluster
spec,
will
provide
a
array
of
the
deploy
deployment
zone
names?
B
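Per this discussion, the two CRDs plus the name array could be sketched like this; the field names are illustrative and may well differ from the final proposal:

```yaml
# Sketch of the two CRDs under discussion.
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
kind: VSphereFailureDomain
metadata:
  name: zone-a
spec:
  type: HostGroup                      # or ComputeCluster, Datacenter
  inventoryPath: /dc0/host/cluster0    # parent of the host group
  region: region-1                     # possibly consumed by the CPI
  zone: zone-a
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
kind: VSphereDeploymentZone
metadata:
  name: zone-a
spec:
  failureDomain: zone-a                # VSphereFailureDomain to place into
# The cluster spec then selects zones by name, e.g.:
#   failureDomains: ["zone-a", "zone-b"]
```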
A
Okay, so I guess this includes the failure domains that we would want to use. The only thing I'm still unclear on is the VSphereFailureDomain spec.
A
So
here
we're
defining
types
of
like
fader
domains
and
based
on
the
failure
on
the
type
do
we
have
any
changes
to
the
other
fields
that
we
need
to
that
would
need
to
occur.
A
So I guess the other things here are more... are they informational, or do they have another role? Because if we think about it, at first it seemed like the type and the inventory path are what we would need. I'm still unclear on what zone and region would mean; are they for the CPI?
B
Yeah, the region and the zone, as of now at the API level, in the end we may not use this region information; that's what I'm thinking. Why do we need this type? The main reason is to support the host group type of failure domain, which is a little bit different. When you provide a host group, just for example, we use the host group name as the zone name. If you provided the zone name only to the failure domain,
B
We
don't
know
where
this
whole
group
belongs
to.
We
still
need
to
know
this
parent
computer
cluster
of
the
host
group,
and
that
is
the
reason
we
use
this
type
together
with
the
inventory
path
to
really
figure
out
like.
Where
is
this
computer
cluster?
Where
is
the
host
group
and
where,
where
should
we
place?
The
vm.
B
So the configuration right now is, if I remember correctly, within the VSphereCluster: there's a CPI configuration, and maybe we should extract those, the labels, like the tag categories, and move that information into the failure domain definition as well, so that all the failure-domain-related stuff is defined in a single place, not spread out into different structures.

A
Okay. So I think it depends on the experience that we want to create. Do we want to create a holistic experience for how we're defining zone and region for clusters in general? In that case, we might want to still have the name of the zone and the region. Or we can think about it in a way where it's defined properly, but I feel like it...
E
There may be cases where you can't... you know, it's better to provide the user the ability to set whatever it is they want than to try to have some convention that auto-generates them. I guess that's one argument for doing it this way. Is that right, Jun?
B
Yeah
yeah,
I
agree
so
this
another
purpose
to
have.
This
is
like
for
at
least
one
of
the
things
that
I
can
think
of
it
for
future.
When
we
really
have
a
separate
controller,
we
can
manage
the
fader
domain
definition
that
is
in
the
future.
Maybe
potentially
we
can
do
some
configuration
eventually
so
currently,
at
least
for
the
first
step
or
what
I'm
thinking
is.
E
Yeah
in
a
future
in
which
these
are
actually
defined
automatically
as
part
of
the
fabric,
you
know
the
kind
of
para
virtual
story
we
talked
about
where
you
know
you
define
something
in
vsphere
and
these
the
failure
domain.
These
crds
just
appear
again,
you
know
being
being
able
to
to
get
that
information
of
the
region.
The
zone
that's
been
defined
in
vsphere
out
into
this.
E
This
spec
kind
of
makes
sense
to
me
as
well,
but
the
only
question
I
have
about
this
struct
is
in
cases
where
you
have
string
fields
that
can
mean
different
things
in
different
contexts
or
you
know,
for
certain
contexts
can
be
optional.
It's
also
worth
considering
making
like
the
data
center
compute
cluster
and
host
group
structs
their
own
sort
of
strong,
more
strongly
typed
structs
and
then
having
a
sort
of
a
a
composite
struct
that
allows
you
to
to
specify
one
or
the
other.
E
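The more strongly typed, one-of composite struct suggested here could be sketched as follows, together with the kind of one-of check a validation webhook would enforce. All names are hypothetical, not the real CAPV API:

```go
package main

import (
	"errors"
	"fmt"
)

// ComputeClusterFailureDomain places machines across a compute cluster.
type ComputeClusterFailureDomain struct {
	Datacenter string
	Cluster    string
}

// HostGroupFailureDomain places machines on a host group, which also
// needs its parent compute cluster to be locatable.
type HostGroupFailureDomain struct {
	Datacenter string
	Cluster    string // parent compute cluster of the host group
	HostGroup  string
}

// FailureDomainTopology is the composite: exactly one placement kind
// may be set.
type FailureDomainTopology struct {
	ComputeCluster *ComputeClusterFailureDomain
	HostGroup      *HostGroupFailureDomain
}

// Validate enforces the one-of rule: setting one placement kind
// forbids the other, and at least one must be set.
func (t FailureDomainTopology) Validate() error {
	set := 0
	if t.ComputeCluster != nil {
		set++
	}
	if t.HostGroup != nil {
		set++
	}
	if set != 1 {
		return errors.New("exactly one placement kind must be set")
	}
	return nil
}

func main() {
	ok := FailureDomainTopology{HostGroup: &HostGroupFailureDomain{
		Datacenter: "dc0", Cluster: "cluster0", HostGroup: "zone-a-hosts",
	}}
	fmt.Println(ok.Validate())
	fmt.Println(FailureDomainTopology{}.Validate())
}
```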
A
I
can't
raise
my
hand,
sorry
so
so
I
think
that
it
makes
sense,
especially
that
we
can
always
have
you
know,
validation
on
top
of
it,
so
validation
working
to
to
say.
Okay,
if
you
set
this,
you
can't
set
the
other
two
and
same
thing.
A
So
I
had
I
had
another
question,
so
it
was
regarding
the
creation
of
host
groups.
So
as
like
as
as
we
know
today,
users
are
sometimes
what,
via
admin,
are
used
to
tag
things
at,
at
least
at
the
host
level.
To
you
know,
define
zone
and
region.
A
B
Yeah, the current plan is that we will differentiate the actions on the host group, the VM group, and the affinity rules, because for this to work we need three things: the host group, the VM group, and an affinity rule between them. During the last discussion,
B
the decision we reached is that the host group needs to be configured by the VI admin, and the VM group and affinity rules will be configured by CAPV, so that the host group still represents the failure domain, which we agree is still the admin's responsibility.
A
Okay, I see. The reason I'm asking is that I'm thinking about satisfying both cases. We'd have VI admins that wouldn't want to share extra privileges with CAPV, because the Kubernetes admin is not the VI admin; in that case this makes sense. But in cases where the VI admins are the ones that also deploy Kubernetes, it might make sense to allow CAPV to create the host group if it's not already pre-created. Yes, Naadir?
D
Yeah,
so
I
think
the
idea
we
discussed
last
week
and
was
mostly
ben's
ideas
that
we
would
have
it
might
be
a
function
of
vm
operator,
so
we'd
and
that
would
continuously
reconcile
would
be
center.
So
it
doesn't
matter
so
either.
The
bi
admin
creates,
creates
it
in
recenter
and
it
automatically
pops
up
as
something
that
we
can
consume,
or
you
can
privilege
the
operator
to
do
it
and
allow
the
person
with
the
kubernetes
api
access
to
be
able
to
create
those
resources
and
then
we'd
reference
them.
A
So another thing that I was thinking: since we're removing the CPI from the VSphereCluster spec in v1alpha4, we need to... hold on, let me go here and check CAPV.
A
So
it's
your
cluster,
so
I
think
some
something
that
some
things
might
go
away,
which
is,
namely
the
cpi
config.
So
things
like
you
know,
defining
how
the
how
the
cpi
is
configured
might
go
away.
So
we
need
to
ensure
that
we're
not
we're
moving
everything
properly
to
this
structure,
if,
if,
if
we
need
them
to
define
failure,
domains,
if
not,
then
they're
just
going
to
get
removed
in
v1
in
v1
alpha
4.,
that's
my
only
node.
A
So
how
do
y'all
feel
about
this?
Do
you
think
that
we
can
start
having
a
poc
as
soon
as
we
open
v1,
alpha
4
in
in
the
main
branch
ben
a
dear
chris,
a
thumbs
up
from
me.
E
Yeah,
I
think,
the
sooner
that
you
have
a
poc
the
sooner
we
can
flush
out
any
things
which
we
might
have
missed
here.
A
So
so
for
for
for
the
poc
june,
I'm
gonna
I'm
gonna
next
next
week
start
the
changes
for
v1
alpha
for
meaning
creating
the
types
and
whatnot
so
that
you
have
a
fresh
folder
for
v1,
alpha
4
to
start
on
and
in
the
same
time,
I'm
going
to
create
a
0
7
branch
so
that
we
can
still
release
these
zero
seven
patches.
A
Okay,
so
other.
A
Going
once
twice
three
times,
thanks
all
and
thanks
jun,
so
much
for
this
like
thank
you
for
iterating
on
the
dock
and
keeping
up
with
the
changes.
I
think
that
we're
we
really
have
something
that
we
can
build.
We
can
build
upon.
So
thank
you
so
much
for
the
effort.