A
Hello, today we're having a special meeting of SIG Cluster Lifecycle for the Cluster API project, specifically around additional work for the load balancer provider proposal. Today is November 2nd and, as usual, this meeting will be recorded and we abide by all the community guidelines. So in general, please be excellent to one another. All right.
A
All right, so basically it's a follow-up from the meeting that we had last week, where we went through and started to sketch out the goals and non-goals. We had some action items from that.
B
A new machine service will mirror that into a Service object, minus the selector, and then it will reconcile and create Endpoints in the background as machines come up and down, based on the machine service selector versus the server selector. From that point on, everything should be pretty much normal Kubernetes services, so existing load balancers look at the Service and Endpoints and don't even need to know that the machine service created them. And then the only other change that would be required would be in Cluster API: to add a reference in the control plane endpoint to a Service.
B
So you point the control plane endpoint to a Service. It doesn't even need to know, again, that it's a machine service in the background, and it should pick up the load balancer IP from there.
B
That's pretty much it. So I've created the machine service CRD, and then the corresponding Service and Cluster CRD at the bottom.
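A hedged sketch of what the proposed objects might look like. The MachineService group, version, and field names here are illustrative assumptions based on the discussion, not the actual CRD from the proposal:

```yaml
# Hypothetical MachineService: selects Machines rather than Pods.
apiVersion: cluster.x-k8s.io/v1alpha4   # illustrative group/version
kind: MachineService
metadata:
  name: controlplane-lb
spec:
  # Machines matching this selector become Endpoints as they come and go.
  machineSelector:
    matchLabels:
      cluster.x-k8s.io/control-plane: ""
  ports:
    - port: 6443
      targetPort: 6443
---
# The Service the controller would mirror it into: same ports, minus any
# pod selector, so the built-in endpoints controller leaves it alone.
apiVersion: v1
kind: Service
metadata:
  name: controlplane-lb
spec:
  type: LoadBalancer
  ports:
    - port: 6443
      targetPort: 6443
```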
C
Yeah, I have a question, well, about two questions. The first one is just to confirm that this service is like a headless service without the selector, and the second thing would be: are we supposed to use Endpoints or EndpointSlices? I don't know what's the latest recommendation from SIG Network on this.
B
So this would be not a headless service but an empty service, and then, whether it's Endpoints or EndpointSlices, I think it would be whatever is required. I think both, or probably both, makes more sense.
D
Yeah, I was thinking, I was discussing with the guys from the Metal3 side about the whole load balancing thing, and we were discussing: if we're running this as a service, can we run those in the control planes? Because we would like to handle the whole load balancing in our control planes.
D
We don't want to use an external server to run our load balancer in front of our control planes. Are we able to do this with this solution?
B
So I think the machine service would be deployed in the management cluster, essentially, because that's where you need the control plane endpoint from, but there's no reason why you can't deploy that inside the workload cluster as well, and you get two load balancers over the same set of endpoints underneath.
D
Yeah, because currently what we are doing, at least in our products, is that when we deploy these workload clusters, we delete our management clusters. So we create those workload clusters and then we don't have this management cluster anymore. So is there any way to even get this, or do you have any idea how we could handle that? Obviously we need to find a way to do that, but could this be done through...
D
Yes, we pivot everything from the management cluster into the workload cluster, and then the connection between those ones is gone.
D
Yeah, exactly, yeah. I was thinking... yeah, thanks. This helped a lot.
E
Thanks John. Just to clarify: that's a standard pattern where you take your bootstrap cluster, you pivot everything into the workload cluster, and then you delete the bootstrap cluster, and you're left with just one cluster that is self-managing, yeah. So it would own all of its own resources, and were you to decide to create other clusters from it, you could, if you wanted to. So the question I had for Moshe is: is the service in here just a standard Service of type LoadBalancer?
B
Exactly
that,
so
then
your
your
cloud
provider
or
I
know
most,
the
load
balancers-
are
going
auditory
now
as
well.
So
I
know
the
aws
ald
yeah
is
a
separate
controller.
Cubevip
would
be
a
separate
controller.
Nsxt
load
balancer,
I
think,
is
now
bundled
into
the
vsphere
cloud
provider
so
yeah.
So
the
idea
is
to
leverage
off
the
existing
cloud
providers
and
not
reinvent
the
wheel.
E
Okay, so I know, or I think I know, for VMC, and this is more for Justin and Nadir, that because it's not normal EC2 instances in the sense of normal EC2, we can't put the VMs directly as backends for the load balancer. Is that right?
E
I looked at both Endpoints and EndpointSlice, and I think we have space for either the hostname or the IP address, or maybe both. I just want to make sure that if we proceed with this, and I really like this idea, we aren't missing out on some functionality because of weirdness between VMC and, you know, the backing VMs.
F
Yeah, I'll need to check. So I think that AWS, the external load balancer provider, so the AWS ingress controller, has been renamed the AWS Load Balancer Controller. I think there's a CRD construct in it called target group, and then I think that allows you to specify it. So there might be something interesting there, where it might not be replicable completely with just a Service; there might be some additional CRD that we have to use, which might be interesting. I'll need to go and check.
F
Oh
yeah,
I
just
want
to
just
off
jan
in
piglet,
so
are
we
or
everyone
actually
are
we
we're
not
excluding
in
this
the
idea
of
continuing
to
use
a
keyboard
where
everything
is
internal
to
that
control,
plane
right
with
or
and
if
we're
not
should
we
have
a
statement
to
that
threat.
C
Okay, so just to roll back to VMC: from the testing that I did earlier, because I had a PR that added ELB support for VMC, the only solution that I saw was to actually add targets as IPs, because VMs aren't mirrored as EC2 instances.
C
That's the first thing. The second thing would be: if we were using the Services and Endpoints of Kubernetes, I think we would be facing maybe a chicken-and-egg problem, because you'd need the API server to create the services and the endpoints, but you'd still need the endpoint to create the services.
A
I
think
my
concern,
I
I
think
this
looks
good
for
an
implementation.
I
don't
know
if
it
will
meet
all
the
requirements
that
we
have
based
on
kind
of
some
of
the
other
questions
that
have
come
up,
but
in
particular
I
think,
there's
a
limitation
where
you
can
only
have
one
type
of
a
load:
balancer
provider
installed
in
a
cluster.
A
So if we start creating multiple Services of type LoadBalancer in a management cluster, and expect it to work with various different load balancer implementations for those services, I think we're going to run into issues with not being able to support more than just one installed load balancer implementation.
B
That's the one option. Then the other option is that you leave the implementation detail up to the implementer, so you can provide this as a default implementation, and then, if you want to go and build something yourself, your contract really is just: give us a Service with a status that we can consume. It doesn't have to be of type LoadBalancer; it can be of type anything, or in fact any object that follows the service status struct, and Cluster API can pick up those IPs, as the contract is the contract.
B
So
the
so
currently
there's
the
endpoints
controller,
which
is
going
to
watch
pods
to
create
endpoints.
So
we're
not
we're
not
going
to
use
that
we're
going
to
have
a
new
controller,
called
the
machine
service,
endpoint
control
or
the
machine
service
controller
that
watches
machines
and
creates
endpoints
out
of
machines.
B
Yeah, but from the perspective of a load balancer implementation based on endpoints, it doesn't need to know about the nodes. All it really needs to know about is fulfilling the contract of the service, which is based on endpoints and not nodes.
B
So
an
endpoint
is
is
just
from
my
understanding.
An
endpoint
is
a
reference
to
an
ip
or
a
host
name,
and
the
contract
of
a
service
is
that
there
are
endpoints
behind
the
service
and
that
consumers
of
service,
such
as
q,
proxy
or
cloud
providers,
will
look
at
the
end
points
to
go
and
provision
the
back
ends.
So
as
long
as
we
we
maintain
that
contract
of
service,
it
should.
E
Oh, I'll go if that's okay. So, from a bootstrapping perspective, I think that we're in a situation where we can't necessarily make this work today. So I think it's something that we can work towards, but if you look at, say, AWS, for example, you need to configure the AWS cloud provider, the in-tree one, although I guess Nadir, you said there were some other external things out there.
E
But
if
you
do
have
a
cloud
provider,
that's
only
entry
and
does
load
balancing,
then
you're
not
going
to
really
be
able
to
bootstrap
your
target
environment.
So
that's
one
concern
that
I
have.
One
question
I
also
want
to
bring
up
is
that
I
know
that
there's
work
that's
going
on
in
sig
network,
for
it's
either
service,
v2
or
ingress
v2,
where
there's
like
gateways
and
gateway
classes,
and
I
and
there's
discussion
in
the
issue
about
that,
and
I
wonder
if
we
potentially
would
want
to
rally
around
that.
E
Instead
of
the
way
things
are
right
now
and
then
the
last
comment
that
I
have,
which
is
basically,
I
think
we
could
do
everything
that's
being
proposed
here
and
like
not
necessarily
have
to
rely
on
the
existing
apis
like
services
and
endpoints.
If
we
just
wanted
to
make
some
progress
faster,
but
I
realize
that
there's
benefits
like
if
we
can
just
take
advantage
of
a
service
type
load
balancer.
We
don't
have
to
go.
Ask
anybody
to
write
new
code
to
set
this
stuff
up.
B
Yeah,
okay,
so
on
the
on
the
bootstrapping
side,
so
I
don't
think
we
would
get
rid
of
the
host
and
port
combo
and
control
plane
endpoint.
So
if
you
want
to
provide
that,
you
can
still
provide
that
all
we're
saying
is:
you
can
provide
host
port
or
a
service
reference.
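A hedged sketch of those two alternatives in a Cluster spec. The `controlPlaneEndpoint` host/port pair is the existing Cluster API field; the service reference shape below it is a hypothetical illustration of what the proposal describes, not an existing field:

```yaml
apiVersion: cluster.x-k8s.io/v1alpha3
kind: Cluster
metadata:
  name: my-cluster
spec:
  # Existing option: a static host/port pair.
  controlPlaneEndpoint:
    host: 203.0.113.10
    port: 6443
---
apiVersion: cluster.x-k8s.io/v1alpha3
kind: Cluster
metadata:
  name: my-cluster
spec:
  # Hypothetical alternative discussed here: reference a Service and
  # pick up the load balancer IP from its status.
  controlPlaneEndpointRef:
    apiVersion: v1
    kind: Service
    name: controlplane-lb
    namespace: default
```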
B
So that was my goal for getting NSX-T support, because the problem with building a dedicated NSX-T load balancer is: who's going to run that test environment? If we rely on the NSX-T load balancer that comes with the cloud provider, then that becomes the cloud provider's responsibility and we don't have to worry about testing it.
B
So from what I saw of Service v2, it's got nothing really to do with load balancers per se; it's more about a more configurable ingress than anything else. So it's a better ingress API, not so much a better service API.
F
Yeah,
I
just
wanted
to
respond
to
jason.
So
on
the
issue
of
what
happens,
if
you
can,
you
have
multiple
cpis.
There
is
a
kept
1959
which
will
make
mandatory
to
well.
It
will
be
behind
the
feature
gate
initially
for
121
and
maybe
beta
122,
but
I
will
allow
an
annotation
to
or
field
subject
discussion
to
swap
out
the
load,
balancer
provider,
implementation
and
some
cloud
providers
already
do
their
same
weight
load,
aws,
one.
So
right
now
the
cpi,
I
think,
there's
some
external
annotation.
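The field that KEP-1959 proposes is `spec.loadBalancerClass` on a Service, alpha behind a feature gate in Kubernetes 1.21, which selects which load balancer implementation handles that Service; the class name below is illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: controlplane-lb
spec:
  type: LoadBalancer
  # KEP-1959: selects the load balancer implementation for this Service,
  # so multiple implementations can coexist in one cluster.
  loadBalancerClass: example.com/custom-lb
  ports:
    - port: 6443
      targetPort: 6443
```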
A
All
right,
thanks
for
that
nadir,
so
it
seems
like
we
definitely
need
to.
A
Do
more
investigation
and
and
come
back
to
this
at
a
later
point,
I
I
think,
there's
still
some
questions
that
we
need
to
answer
and
since
we
only
have
about
four
minutes
left,
I
wanna
just
double
check.
Some
of
the
other
action
items
that
we
have.
One
of
the
items
that
I
had
for
upcoming
discussion
was
along
the
lines
of
the
idea.
Andy
brought
up
with
the
surface
v2
comment
in
there
and
we
can.
We
can
circle
back
around
to
that
later.
D
Yeah, sorry, I didn't deliver these use cases yet, because we are still discussing how we would like this to be, because we have certain problems in bare metal that we need to figure out. So we are still trying to figure out how to write these use cases.
D
I
I
I
promise
to
send
them
as
soon
as
possible,
tomorrow,
maybe
latest
on
on
wednesday,
because
there's
a
there's
a
couple
of
things
that
we've
been
discussing
so,
but
we
don't
have
much
time
to
discuss.
So
I
will
try
to
send
the
add
those
use
cases
in
here
and-
and
we
can
continue-
probably
maybe
set
a
new
meeting,
because
I
think
we
still
have
a
lot
of
things
to
go.
Go
through
with
this
yeah.
A
All right, sounds good. Thank you, yeah. I do agree that I think we have quite a bit more discussion to continue. Do we want to go ahead and try to schedule some for another week, in the same time slot then?
A
I
know
I
will
go
ahead
and
review
some
of
the
service
v2
stuff,
as
well
as
try
to
internalize
some
of
the
thoughts
that
moshe
put
down
here,
so
that
we
can
so
that
I
can
more
actively
contribute
in
the
conversation
next
week
and
I
suppose
everybody
else
will
be
doing
similar
all
right.
Anybody
else
have
any
questions,
comments
or
concerns
before
we
wrap
up
for
the
day.