A
Hello, and welcome to a special, one-off Cluster API meeting discussing the load balancer provider proposal. Just a reminder that this meeting is being recorded and will be posted to YouTube afterwards, and that we abide by the Kubernetes community code of conduct.
A
I know I didn't get time last week to do that. I assume we were all somewhat distracted by the events of last week, so I think we'll have to carry that over until next week. We did get a lot of the use cases and user stories filled out, at least for AWS, vSphere, and the Packet (now Equinix Metal) provider, and we do have the Metal³ ones.
A
Which brings us back to the point we were at last week. So — I see you have your hand raised.
B
Yeah, so, regarding an alternative implementation: I tried to see, at least, whether the current solution would be easy to implement.
B
The first thing I saw with the current proposal we have right now is that if you start from a kind/bootstrap cluster and you try to rely on this CPI, you eventually end up with a chicken-and-egg problem, because kind is all about Docker containers and the nodes aren't able to register. You're not even able to create the kind cluster and have the kind node marked as ready.
B
So I feel like we have two paths: it's either this one, or maybe we say, okay, we still keep the machine service. We have a selector on the machines, so we're able to select CAPI-based machines and get their IPs; through the selector we're able to specify ports and so on. So we're able to abstract, the way we want to, the fact that we want to create a load balancer and which back end we want.
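The machine-service shape B describes might be sketched roughly like this; every kind and field below is hypothetical, shown only to illustrate the selector-plus-ports idea, not an existing API:

```yaml
# Hypothetical "machine service": selects CAPI Machines by label and
# declares which ports the provider's load balancer should expose.
apiVersion: cluster.x-k8s.io/v1alpha4   # illustrative group/version
kind: MachineService                     # hypothetical kind
metadata:
  name: apiserver-lb
spec:
  selector:
    matchLabels:
      cluster.x-k8s.io/control-plane: ""  # pick the control-plane machines
  ports:
  - name: https
    port: 6443          # frontend port on the load balancer
    targetPort: 6443    # backend port on the selected machines
```

The selector abstracts *which* machines back the load balancer; the provider-specific controller would resolve their IPs and reconcile the actual cloud resource.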
B
The only thing that would be left is to actually enable providers — or at least we'd need providers to build, or at least provide, two images: one with the usual controller manager and one for the load balancer controller, so that other providers are able to pick and choose between the implementations.
C
Why couldn't that be deployed off-cluster? Just because I can't attach storage to my kind cluster doesn't mean I can't use my kind cluster with a vSphere provider to orchestrate an NSX-T environment.
B
I don't know — you would still need it, because the CPI isn't relying only on storage or load balancers. The main issue I see is with node identity, because you would need the provider ID; the node object wouldn't reflect the VM where kind is running. So if you have... well.
C
So
if
your
controller
is
started
with
cloud
provider,
not
equal
external,
so
you
don't
start
it
with
a
cloud
provider.
And
then
you
run
your
cloud
provider.
Your
nodes
are
not
going
to
be
cloud
provided
enabled,
but
everything
else
that
is
running
in
the
cloud
provider
should
run
so
your
your
node
reconcile
loops
will
fail,
but
there's
no
reason
why
your
load
balancer
reconcile
loops
shouldn't.
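The distinction C is drawing hinges on the kubelet's cloud-provider flag. A minimal kubeadm sketch of the two modes (file contents illustrative):

```yaml
# kubeadm fragment: this one flag decides whether nodes wait for a
# cloud provider. With `cloud-provider: external`, kubelet taints the
# node as uninitialized until a cloud controller manager clears it;
# omit the line and nodes register normally, while a separately
# deployed load balancer controller can still run its reconcile loops.
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: external   # drop this line for the mode C describes
```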
B
So, in-tree is still going away — this is going to be my last comment and then I'm going to hand over. The in-tree cloud providers are going away; Andrew filed for an exception for the kube bits that are still missing for 1.20, so the 1.21 timeline seems to still be holding. I wouldn't want to rely on something that will eventually be deprecated.
C
I'm not talking about the in-tree provider, but the way in which you configure the out-of-tree providers is still by running the kubelet with --cloud-provider=external. So if you don't turn on an external cloud provider and you leave your provisioned kind as is, the nodes will come up fine in kind. You deploy your cloud provider, and you're going to have a cluster with nodes that are not cloud-provider enabled, and a cluster that is not cloud-provider enabled, but that shouldn't stop your cloud provider controller from actually running.
D
Yeah, maybe someone's already covered this, but my bigger issue with this — about five minutes after our last meeting — was that the cloud provider contract is not really strong enough around running outside of the cluster, in particular if you're looking at public cloud. Also, trying to work out what environment it is running in, right? So for AWS, concretely: what AWS account? What VPC?
D
What
subnet
should
I
be
provisioning,
this
load
balancer
in
there's,
there's
no
way
of
explicitly
targeting
that
what
you
would
have
to
do
for
the
aws
provider
is
to
run
a
separate
incident
of
that
cpi
with
a
bunch
of
credentials,
make
sure
resources
are
created
to
the
cloud
providers
writing
in
a
very
specific
manner
in
as
far
as
tanks
go
now,
we've
had
a
multi-tenancy
model
for
plus
api
provider
aws,
which
kind
of
acted
a
bit
like
that
in
the
sense
that
you
would
create
a
new
separate
name
phrases
and
one
instance
of
kappa
in
each
of
those
namespaces.
D
Unfortunately,
one
in
practice
no
one's
been
able
to
set
that
up
correctly,
because
the
ux
is
horrendous
in
terms
of
like
explicitly
configuring,
some
aws
credentials
and
making
sure
it
goes
into
the
right
place,
and
we
are
moving
much
more
to
an
explicit
interface
where
you
are
able
to
specify
which
aws
account
I'm
going
to
provision
a
cluster
into,
and
that
model
is
completely
incompatible
with
the
cpi.
So
we
are
relying
on
if
we
go
along.
The
cpi
group
we're
relying
on
the
contract,
which
is
not
well
defined.
C
That's an external controller that you run, and yes, that external controller today doesn't handle multi-account very well, but there's no reason why that controller can't be made multi-account aware, by having specific subnet groups, VPCs, and whatever else needs to happen to provision a load balancer be specified as parameters to that controller. So it's not that we're a cloud provider; it's just that today many of the cloud providers bundle the load balancer provider, but that doesn't need to be the case.
B
Yeah, so the fact is that today, if we think about the implementations in the providers we have, most of them are bundled into the CPI. That's just the fact. If you take the vSphere one, if you take Azure — and AWS doesn't even have the out-of-tree support yet. So there's that.
B
That's my point: we can't pick and choose. The user experience across providers wouldn't be ideal, because we need something that is easy to reason about. Today, users are mainly used to two modes: one that is out-of-tree, and the other one is in-tree.
B
If
we
start
mixing
up
like
say:
okay
for
aws
we're
gonna
run
the
alv
provider
for
asia,
we're
gonna,
run
the
cpi
for
vsphere.
We're
gonna
run
the
cpi
that
might
create
confusion
for
the
user
and
yeah,
like
that's,
that's
a
point
and
like
if,
if
we
still
say
the
cpi
for
azure
and
vsphere,
for
example,
when
you
start
out
of
tree,
you
have
an
initial,
an
initialized
change
that
is
going
to
be
stuck
anyway
to
the
node
object.
B
So
your
node
would
still
be
marked
as
like,
failing
so
at
least
for
a
kind
cluster.
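The taint B is referring to is the one kubelet applies when started with --cloud-provider=external; it stays on the node until a cloud controller manager initializes it:

```yaml
# Taint placed on every node registered with --cloud-provider=external.
# A cloud-controller-manager removes it once it has set the provider ID
# and node addresses; with no CCM able to run (e.g. plain kind), it
# remains, and ordinary workloads never schedule onto the node.
taints:
- key: node.cloudprovider.kubernetes.io/uninitialized
  value: "true"
  effect: NoSchedule
```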
E
I mean, we could even go with a more generic model that keys off of endpoints if we want. But I think trying to use Service type LoadBalancer and a load balancer implementation — be it coming from a CPI or a standalone thing — I think we're just not at a point where we can have that consistently work across all the different providers we have right now. The chicken-and-egg problem, maybe we can solve a little bit. But I think the multi-tenancy issues, like we —
E
We
know
that
aws
doesn't
have
them
solved
right
now,
for
what
we're
trying
to
do
with
a
single
instance
provisioning
into
multiple
accounts.
So
again,
I
think
you
have
great
ideas,
but
I
just
think
the
timing
is
too
early
and
I
think
we
need
to
go
with
a
design
that
works
with
what
we
need
to
do
today.
E
So
it
would
be
something
we
talked
about
a
while
ago,
which
looks
very
similar
to
what
we
what
you
have
on
paper,
but
instead
of
using,
instead
of
doing
service
type
load,
balancer
and
relying
on
the
some
controller
like
from
the
cloud
provider
or
a
standalone
one,
we
would
be
providing
the
load
balancer
implementations
from
the
various
providers.
So
whether
the
data
model
is
you
have
machine
service
and
service
and
endpoints
or
some
slight
variation
like
I
don't
think
we
need
a
service
per
se.
E
I
think
we
can
get
by
with
call
it
machine
service,
call
it
machine,
load
balancer.
That
would
be
one
entity.
We
probably
would
want
to
have
end
points
or
something
similar
to
endpoints
that
could
keep
track,
of
which
machines
are
in
service
for
a
load,
a
machine
based
load,
balance
or
machine
service
at
any
given
time.
But
I
think
the
the
thing
that
actually
goes
out
to
vsphere
or
aws
or
azure
or
whatever
and
says,
make
a
load
balancer.
E
That
would
be
custom
code
that
the
various
cluster
api
providers
write,
and
I
think
we
would
probably
follow
a
similar
model
to
what
we
have
today.
With
object
references,
so
you
could
have
a
machine
load,
balancer
type
and
it
could
have
an
infrastructure
reference
that
points
to
an
aws,
nlb
load,
balancer
or
a
vsphere
nsxt
load,
balancer
or
whatever,
and
the
you
know
the
things
would
work
together
where
we
have
the
generic
portion
and
cluster
api
core
and
the
infrastructure
specific
portion
in
each
infra
provider.
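A sketch of what this object-reference model could look like; none of these kinds or fields exist — the names are purely illustrative of the infrastructureRef pattern Cluster API already uses for Machines and Clusters:

```yaml
# Hypothetical generic type in Cluster API core.
apiVersion: cluster.x-k8s.io/v1alpha4        # illustrative group/version
kind: MachineLoadBalancer                     # hypothetical kind
metadata:
  name: control-plane-lb
spec:
  selector:                                   # which Machines back the LB
    matchLabels:
      cluster.x-k8s.io/control-plane: ""
  infrastructureRef:                          # delegates to a provider type
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
    kind: AWSNLBLoadBalancer                  # hypothetical infra implementation
    name: control-plane-lb
```

Core Cluster API would reconcile the generic half (selection, ownership), while the referenced infra object carries the cloud-specific spec and status.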
C
So
if
we
make
the
contract
for
machine
load
balancer
the
same
as
service,
so
it
must
return
on
status
ingress
host
ip.
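The Service v1 contract C is pointing at is the `status.loadBalancer.ingress` field that every Service of type LoadBalancer reports once an implementation provisions it, for example:

```yaml
# What a fulfilled Service of type LoadBalancer reports back.
apiVersion: v1
kind: Service
metadata:
  name: apiserver
spec:
  type: LoadBalancer
  ports:
  - port: 6443
status:
  loadBalancer:
    ingress:
    - ip: 203.0.113.10      # or `hostname:` on providers such as AWS ELB
```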
C
So
that's
my
question
because
if,
if,
if
so,
I
know
you're
probably
going
to
say
you
can't
make
that
commitment,
but
this
is
kind
of
the
problem
is
what
you're
proposing
is
that
everybody
does
it
themselves,
but
we
know
that
everybody
doesn't
do
it
themselves.
That's
kind
of
one
of
the
issues.
E
So
I
wasn't
going
to
say
exactly
that
I
can't
make
that
commitment,
but
that
I
would
go
back
and
talk
to
folks
and
and
circle
back
later.
Okay,.
B
So, with the generic model and the reference that we would have, technically Cluster API would be mostly responsible for, you know, setting up the owner reference to manage, say, the LB and so on. But ultimately, if you want to unblock things and you want to use a specific CPI implementation on your side, you can always do that if you build your own controller/CRD for the infra implementation. I'm not saying that we're going to support it, for example in CAPI, but that would be an option to, you know, make it work for the use case that we want to support, at least.
A
If we do reuse the Service type LoadBalancer controller, we're looking at, you know, which version of the controller is compatible with which version of Cluster API, which is compatible with which version of Kubernetes you may be running on, and things like that. Whereas with the design Andy's talked about, I think there's less of that, because we already have a contract between providers and core Cluster API, and we're not expected to support, say, a v1alpha3 Cluster API controller with a v1alpha4 provider controller, things like that.
C
Too,
so
on
on
that,
I
would
actually
disagree,
because
I
think
that
the
v1
service,
by
definition
of
being
v1,
that
contract
is
very
well
defined.
Any
load
balancer
that
wants
to
be
implemented
for
pods
must
meet
that
contract
and
as
a
v1
service,
it's
going
to
be
tested
across
any
all
the
compatibility
layers
and
it's
going
to
be
tested
across
all
the
cloud
providers
and
all
the
kubernetes
configurations,
so
you're
already
going
to
start
with
a
compatibility
matrix
and
a
matrix.
C
I don't see why you would need to test Cluster API against every Kubernetes cloud provider / load balancing implementation. You'd need to test it against the v1 Service contract; then you make sure that your cloud providers provide the v1 Service contract and test them against it — and that's already occurring all over the place, and we can build upon that rather than reinventing the wheel.
D
Yeah
yeah,
so
I
think
it's
all
really
well
saying
you
know
we
won
a
good
contract,
but
I
think
the
issue
is:
we
have
additional
requirements
outside
of
the
server
3-1
specification.
So
it's
probably
a
action
item
to
start
filling
out
those
functional
non-functional
requirements.
D
I
think
it
will
become
pretty
obvious
that
the
service
we
won
contract
is
not
really
broad
enough
to
cover
the
use
cases
that
we're
interested
in
dealing
with,
and
then
we
don't
gain.
We
actually
lose
out
because
it's
not
in
the
cloud
provider
implementation
of
interest
to
add
tests
for
things
which
are
outside
of
the
service.
We
want
specification,
whereas,
like
a
cluster
api,
implementer
we're
in
control
of
those
tests
granted,
we
we
need
to
be
a
lot
better
in
our
e2e
testing,
etc.
But
we
own
those
contracts.
C
But correct me if I'm wrong here: the machine load balancer type that we're going to be creating is not going to do any better than the Service type. All of the features that you would need would have to be implemented in the specific infra provider.
C
So
you
you
it
we're
talking
about
the
same
thing.
Really,
the
the
the
machine
load,
the
capi
core
is
going
to
test
against
a
single
reference,
implementation
of
the
machine
load,
balancer
and
then
every
infrared
load
balancer
is
going
to
have
to
create
its
own
test
and
its
own
matrix
and
and
and
do
all
of
that
itself.
So
I'm
still
not
seeing
how
that
how
that
played.
B
You
see
so
to
answer
that
question
like
if
we
own
the
infra
implementation
technically
clustering,
the
cluster
api
providers
slash
cluster
api
community
can
decide
to
extend
the
infrared,
the
infrastructure
types
to
enable
some
specific
use
cases.
Whether
if
we
go
with
the
service,
we
would
need
to
go
to
lens
to
either
change
something
to
the
core
types
or
implement
or
change
things
at
this
cpi
level.
B
So
yeah
like
I,
I
I
feel
like
that.
If
we,
if
there
are
implementations
that
are
available
at
the
cpi
level,
but
that
aren't
available
at
the
capi
level,
we
should
be
more
pursuing
the
path
of
trying
to
externalize
those
rather
than
like
going
into
the
cpi
to
get
them.
E
Hey
I
gotta
run
so
see
you
all
next
time.
Thanks
cheers.
C
So I agree with Andy that we write up another proposal, and then maybe we just handle each of these concerns in writing, on both proposals, and then have another discussion next week, and maybe a follow-up at the main CAPI meeting.
A
Yeah, I think rallying around the same document makes a lot of sense, and I think one of the things we may be looking at is: what do we do in the short term versus the long term? Because there are a lot of potential issues with trying to rely on the same Service implementations, and if we can enumerate those, we can look at what we can do to eliminate those problems upstream, and then potentially work towards a longer-term goal as well.
A
All right, great. So it sounds like I'll go ahead and schedule another meeting for next week, and we can regroup then. Cool, thank you.