A
Good morning, good afternoon, good evening to everybody. It's Wednesday, June 1st, in Pacific coast US time at least. Welcome to the weekly Cluster API office hours. We abide by the CNCF code of conduct, so please use the raised hand feature, be kind to one another, be empathetic, and let's have fun. So I will screen share the...
A
Okay, so here's our date. I'm gonna scroll up to the open proposals and we will... actually, it's the beginning of the meeting, so that means it's time for anyone who wants to introduce him or herself. Please feel free to unmute right now and say hi, and tell us who you are, if you like, who you work for, anything you'd like to add.
B
Hi there, my name is Max, I'm working with AWS. We are working on the new Cluster API provider for the infrastructure provider service called CloudStack; it's Apache CloudStack. We recently donated our repository, CAPC, to kubernetes-sigs over the weekend, and so I am looking to become a member so I can keep working on it. So thank you for the welcome, and as part of that I'll need some sponsors; I think maybe Fabrizio or someone else has worked with folks in the past. So yeah.
A
Thank you. Welcome, Max. Are you on Kubernetes Slack? You can do all those follow-up items in the cluster-api channel.
D
Yeah, yes, correct. Hello folks, my name is Amin, I'm from AWS and I work on open source projects for AWS. I'm very new to CAPA and CAPI, and I'm looking forward to learning more about it and doing a few contributions very soon.
A
Okay, we seem to have reached stable state; welcome, everyone. Thank you so much for speaking up in front of strangers. We are super excited to see newcomers every week.
A
Okay, great, so the first thing we want to do in this discussion is go through open proposals. I'll call them out real quickly and, if you'd like to add a brief status update, please do so. Please make it brief, so we can get to our agenda items. So the first one on the list is spot instances: proposal update with termination handler design. Anyone care to add any updates there?
A
All right: load balancer provider.
A
All right, going down the list: managed external etcd provider.
A
All right: Cluster API add-on orchestration proposal. I know Jonathan's going to give a demo later today. Please, Fabrizio, go ahead; sorry.
G
I think that we are near the finish line, so I suggest the authors resolve pending comments and address the discussion about extending this proposal to KCP, and then we have to decide whether to start immediately. The consensus, as suggested by Cecile, is to get lazy consensus with at least two LGTMs.
A
Okay, great, so hopefully everybody heard: lazy consensus is around the corner. So if you have a stake in the outcome, please go address the comments, add your two cents to that proposal.
A
Great. Jonathan, do you want to say anything quickly about the add-on orchestration proposal, or is your demo later sufficient?
G
This is just... so, first of all, I commented also on managed Kubernetes on CAPI. But talking about the Cluster API condition status update: this is basically a document that tries to summarize the state of conditions across all the Cluster API objects and start a discussion on whether there could be possible improvements to improve observability, etc. So it is not yet a proposal; it is an area that, for me, deserves some love, and so, if someone is interested, please take a look.
A
All right, here we are, Wednesday, first of June, so we are at the agenda part. So let's see; looks like the first is the CAPI add-on orchestrator demo. We've got two agenda items today, so it's a light meeting; maybe we'll get some time back. Jonathan, you want to take it over? Yeah. Can you give me screen sharing?
A
I'm so sorry, I've been... the only thing I see in my tab is "allow participants to share screen". I assume that the general...
H
All right, well, I'll give this a try and then, if it doesn't work, we can try to figure something out, but yeah. So today I'm giving a demo of a prototype for an add-on orchestrator for Cluster API. I've been working on this with Jack Francis, and it's a proof of concept of a proposal that Fabrizio wrote. So for a bit of context, there's been a lot of work and discussion on add-ons in Kubernetes, but we're trying to solve a pretty specific problem. So Cluster API lets you manage the lifecycle of any number of workload clusters by applying your CRDs to the management cluster, and often users also need other things, like packages on their workload clusters that give them functionality they need, like CNI. But right now, if you want to install packages, you have to go into your workload clusters and manually install them yourself, and if you need to update them, you need to go back in and update that too.
H
So first we have a cluster selector field here that defines a label key and value, and this selector matches any cluster with a matching label, key, and value. So any cluster that matches will have the Helm chart installed on it, and it'll be part of our reconciliation loop.
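For readers following along, a minimal sketch of what such a resource might look like is below; the kind, API group, and field names are assumptions based on the prototype as described in this demo, not a published API.

```yaml
# Illustrative sketch only; names and fields are assumed from the description above.
apiVersion: addons.cluster.x-k8s.io/v1alpha1    # assumed group/version
kind: HelmChartProxy
metadata:
  name: cloud-provider-azure
spec:
  clusterSelector:
    matchLabels:
      cloudProvider: azure                      # any Cluster carrying this label is selected
  repoURL: https://example.com/charts           # placeholder chart repository
  chartName: cloud-provider-azure
```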
H
We've also added support for Go templating in the values map, so you can fetch configurations from the cluster it's installed on. So, for example, if you selected a cluster with the cluster selector, this field will be populated with the name of the cluster, and the CPU limits here, as an example, would be populated with the control plane replicas plus 10. And this one's just an example, since you probably wouldn't actually use that. Can you... I think I need to share my terminal now.
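A hedged illustration of the templated values described above; the valuesTemplate field name, the template variables, and the add function are all assumptions made for the sake of the example.

```yaml
# Sketch only: field name, variables, and functions are assumed, not a confirmed API.
spec:
  valuesTemplate: |
    cloudControllerManager:
      clusterName: "{{ .Cluster.metadata.name }}"          # filled with the selected cluster's name
      resources:
        limits:
          cpu: "{{ add .ControlPlane.spec.replicas 10 }}"  # e.g. control plane replicas plus 10
```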
G
I don't think this is allowed by the current meeting settings, so you have to basically stop sharing and share the other window. It is annoying, but it is what it is.
H
Yeah, I had another terminal window set up, but let me see if I can share it. Yeah.
H
So we have our controller running and we're going to apply our CRD, and just... this is what it looks like, just as a reminder.
H
So what this does is it's going to install our Helm chart on the two clusters that we've added our label to, so we can look into those clusters. And so here we're on the default cluster; I've set the kubeconfig, and if we do helm list, we can see that our cloud provider azure chart, at version 1.23.8, is installed.
H
So if we do this, on the next reconciliation loop it'll see that the release... we have a release we installed on a cluster that's no longer selected, and we consider that release to be orphaned, which means on the next loop we will uninstall it from the cluster. And we have this cluster, or this controller, running on a one-minute timer.
H
So now we can go ahead and add this back, and we're going to add it to all three clusters.
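Selecting or unselecting a cluster then comes down to the label on the Cluster object itself; a sketch with a placeholder label key and value, which would need to match the clusterSelector assumed above:

```yaml
# The label key/value are placeholders and must match the HelmChartProxy's clusterSelector.
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: default-cluster
  labels:
    cloudProvider: azure   # removing this label makes the release orphaned and it gets uninstalled
```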
H
So on the next reconciliation loop, it should see that all three clusters have the matching selector label and value, and it will go ahead and install the Helm release on all of them.
H
But the point of this is to show that we can easily select and unselect clusters that we want to have our chart on.
H
We can see that it's still on the Windows cluster that was untouched, and if we look into the machine pool cluster, we can see that we have our release installed on there, and if we get the values we can see that, again, it is the same CPU limits we had before, and it resolved the name of the cluster.
A
Yeah, would you mind, while you're doing this, could you go to your tab on any... I guess all three clusters are running the Helm chart now. Could you do something like a kubectl get pods -A -w, so we can see sort of at the Kubernetes pod level? And can you put a watch on that?
H
You can see that the CPU limit has changed. We go back to these pods... there we go. I was getting nervous for a second, yeah. I think the watch just takes a second, but yeah, we can see that since we changed the CPU limits, our old pods are terminating and we have a new cloud controller manager creating, and we've got some stuff pending as well.
A
Is there anything else? Oh no, it is, it is finished. I think it's just hard to interpret that in linear time, but yeah, I'm pretty sure that that's the terminal state; that's cool. The goal state.
H
Yeah, if we get our pods, we can see that the cloud controller manager just started up a minute ago.
A
Jonathan, do you want to take a question now, or do you want to wait till the end? The question in chat; I can't really see chat right now. You've arrived, you want to unmute?
I
And just, yeah, yes, yeah.
I
Hi, great demo. Can you also show the HelmReleaseProxy CRD, and what that is used for? I remember you mentioned that at the beginning.
H
Yeah, yeah, so yeah, why don't we pause and take a look at the CRD. So before we get to the HelmReleaseProxy: another thing in the HelmChartProxy is that we have a status field.
H
Yeah, so in the HelmChartProxy we also have a status field that shows a ready status, and it also shows a list of matching clusters that you've selected with your label. So here you can get a quick glance at what you're actually targeting, and if you know you selected something you thought you shouldn't have, or if something's not there that you think there should be...
H
You can see that easily. So the Helm... and so the HelmReleaseProxy is more of an inventory CRD. But...
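A rough sketch of the status shape described here, with assumed field names and placeholder cluster names:

```yaml
# Sketch only: the status layout and field names are assumed from the description above.
status:
  conditions:
  - type: Ready
    status: "True"
  matchingClusters:               # quick glance at which clusters the selector currently targets
  - name: default-cluster
    namespace: default
  - name: machine-pool-cluster
    namespace: default
```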
H
So, I guess, short tangent: the HelmChartProxy controllers are responsible for parsing out the values and creating the HelmReleaseProxies, and the HelmReleaseProxy controller, on a create, just installs the Helm chart, the Helm release, with those values, and on delete, it deletes the Helm release.
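Putting that together, one HelmReleaseProxy per selected cluster might look roughly like the sketch below; again, the group, kind, and fields are assumptions, used only to illustrate the inventory role described above.

```yaml
# Illustrative only: one HelmReleaseProxy per (HelmChartProxy, selected cluster) pair,
# recording the release and the values already rendered for that specific cluster.
apiVersion: addons.cluster.x-k8s.io/v1alpha1    # assumed group/version
kind: HelmReleaseProxy
metadata:
  name: cloud-provider-azure-default-cluster
spec:
  clusterRef:
    name: default-cluster
    namespace: default
  chartName: cloud-provider-azure
  releaseName: cloud-provider-azure
  values: |
    clusterName: default-cluster                # template already resolved for this cluster
```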
H
So, going back to where we were before, I think the last thing we did was edit the HelmChartProxy.
H
So if we go back on the default cluster, we still have our cloud provider chart at its second revision.
H
So we can see we've got a third revision. We've got the values out here; we've changed the CPU limit to two, but all the other Helm charts are still on the second revision. So yes, since we went over the HelmReleaseProxy stuff: the HelmReleaseProxy controller, on its next reconciliation, will see that this Helm chart on the default cluster has a value that doesn't match what's in the HelmReleaseProxy spec, so it'll pick up on that change and do a helm upgrade, a Helm revision, to reconcile that change back to the desired state we've specified; and that is also on the one-minute timer. So we can see that now it's on its fourth revision, and if we get the values, we can see that it's been changed back to what it was before.
H
Yeah, yeah, yeah, this is the last thing we have. So okay, I think we're almost there. Cool, cool.
H
Oh, there we go; I was getting nervous for a second there. So we can see that we're on the fifth revision now, and we've kicked off the revision from our scaling of the control plane replicas. So if we get the values out, we can see that at five replicas plus two we have seven, and just to reiterate, none of these charts have changed.
H
So that's all I have for the demo. To summarize, our orchestrator for add-ons lets you dynamically install and configure your Helm charts based on the workload clusters, and update them based on changes to the HelmChartProxy definition, the workload cluster definition, and the state of the Helm release itself.
A
Okay, cool, that was a great demo, Jonathan. The demo gods were very kind; don't take that for granted next time. Christian, you are next on the agenda.
J
Yeah, I just wanted to shout out and say thank you to the Mercedes-Benz folks, who opened the pull request to contribute their implementation of the state metrics application, which follows the proposal that got merged prior to that. There's also an issue which outlines the steps to take afterwards, which are integrating it into Tilt and implementing the missing metrics which are in the proposal. So yeah, feel free to take a look at it. Thank you.
F
Cool, so there was this load balancer proposal which came around sometime last year, I want to say, which was originally driven by Jason, and the idea was around: what if we had this external load balancer type, so that you could, for example, if you had a vSphere cluster running on AWS, use AWS ELB or ALB with your vSphere cluster and kind of mix and match, and basically separate the load balancer management from what we have today.
F
At the same time, there was a conversation about the concept of having internal and external load balancer support. At the moment, depending on platforms and stuff, obviously we only have one load balancer, and typically that's external. So if you're on AWS, for example, your traffic goes out and egresses your network and then comes back in, which costs money.
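For context on what exists today, some infrastructure providers already let you choose the scheme of that single control plane load balancer. For example, in CAPA the AWSCluster resource exposes something along these lines; this is a hedged sketch, so check the provider's API reference for the exact field names.

```yaml
# Hedged example based on the CAPA AWSCluster API; verify field names against the
# provider's reference docs. Today this is still a single load balancer per cluster,
# internal OR internet-facing, which is what the discussion above is about changing.
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSCluster
metadata:
  name: my-cluster
spec:
  region: us-west-2
  controlPlaneLoadBalancer:
    scheme: internal   # keeps node-to-API-server traffic inside the VPC instead of egressing
```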
F
So there are these two topics which ended up in this huge conversation, which eventually kind of just died a bit. So, like, I'm definitely still interested in the internal and external load balancer part of this; I wanted to see if there's still interest in the more generic load balancer CRD ecosystem, and see how we can start reviving these conversations, depending on what we are currently interested in as a group. So yeah, just kind of generally trying to gauge interest and see what we want to do with this project.
K
Yeah, so that one has been, like, going on for a while. I think it's good that we're trying to get to some decision there. I think, from the last discussions we've had, one of the issues we had with the proposal was that it was too complex in its current form, and it was adding very much complexity to the current Cluster API flows.
K
I think one thing that we can do, probably, is see if we can enable, you know, the external and internal load balancer without actually hampering any extensibility point, because, yeah, I'd be worried about building the abstraction now, given that we don't understand yet how we can build it. So, yeah, totally open: if we can work on external and internal for now, since it's pretty much more concrete, while not blocking any eventual or potential pluggable model for load balancers.
F
Yes, I think that's definitely my preferred approach. I just know, at the time, because the pluggable load balancer thing was going on, it was kind of just roped into one group. Yep, totally agree: the current thing is very complex, and it seems, given it's not been pushed, that there just isn't a huge amount of interest in that anymore.
F
So I'm trying to work out whether, wherever the original interest came from, that has died down to the point where we can say: okay, we'll keep in mind this generic pluggable load balancer thing, but for now we're gonna push that off, until someone comes back and says: hey, we really want this.
K
Yeah, I think that would be the idea. I don't think that interest died, but it's more that we would be doing a disservice to ourselves if we build something while we don't understand the use cases or how users will want to interact with it. I think external and internal is far more concrete, and we know which use cases we want to accomplish through that. So yeah, definitely.
L
Yeah, so I basically wanted to know if you could give me, give us, a one or two minute overview of what your thoughts were, like, both options, and this is similar to what Jack mentioned just now, and also if we are going to... So yeah, I mean, the reason why I ask is: if there is going to be a CRD for a load balancer, then there needs to be somebody who implements the CRD and runs it in the Kubernetes cluster, and I was just wondering if you could have static-pod-like load balancers which are actually created by the cluster when the cluster is coming up.
L
If that would be possible, or if it had to be something outside the cluster. So yeah, thanks.
F
Yeah, so I'm not as familiar with the load balancer CRD ecosystem as the other part, but my understanding was that at some point the cluster doesn't exist, so we need load balancer management for the control plane, because that can't really be managed by the load balancer service controller.
F
So I think it's something to do with that. I'm hoping that whoever has put their hand up knows more about this than I do.
K
...that is going to do this, or, like, something like an ELB. So yeah, that was the first part.
K
But this one brings a bit more complexity, because then you need to define how you're going to push down the control plane endpoint from the generic CRD to the lower-level resources, and how users are going to interact to fill in their control plane endpoint, given what we have today, which is Cluster API reading from the lower resources and surfacing the info back to the generic resources.
F
And just on the internal/external, just to cover that: it's quite a common pattern for people to have an internal load balancer and an external load balancer in front of their kube API servers, and then you can have things like the nodes go via the internal one, and then it's just your kind of end users who interact with the external one.
F
Obviously, with Cluster API, there are interactions from the Cluster API management cluster into the guest cluster, at which point you then have decisions to make, based on your network topology, about which cluster components should use which load balancer. For example, say you're running your guest cluster and your management cluster in similar VPCs and they're peered: you could have it go via the internal load balancer rather than the external, but if they're not peered, then it needs to go via the external.
K
Yeah, so I wanted to check what would be the next step there. Probably, I guess, we need to pull out whatever we had for external and internal load balancers in the proposal, and probably set up a new document dedicated for that, right?
F
So I have something for you. When this originally came up, I was asked to do the internal/external as a separate thing anyway, so the HackMD I just linked covers the original internal/external requirement. That said, it was last changed a year ago; I also haven't read it in a year, so it may or may not be completely out of date, but in general the user stories and the functional requirements, I think, are roughly filled in there. So this gives us a basis to work off.
L
So yeah, I mean, I am new to Cluster API and the CNCF ecosystem, so I'm not sure how I can help, but I am interested in helping if something is needed. That is one, and secondly, I wanted to know if we could just implement a default software provider, like kube-vip, for people who want to just, say, kick the tires on this, or add it to... I mean, so that people could just try it out and use it without...
L
You know, using an external load balancer. I mean, by external here, do you mean using a hardware-based or an external software component?
G
Yeah, two comments on this topic. The first one: I kind of agree that we should not block one depending on the other. At the same time, if we go on with the internal/external proposal, I think that we should add a paragraph or something that tells how, basically, this effort is not preventing the other from happening, or going in a different direction. So let's not block, but let's keep these on the horizon. Second, I'm posting in the chat.
F
Cool, yeah, the section about how this crosses over with the enhancement, very reasonable ask; I will make sure that we get that in. As for the other issues, it's the end of my day here and it's a public holiday for the next couple of days in the UK, so I will get to this next week, but I will take a look. Thank you.
A
Great, thanks to everybody who contributed today. I will upload this recording to the YouTubes and reach out to folks. Thanks.