From YouTube: 20191023 - Cluster API Office Hours
A
Hello, today is Wednesday, October 23rd, 2019. This is the Cluster API office hours. Cluster API is a sub-project of SIG Cluster Lifecycle. As always, we adhere to the CNCF code of conduct, which basically means be nice to each other, and we do have meeting etiquette: please use the raise hand feature of Zoom if you've got something to say, and I will do my best to make note and call on you.
A
We also would ask that you please add your name to the attendee list down here in the agenda document, and if you do have any agenda items or PSAs or demos, please add them to the spaces here. I see there's nothing right now, so I think it probably would be a good time to go through the status on the enhancement proposals, or KEPs, that we've been working on. So I'm going to scroll up here and just go down the list. So, MachinePool: Juan-Lee or Cecile?
A
Were you able to get a PR close to ready, or open, for this one yet? So yeah, I need to rebase my branch, and then I'll be submitting a PR very shortly. Okay, great, thank you. I haven't had a chance to look at it lately. Have you gotten a good amount of feedback, or how are things looking on the Google Doc? Yeah, I've gotten a little bit of feedback; I would like maybe a little bit more, maybe, but...
A
Cool, yeah, I think, I mean, if you have the pull request close to ready to go, I would say feel free to just open it up. You know, mark in the Google document, at the top, in big letters, that you're going to move it to the pull request, and then people can comment there. But if you still want comments on the Google Doc, I would say hold off on doing the pull request until you want to close the comment period.
B
Sounds great. Thank you.
A
So, I know in terms of who is listed here as reviewers, we have Warren, Andrew, Vince, Jason. So if you're not on this list and you're interested in providing any sort of blocking feedback, please try and get that in ASAP. Otherwise we will reach out to the other reviewers and ask for approvals so that we can proceed.
D
Just gonna say: currently, we've incorporated a lot of the feedback that we've received. There are some larger items of feedback that we're working on addressing right now, specifically related to deployment across failure domains and to health checking, so we're expecting to make progress on those over the next couple of days. So please go ahead and continue reviewing and providing feedback; we're hoping that we can address the remaining items within roughly the next week.
E
Well, I figured I'd just follow up: I filed an issue for the drain library update, but I just came from the kubectl SIG CLI meeting and basically talked about our use case with them, and they're receptive to getting new features to support our use case in the drain command in kubectl upstream. I'm going to be filing an issue outlining some enhancements that I'd like to see there, probably later today, and I just want to loop everybody in on that.
E
Well, we talked briefly, in the patch set to fix the bug, about adding a context, so that's something that we can add there. The other thing that I've been working on is the case where the kubelet is non-responsive and there's a pod with local storage: draining will be blocked, because that pod will never go away. It will be stuck in the Terminating state, and for our use case, you know, if we're deleting a machine it means we consider it failed; there's not really another remediation that we'd do there. So I'm adding some functionality to account for that.
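For reference, the blocking behavior being described here is the same one `kubectl drain` itself surfaces. A hedged sketch of an era-appropriate invocation (the node name is a placeholder, and the flag spellings are the pre-rename ones from kubectl of that period):

```shell
# Illustrative drain invocation; "my-node-0" is a placeholder.
# --delete-local-data permits evicting pods that use emptyDir volumes,
# but if the kubelet is unresponsive, those pods can still hang in
# Terminating indefinitely, which is the blocking case described above.
kubectl drain my-node-0 --ignore-daemonsets --delete-local-data --timeout=5m
```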
A
I'm actually gonna put this in the 0.2.x milestone, since that's the branch that it's going against. And... that's me, yeah. And then we need, I assume, we would want a forward port there? Yeah, I see, we'd need to cherry-pick. All right, I'm gonna give you both the pull request, Chuck, and the issue. Well...
A
Okay, so Andrew has an issue that he opened about shifting dependency versions within a release branch, which, unfortunately, we ended up having to do within the 0.2 branch to get some functionality that we needed out of client-go for event recording. I think, generally, what we want to avoid within, like, 0.2.x or 0.3.x is making breaking changes, after we've done our initial release, on any of our dependencies, like the Azure code or client-go or whatever. So I don't know if we can automate some of this.
A
All right. Oh, good! Are you still here?
E
What we're doing is, we have, like, a list in our provider specs that says, you know, target pool or load balancers or whatever, on the individual Machine object. Then the machine controller will add and remove an instance from those load balancers at machine creation and deletion time. In their use case, they want to be able to later add load balancers to that list. I was like, well, we're kind of moving towards immutability, like your infrastructure should be immutable, all that kind of stuff, and so that's what spawned this discussion.
E
I think I was under the impression that some of what we were doing came from upstream earlier, but it looks like maybe it didn't; maybe that's something we kind of made, like a hybrid of the early Cluster API v1alpha1 and the AWS provider, downstream. But in any case, I still think it's a discussion worth having. Let's say you're creating a new master, and that master needs to be behind an ELB, for instance: what is the preferred mechanism of putting that instance behind the load balancer? In our case, we're doing that in the machine controller, and I think maybe there's room for, like, an abstraction: kind of a load balancer controller. Not something that creates load balancers, but something that adds and removes instances from load balancers. And I think it'd be useful to have an abstraction, at least a default abstraction, that says, hey, you know, you're gonna want to add stuff to load balancers.
A
Moshe, I see you have your hand up; I'll get you in just a second. In terms of how we're doing it now: the AWS provider, CAPA, will add a machine, an EC2 instance, to the ELB, if it's a control plane machine, when the AWSMachine resource is created and the CAPA AWSMachine reconciler sees it. Or maybe it's the cluster reconciler that does it today. But it's all CAPA's responsibility. So...
H
Yes, I've been interested in externalizing or abstracting the load balancer away out of the, you know, infrastructure, to support these types of use cases, and I created a CAEP, although I don't think it's a CAEP anymore. But I think the final iteration that we have now is based on, essentially, a machine load balancer that uses a machine selector to select machines to add into a load balancer, and then you can have a load balancer controller for each provider.
E
So, just at a high level, it sounds like we might be aligned on the thinking there. Yeah, so, like, for me: having the cluster infrastructure provider say, "oh, this is a master, so I have to do these things," is not really the abstraction I like. I like it being more declarative. Like, the machine API really shouldn't have to know that it's a master; the machine, or the machine API, should say: I want this thing to be part of this load balancer. Or some other thing says this machine shall be part of this load balancer, rather than it being, like, role-based, kind of side-effect based. It should be more of, like, a declarative thing, because it gets really hard to track; you know, you've got all these extra logic flows. I've offered some...
A
I need to refresh, but I like the idea of being able to do that declaratively and have it pluggable, so that you can have different backends, like an AWS ELB, or nginx, or Envoy, or F5, or whatever. And I think we can probably come up with a data model, maybe you already have it, that would lend itself to it, so I'm interested in exploring that space.
C
Michael, what are the comments that... sorry, I was gonna raise my hand in between. One of the comments that we had talked about, on the thread of Moshe's original CAEP, is that you can totally prototype this outside of the scope of Cluster API, which I think is a good thing to do: have an independent controller, then verify that this is the pattern and construct that you want to create, and then iterate it back in as folks see the need. Yeah.
E
I was just gonna say: maybe add this proposal to this issue, which it looks like maybe we already did. And also, I didn't know if there's another issue already outstanding covering what we're talking about here; maybe we can close this as a duplicate of whatever that issue might be. But yeah, this is basically just kind of a jumping-off point: somebody external to the project has some need; how do they engage with this?
E
This is what I'd recommend they do at the time, because obviously we're very good about triaging things here, and it's a good place to have the discussion for people that aren't really that involved with the project. We've got a lot of Google out there, and so that's just kind of my motivation here.
A
Moshe, were you planning... or actually, Andrea, I'm gonna ask you, since you commented: are you going to work on a POC for this, or is there a POC? So...
A
Well, I will enqueue your proposal onto my reading list and try and get to it this week; I'm definitely interested in seeing where we can take this. All right, in terms of milestone and priority... I think, actually, before I do that: you mentioned another issue, Michael, and I'm pretty sure, Moshe, you created one that we closed for this.
D
So this was something that was brought up to me by some folks that we have doing some QE testing around the AWS provider: if the infrastructure goes bad today, we don't actually surface that up. I don't necessarily know if that's specifically around AWS, or more general, around behaviors around Machine at this point, though. So I think we do need to have some idea of making sure that we bubble up that, you know, status, but...
A
We do copy errorMessage and errorReason. We don't flip Ready to not-ready if there aren't any errors, though. So the specific example here was: I rebooted my EC2 instance, and CAPA happened to reconcile it while it was rebooting, and so it set the AWSMachine's status to not ready, or ready equals false, and the Cluster API machine controller didn't pick that up, because of the way we have the logic coded. And so it looked like the machine was still running, even though the AWSMachine was listed as not ready.
G
But, you know, from the CAPI side, there's the question of whether we should, like, gather information if the node linked to the machine is not ready anymore and flip back to Provisioning. And on the CAPA side, and any other infrastructure provider, like we mentioned, there's a contract out there that says: hey, if you set errorMessage and errorReason, we're never going to recover, and we'll expect user interaction.
G
Look, that's what...
G
Yeah, I do remember that, from doing the proposal: like, we explicitly said that we didn't want to flip it back and forth, so it was only, like, either it's set to ready or it's not. So I can dig that up and see the best way forward.
F
So this makes it really tricky managing a cluster where there are more instances of one provider, but, thinking about this, it most probably makes sense to reason about multi-tenancy. So what is the real recommended approach for having a cluster that manages multiple tenants, or what is multi-tenancy for Cluster API? And so I...
D
So I think there are a few different concerns here. One is the multi-tenancy around separate credentials running for different providers, and then there's the aspect of running different versions of a provider with the same shared common resources. And I think once we get to the point where we actually have conversion webhooks in place, and we're actually taking into account upgrades between versions and not fully externalizing that, I think that latter part goes away a little bit, because the conversion webhooks, or, you know, some other mechanism, can be in place for handling...
D
You know, having the multiple versions of the API server component or the CRDs deployed: we would just need to make sure that we have some testing, you know, forward and backward testing, of those shared resources that we deploy across versions, to verify that we're not breaking anything. So I think having more detail around the different specific use cases would help us, rather than trying to talk about it in broader terms.
A
Thanks. "clusterctl delete only deletes the cluster object": yeah, thank you for getting this one, Lee. It looks good to me, but I added a comment earlier this morning. I think Vince and Jason have a lot more experience with clusterctl than I do, so if one of you all could just give it a final glance, then we can get it merged in, and then...
A
I
think
this
one
is
probably
a
backcourt
candidate
to
0-2,
because
cluster
coddled
elite
from
what
I
understand
doesn't
work
or
doesn't
work
all
the
time
and
0.2
know
already.
Then
we
have
the
vSphere
whip
that
Andrew
y'all
are
still
waiting
until
you
actually
release
0-5
to
on
this
one
right,
yes,
and
then
we
have
a
cup.
We
have
this
one.
That
Chuck
is
on
this
one's
waiting
on
a
CLA
issue
and
we
have
the
shell
check,
one
that
I
know
Chuck
has
been
reviewing
so
I
think
we're
on
top
of
everything
here.