From YouTube: 20191106 - Cluster API Office Hours
A
Hello, today is Wednesday, November 6, 2019. This is the Cluster API office hours meeting. This is being recorded. Cluster API is a sub-project of SIG Cluster Lifecycle, and we do have a meeting etiquette for this meeting. So please use the raise hand feature in Zoom if you would like to speak, and I will do my best to see who's got their hands up and call on you.

A
Additionally, if you do have discussion topics, please feel free to add them either to the PSA section or the discussion topics section in this document. And finally, before we get started, please make sure that you add your name to the attending list. All right, first up we have some PSAs that I believe came from Vince, yeah.
B
I would like to start opening tracking issues for work items for each proposal that we have either merged or open. Specifically, clusterctl and the testing framework have been merged, so congrats everyone for that achievement. We have control plane there; I think it needs a little bit more time, but it has been open for 20 days. I think there are some things that we can actually extrapolate from there, and we can always close or edit issues if we need to. And machine pool and node remediation have also been open in general.
A
Just as a reminder, if you weren't here last week: we did propose, and have consensus, that rather than trying to do a time-boxed release in January (we don't think there's enough development time to get everything done by then), we are delaying the release of the v1alpha3 version of Cluster API to sometime in late February or early March, if that's feasible, and we'll revisit the timing as we get closer, once we've started actually implementing some of these items that Vince was just talking about.
C
So I've worked on some items in CAPI before, but generally just more like Help Wanted items, so I'm wondering what the way is that people should account for estimated time to work. Do we care about that? I know that's minutiae, but it helps me also think about whether I have time to work on something, if I can try to estimate it, and then maybe I can add that to the issue.
F
Right, so it'll probably be easier if I go ahead and share my screen to discuss this topic. I haven't had a chance to fully triage this issue yet, but I did want to bring it to people's attention. So, let's see, all right, here we go. Hopefully you're seeing a testgrid page right now (yep, yep), all right. So we now have automated periodic e2e jobs running against the AWS provider master branch, and, you know, just to give a comparison, we do have them going against the latest stable branch as well, and they're...

F
Those ones are mostly green. If we jump over to the master branch, we see that we are having quite a few issues related to creation. So, going in, as we're showing here, you know, the issue is actually during cluster deletion. I had to do a little bit of work to add some context to the logs that we're seeing, but with that, one of the things that I noticed is that for v1alpha3 clusters...
C
Jason, I was hoping that maybe you could add to the notes the commit where you added the signal. It's something we've discussed doing in CAPD for a while, but one of the reasons that we haven't is because we're trying to figure out what's the best matrix to test, and since you've already done it with CAPA, and we're thinking about doing it with CAPV, maybe we can get together and figure out: is there some standard set, you know, the combination, and...
D
I think this is a good area where we might want to have some people have cross coverage, and maybe even create a role. I know we talked about this, but there's a lot of tests across all the providers if you really want to get thorough with it, and, you know, I want Jason to go on vacation and spend time with his kids and stuff.
F
I completely agree. I think part of the challenge right now is that we haven't had this level of signal before, so part of it's just, you know, getting everything in place. But anybody who's interested: we do have a Google Group that you can subscribe to, and I'll dig up the address for that, to get the alerts, and I am happy to screen share with anybody to show what it's like to walk through troubleshooting the alerts and all of that as well. Jason?
E
I kind of want to echo what Tim was saying there. I'll be opening a bunch of issues after I get a couple of patterns in place for the e2e framework stuff, so that we should be able to distribute this work really evenly. So anybody who's interested, reach out to me or Jason or anyone else in the testing space.
G
Yeah, hi Andy, so yeah, I just have a basic query: you know, in case we have, like, running nodes, physical nodes, or, you know, nodes spun up in AWS or GCP, and we have a need for increasing or decreasing the resources for that particular node, you know, how do we do that in the present context, while also ensuring that we give minimal impact to the running pods, which are represented by the tenants, let's say?
A
Okay, so the short answer is: you don't. The machines in Cluster API are immutable, and you would need to create a replacement machine. Or, preferably, if you have a machine deployment, you would update the machine deployment such that you change the characteristics, and it would roll out a new set of machines matching what you're looking for.
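To illustrate what that immutability means mechanically, here is a toy Go sketch (not the actual controller logic; the hashing scheme and all names here are invented for illustration): each machine records which template revision it came from, and changing the machine deployment's template causes machines to be replaced rather than edited in place.

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// templateHash identifies the template revision a machine was created
// from (an illustrative simplification, not the real mechanism).
func templateHash(instanceType string) string {
	sum := sha256.Sum256([]byte(instanceType))
	return fmt.Sprintf("%x", sum[:4])
}

type machine struct{ hash string }

// rollout counts which machines already match the desired template and
// which must be replaced: machines are immutable, so resizing a node
// means creating new machines, never mutating existing ones in place.
func rollout(machines []machine, desiredType string) (kept, replaced int) {
	want := templateHash(desiredType)
	for _, m := range machines {
		if m.hash == want {
			kept++
		} else {
			replaced++
		}
	}
	return kept, replaced
}

func main() {
	old := templateHash("m5.large")
	ms := []machine{{old}, {old}, {old}}
	kept, replaced := rollout(ms, "m5.xlarge") // resize request
	fmt.Println(kept, replaced)                // every machine gets replaced
}
```

In practice the new machines come up, join the cluster, and the old ones are drained and deleted according to the deployment's rollout strategy.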
H
Andy, yeah, I just wanted to give a quick update on machine pool. So I have the proposal PR submitted against the CAPI repo, and also a POC that's there. So if you've read it and have more questions, and/or feel like there are some areas that could use a bit more detail: I saw that Chuck had added a comment, and I'll be responding to those and adding more details to the doc.
H
So please, if you're interested in shaping the future, please get that information in as soon as possible. And then I wanted to also have a discussion, perhaps brief, about the idea of having a machine placeholder. So I have had a few questions about, or requests even, to have the actual machine instances have a corresponding machine placeholder, for various reasons. So one reason is to perhaps be able to communicate some status about the machine, you know, whether it's...
H
...provisioning, or if it's in a failed state, or if it's even healthy. And also as a potential opportunity for, I know we talked in the past about the idea of lifecycle hooks for machines, and so being able to, you know, run some code, you know, before or after a deletion to do things like cordon and drain, and perhaps, you know, other kinds of administrative tasks. And then finally, being able to have the machine...
H
...for different scenarios like scale-down: you know, we'll need to, potentially want to, be able to choose the specific machines that will be deleted, and so placeholders might be a decent opportunity for that as well. And so I wanted to kind of just test the waters on that concept. Maybe it's something that you've already thought about, and if you have any thoughts or ideas around whether that's a good idea, I'd love to hear.
B
Gonna take a look at it today hopefully, or tomorrow. But for the placeholders, I have some concerns, specifically around, like, fake machines, but I'm happy to go over and talk more, maybe in the PR, or, like, we can have a specific meeting if that's necessary. But are you trying to tie placeholders to the POC, or, like, to this first iteration in v1alpha3? Or is this more future work?
H
That's a little bit of what I was trying to get a feel for. I think there are some interesting potential scenarios there about, you know, around how that would work, and, you know, I think the most critical thing for me to solve, given the current proposal, is: how do we actually coordinate the drain of each node? And, you know, I see this as a potential opportunity for that, but, you know, I wanted to kind of get a feel for it.
H
Is this something, you know, that the group has talked about before? Or are there, you know, a bunch of downsides that we want to avoid? But yeah, I think that's one avenue to solve that problem. I think there are other ways to go and do that, but I just wanted to see, since I did get requests from other folks on Slack and in other places, around whether or not we could, you know, have that fake machine as kind of a, you know, a way to communicate status and these other things. So I think, yeah.
B
I think that sounds great, and I would like to kind of separate, through the response, the solution that we're proposing, like creating these machines, from the actual problem we're trying to solve. In the case of, like, status, that's one; and node references and, call them, invariants, yet another one. I think my problem with the fake machines, and I don't have a problem with it fundamentally, but I'm just thinking about the user experience: like, somebody deletes a machine, you know, what's supposed to happen? Because that sounds like it's not really what you should really do, and if we're hoping to use the existing machine controller...
A
Okay, we're at the end of the planned discussion topics. I'm going to go over to backlog grooming now, and if you think of anything while I'm going through the issues and PRs, feel free to add it and we'll come back to it, assuming there's time. All right, so we have nine issues that don't have a milestone, and the first one is: publish YAML releases with and without the kubeadm bootstrapper.
A
So this came out of a Slack discussion, or maybe last week's meeting, or both. In Cluster API master right now, the YAML that we have, as well as the main.go for the manager, all assume that the kubeadm bootstrap custom resources and controller are all there. So this was a request to be able to deploy Cluster API without the kubeadm bootstrapper, both the custom resources and potentially the controller as well.
D
Or I will move on to the next one. You want to solicit volunteers for the peanut gallery? Yeah, sure.
A
Anyone interested in that? Dope. Next up, we have something that came out of the control plane proposal. This is support for various ways to provide a stable API endpoint. This is because we need a stable API endpoint so that we can rotate control plane members in and out of the cluster, and there was some back-and-forth where we said, well, maybe we should just put controlPlaneEndpoint as a top-level field on the cluster spec. I think this is something that we need to resolve for this milestone, and if you have comments, please add them.
A
I would encourage you to read through them and definitely weigh in if you've got an opinion, but I would like to put this in, and we need to do it, I think. In the short term, we can most likely proceed with the control plane implementation once we get the proposal merged, and what the proposal says right now is that the cluster status API endpoints field, which is a pre-existing field, will continue to be used to represent the control plane endpoint until we resolve this and come up with any alternatives.
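For illustration only, the two shapes under discussion might look roughly like this. The type and field names below are assumptions sketched from the conversation, not the actual Cluster API definitions:

```go
package main

import "fmt"

// APIEndpoint is a host/port pair for reaching the API server
// (illustrative stand-in for the real type).
type APIEndpoint struct {
	Host string
	Port int
}

// ClusterStatus keeps the pre-existing reporting field the proposal
// continues to use for now.
type ClusterStatus struct {
	APIEndpoints []APIEndpoint
}

// ClusterSpec shows the alternative from the back-and-forth: a stable,
// user-facing top-level endpoint that survives control plane rotation.
type ClusterSpec struct {
	ControlPlaneEndpoint APIEndpoint
}

func main() {
	status := ClusterStatus{APIEndpoints: []APIEndpoint{{Host: "10.0.0.10", Port: 6443}}}
	spec := ClusterSpec{ControlPlaneEndpoint: APIEndpoint{Host: "10.0.0.10", Port: 6443}}
	fmt.Println(status.APIEndpoints[0].Port == spec.ControlPlaneEndpoint.Port)
}
```

The point of a stable endpoint (a load balancer, a DNS name, a VIP) is that individual control plane machines can come and go behind it without clients needing to re-resolve anything.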
A
The next one is about refactoring the machine set and machine deployment controllers to be consistent with the way that we've implemented the cluster and machine controllers. This is a long-term cleanup task. So basically, in the machine deployment and machine set controllers, they will patch the machine set or machine deployment pretty much whenever there's a change, whereas the way that we've implemented cluster and machine is that we retrieve, say, the cluster...
A
The selector, anyways: it just was getting cleared even though we were setting it, and so it was rather difficult to figure out what code was wiping it out. So this is just a long-term thing that we'd like to do; it doesn't necessarily need to be in the milestone, so I would probably just put it in "next" at this point, unless somebody feels like we should definitely try and tackle it now.
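The pattern being contrasted here, patching piecemeal on every change versus snapshotting the object when it is retrieved and issuing one patch at the end of reconciliation, can be sketched roughly as below. This is a toy illustration with made-up types, not the real controller-runtime patch helper:

```go
package main

import (
	"fmt"
	"reflect"
)

// Machine is a toy stand-in for a Cluster API object.
type Machine struct {
	Labels map[string]string
	Phase  string
}

// patchHelper snapshots the object when reconciliation starts, so one
// deferred Patch call at the end persists every mutation at once,
// instead of a separate patch after each individual change.
type patchHelper struct {
	before Machine
	obj    *Machine
}

func newPatchHelper(obj *Machine) *patchHelper {
	cp := *obj
	cp.Labels = make(map[string]string, len(obj.Labels))
	for k, v := range obj.Labels {
		cp.Labels[k] = v
	}
	return &patchHelper{before: cp, obj: obj}
}

// Patch reports whether anything changed since the snapshot; a real
// helper would compute the diff and send it to the API server.
func (h *patchHelper) Patch() bool {
	return !reflect.DeepEqual(h.before, *h.obj)
}

func main() {
	m := &Machine{Labels: map[string]string{}, Phase: "Pending"}
	h := newPatchHelper(m)

	// Several mutations during reconcile...
	m.Phase = "Provisioning"
	m.Labels["cluster"] = "dev"

	// ...one patch at the end captures them all.
	fmt.Println(h.Patch())
}
```

One benefit of the snapshot-then-patch style is exactly the debugging story mentioned above: there is a single place where the object is written back, so it is far easier to find which code cleared a field.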
C
Yes, so Jason, you know, I'm sure there's a reason you didn't suggest this in the thread, but I'm just kind of curious, so I don't duplicate the suggestion: cloud-init, you know, has the multiple stages, but it also has targets that can be used to set up dependencies. Can't we... I think it's called the networking stage; it's not when it stands up networking, but the point by which networking has to be online.
A
All right, so I put this in the milestone, marked as important-soon. Jason, can I assign this to you, since you had an idea of how to do this? Yeah.
C
Chuck, it looks like somebody is probably mounting a volume or doing something with the GOPATH, such that it points to the place where they're checking out the code, because I don't know why you would have a go.mod at the root of your GOPATH. So I imagine there's something going on there, or in the way you init it. Yeah.

A
Makes sense. Okay, this one I'd like to close: "can't apply a particular node role label at bootstrap time". I know that Kubernetes has restricted, or has made it more restrictive, what labels you're allowed to assign to the node. So this is someone trying to use the kubernetes.io node label, and it's not an allowed prefix, and I know that Lubomir had said you can't do this, that's by design. Anybody object to me closing this one?
F
I
Yeah
I
mean,
if
we're
directing
people
to
use
cube
a
DM
and
whenever
we're
doing
that
direction,
that's
where
we
should
say,
like
you
know,
make
sure
you
read
this
thing.
First
in
troubleshooting
tips
or
whatever
other
thing.
We
need
to
be
the
root
of
that
information,
but
we
should
link
to
it.
Yeah.
A
Just a few times, all right. Let me dump that there, and I'm guessing that CJ did this.
A
All righty, I also want to spend some time looking at this, so I'll come back to that separately. And this one: this is "control plane machines that you try to create after you delete a control plane machine get stuck in pending", and the issue here is that, given that we don't deal with etcd membership management, you can't delete a control plane machine unless you also manage etcd's membership. So I had a comment here just saying we hopefully can address this as part of the kubeadm control plane. Jason, does that sound right to you?
A
All right. That is the end of the issues with no milestones, and given that we have some time, let me just see: there are no new issues or grooming topics, sorry, discussion topics, so let's just go and take a look at the open PRs. I talked with Andrew earlier today, and he said he is going to put the finishing touches on the PR for how to update a v1alpha1 provider to v1alpha2. The kubeadm control plane PR we already discussed today. This one here from Noah is on changing the remote NewClusterClient signature.
A
Right, so if you are using NewClusterClient in the remote package, which currently takes in the management cluster client and the cluster and returns a custom interface that we just defined: it now takes in the management cluster client, the cluster, and a scheme, and returns a controller-runtime client that is for the remote cluster. So anything you can do with controller-runtime's client, you can do with the output from this function.
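Roughly, the shape of that change looks like the sketch below. This is a self-contained illustration with stand-in types; the real function lives in Cluster API's remote package and works with controller-runtime's client.Client and runtime.Scheme, so every name here is a simplification:

```go
package main

import "fmt"

// Client stands in for controller-runtime's generic client interface;
// the real one has Get/List/Create/Update/Delete/Patch and more.
type Client interface {
	Describe() string
}

// Cluster and Scheme stand in for the Cluster API and runtime types.
type Cluster struct{ Name string }
type Scheme struct{}

type workloadClient struct{ cluster string }

func (w workloadClient) Describe() string {
	return "client for workload cluster " + w.cluster
}

// NewClusterClient sketches the new signature: management client,
// target cluster, and scheme in; a full generic client for the
// workload cluster out. Previously it returned a narrower custom
// interface defined just for this package.
func NewClusterClient(mgmt Client, cluster *Cluster, scheme *Scheme) (Client, error) {
	// A real implementation would read the cluster's kubeconfig
	// via the management client and build a client from it.
	return workloadClient{cluster: cluster.Name}, nil
}

func main() {
	c, err := NewClusterClient(workloadClient{"mgmt"}, &Cluster{Name: "dev"}, &Scheme{})
	if err != nil {
		panic(err)
	}
	fmt.Println(c.Describe())
}
```

The practical win is that callers get the full client surface for the workload cluster instead of whatever subset the old custom interface happened to expose.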
A
It is a breaking API change, so we're obviously not making it in v1alpha2; it's just in master and will be in v1alpha3. But I did want to point this out before we merged it, in case anybody's got any concerns. I think this should hopefully make it easier for anybody to do work with the workload clusters, versus trying to use this cluster client interface, which was pretty limited. So I'd like to try and get that merged today, unless anybody has any serious objections.
A
Thank you. And then we have the machine pool API, which we talked about. I know Liz is working on documentation for how to go from zero to a new provider. It's still work in progress, so feel free to check it out if you want, but I know she had far more changes she was going to be doing. All right, let's talk about the remediation CAEP. I'll repeat: I've been on vacation for the past several days and haven't looked at this. Someone who has looked at it more recently, how's it looking?
A
Okay, that's "auto-select on cluster name label in machine deployment and machine set". This was the change that we were going to do so that you don't have to specify a label selector for a machine set or machine deployment.
A
Okay, so this one is a change to address this issue, where the utility method that we have to check if something is a control plane machine would return true for any non-empty string value of the label, which meant if you put "false" or "0" or whatever, it would come back as saying it's a control plane machine. And so what we are doing for alpha 3 is saying that any value, including the empty string, means that it's a control plane machine. So "true" and "false" aren't relevant anymore for this particular label.
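A minimal Go sketch of the before/after semantics (the label key below is an assumption for illustration, not necessarily the exact constant):

```go
package main

import "fmt"

// Assumed label key, for illustration only.
const controlPlaneLabel = "cluster.x-k8s.io/control-plane"

// isControlPlaneMachineOld: the problematic check. Any non-empty value
// counted, so "false" or "0" still marked a machine as control plane.
func isControlPlaneMachineOld(labels map[string]string) bool {
	return labels[controlPlaneLabel] != ""
}

// isControlPlaneMachineNew: the alpha 3 semantics. Mere presence of
// the label is the signal; the value, even "", is irrelevant.
func isControlPlaneMachineNew(labels map[string]string) bool {
	_, ok := labels[controlPlaneLabel]
	return ok
}

func main() {
	l := map[string]string{controlPlaneLabel: ""}
	fmt.Println(isControlPlaneMachineOld(l), isControlPlaneMachineNew(l))
}
```

Presence-only semantics avoid the whole class of "which values are truthy?" bugs, at the cost of there being no way to label a machine with the key while meaning "not a control plane machine".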
I
You know, any kind of flip-flopping of statuses. But, like I was saying, the machine is the infrastructure to host a node; "node ready" is not really a function of the machine, in my view. You know, it just kind of goes down into that rabbit hole of cluster health and all that kind of stuff, and what we should be watching. And as I mentioned in my comments, I think the right place to be looking for that information is the node.
I
Yeah, I would say maybe unlink it from the issue, if the issue is broader. Then also, yes, having them go to "provisioned" as soon as there's a node ref, that seems reasonable. Personally, I would consider it provisioned as soon as we have networking. That means that the cloud has created the instance and assigned some kind of networking, and so therefore the instance is provisioned. But I know...
I
Maybe the bootstrap flow for everyone else might mean we've still got to run the bootstrapper after we do the networking. So, but it's kind of a... I'm not really sure what's actually going on there with the bootstrapper, but yeah, either way.