From YouTube: 20200129 - Cluster API Office Hours
A
Today is Wednesday, January 29, 2020. This is the Cluster API office hours. Cluster API is a sub-project of SIG Cluster Lifecycle. If you would like edit access to this document that I'm sharing on the screen, please make sure that you join the SIG Cluster Lifecycle Google Group. We do have meeting etiquette: basically, let's all be kind and nice to each other. And if you do have a topic, please add it to the agenda in this document and use the raise hand feature in Zoom.
A
So if there is anyone new and you'd like to say hi, I'll give you a chance. All right, that's quite all right... sorry, did somebody say something? Oh sorry, yeah: it's Dax MacDonald, from Rancher, first time here. Wanted to kind of learn more about the project and get some understanding of it, so glad to say hi. Great, thanks. Dax, welcome, thanks!
A
All right, moving on to PSAs: we have one, which is a draft roadmap. This is based... let me fix my view here... this is based on a Google document that we had shared with the community, I think last month in December, and nothing in here is set in stone. I just have rough dates that I came up with, so it's completely subject to change, at least for alpha 4 and alpha 5, or beta 1, or whatever we call it.
A
Ultimately, what we'd like to do is get whatever initial revision of this we're satisfied with merged, recognize that it is subject to constant revision, and ultimately figure out where we can draw the line for moving from alpha to beta. This is a community effort and a community project, so we're definitely looking for any and all feedback.
A
Basically, what I would ask is that we maybe argue and debate more around what we want to see in our next release after v1alpha3, and save comments on some of the other stuff: just leave it under TBD, or if it genuinely needs to come out and go to infrastructure providers or image-builder, we can mention that. But I personally am interested in trying to solidify and start talking about...
C
Humor me now... yes, I was gonna say, it seems like this has been open already for five days. I would give it two more days, so end of the week, and try to merge it in, and then at the next community meeting maybe we can discuss the alpha 4 stuff. Let's try to merge it, and then, if we want to add things, folks can just open up a PR and add things to it.
A
Okay, moving on: we have a discussion topic from Marcel.
D
Yes, hello, can you hear me? Yes? Okay, cool. So this is really a small topic in my opinion, but I opened two PRs last week regarding the labels on the metric services, and the selector on the metric services, and there doesn't seem to be any consensus on which labels should be there and, yeah, basically what we want to do there. So I wanted to bring this up here to kind of get some clarity on it. Yeah.
D
Basically: what do we want, and how do we want to go with this? It's not really a big PR, not a big deal, but there doesn't seem to be any consensus right now. So that's why I was wondering how to go about this. Okay.
A
I'll defer to you all here, but I thought that these labels got adjusted based on the namespace prefix. Is that not right? No?
C
They get changed manually. And so the thing is, if we merge this one with the selector, we shouldn't remove them; we should actually keep these matched. So it should be one PR that updates both, rather than having one that removes that and another that removes the other. And I don't think 27... should these impact this? It will: commonLabels will add a label to the selector and a label to the labels, so we should be consistent.
E
So I think the question that I have is: if we're adding additional labels, do we actually need to override the value of the default label, or can we just stick with the default label? That would mean we don't have to make any changes to the controller or the kubebuilder scaffolding. So...
C
Thanks, that's how, yeah... so this was generated by kubebuilder. If we do this, I would like the other labels to also be updated, so that we have one standard for all of them. This change is actually consistent with what we have today; if we want to go a different way, I'm completely okay with it, but I would want to change all of them, not just these. Does that make sense?
E
Yeah, no, I agree. I'm just saying, as far as managing minimal diffs to the scaffolding, and what we'll eventually have to have as documentation telling people how to implement these: instead of having these manual edits to the scaffolding, we could just tell them to do the add-labels thing, and that would give them the unique constraints for their provider without, you know... yeah.
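(A rough sketch of the invariant under discussion: a Deployment's selector has to keep matching its pod template labels, which is why a label change and a selector change belong in the same PR. The label values below are illustrative, not the project's actual manifests.)

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
)

func main() {
	// Hypothetical labels for a provider's controller-manager pods. If a PR
	// changes the selector without also changing the template labels, the
	// Deployment stops selecting its own pods.
	podLabels := map[string]string{
		"control-plane":             "controller-manager",
		"cluster.x-k8s.io/provider": "infrastructure-example", // assumed value
	}

	d := appsv1.Deployment{
		Spec: appsv1.DeploymentSpec{
			Selector: &metav1.LabelSelector{MatchLabels: podLabels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: podLabels},
			},
		},
	}

	sel, err := metav1.LabelSelectorAsSelector(d.Spec.Selector)
	if err != nil {
		panic(err)
	}
	// Prints "true" while the selector and template labels stay in lockstep.
	fmt.Println(sel.Matches(labels.Set(d.Spec.Template.Labels)))
}
```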
A
As long as, with whatever we end up with, we can run multiple pods and deployments in the same namespace, so we can do a management stack in one namespace and not have weird selections because everything says control-plane: controller-manager and matches on everything. Hopefully, I would assume, this would take care of that, because these are per provider.
A
...or Jason, could one of you please file an issue with the information so that Marcel can work on it? Okay, thank you. And Marcel, thanks for bringing this up; I was confused seeing those PRs come through, so I'm glad we cleared that up. All right, does anybody have any other topics before I move on to the v1alpha3 issue and PR burndown and backlog grooming?
F
My goal in bringing this up is that if you have things that you want to work on, now is the time to start filling up the pipeline. And if you have specs that you're thinking about, please start to get them into draft form, start talking about them, and file issues. That way we don't have a large crush near the end of the v1alpha3 cycle. Yeah.
A
Alpha 4, just briefly. The first one is detecting when bootstrap fails, because right now it's going to end up most likely being infrastructure-provider specific; but at least with the providers that are using, say, CAPA, the AWS provider: when we run cloud-init, and it runs kubeadm init or join or whatever else it's doing, there are multiple times that I've seen where the overall cloud-init script fails.
A
It returns a nonzero exit code, but kubeadm was able to get far enough to join the node to the cluster, so if you run kubectl get nodes, it shows up. More likely than not, though, it's probably missing some labels that would otherwise be there, and it is not necessarily a 100% healthy node or machine. So this falls under stability and observability, and feeling confident that things are working when they're supposed to, and that we can detect when they're not.
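(Nothing like this exists in the project yet, and as noted it will likely end up provider-specific, but as a hedged sketch of the idea: a node-side check could read cloud-init's result file and surface the failure instead of letting a half-joined node look healthy. The file path and JSON shape are assumptions about the node image, not the project's mechanism.)

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// cloudInitResult mirrors the relevant part of cloud-init's result.json;
// treating this path and shape as an assumption about the node image.
type cloudInitResult struct {
	V1 struct {
		Errors []string `json:"errors"`
	} `json:"v1"`
}

func main() {
	data, err := os.ReadFile("/run/cloud-init/result.json")
	if err != nil {
		fmt.Println("cloud-init has not finished (or the path differs):", err)
		os.Exit(1)
	}
	var res cloudInitResult
	if err := json.Unmarshal(data, &res); err != nil {
		fmt.Println("could not parse result.json:", err)
		os.Exit(1)
	}
	if len(res.V1.Errors) > 0 {
		// The case described above: kubeadm may have joined the node,
		// but the overall cloud-init script still failed.
		fmt.Println("bootstrap failed:", res.V1.Errors)
		os.Exit(1)
	}
	fmt.Println("bootstrap succeeded")
}
```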
A
So that's something I'm interested in trying to see us move forward on for the next release after alpha 3. Related to observability is improved status conditions. Right now we have basically some events that may or may not be generated, based on whatever people are coding, as well as a few fields in status in the various custom resources that we have. But we aren't very consistent with what we're doing, and there have been a lot of cases in CAPA, for example, where we previously were just logging errors.
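(The improved-status-conditions issue is, as noted below, still mostly empty. Purely as an illustration of the direction, a condition entry along the lines of what core Kubernetes uses might look like this; the package and field names are assumed, not the proposal's final shape.)

```go
// Package v1alpha4 is a hypothetical future API package, used here only
// to anchor the sketch.
package v1alpha4

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Condition is a sketch of a richer status entry that a controller would
// set (and keep updated) instead of only logging an error.
type Condition struct {
	// Type of condition, e.g. "Ready" or "BootstrapSucceeded" (assumed names).
	Type string `json:"type"`
	// Status is True, False, or Unknown.
	Status corev1.ConditionStatus `json:"status"`
	// LastTransitionTime records when the condition last changed.
	LastTransitionTime metav1.Time `json:"lastTransitionTime,omitempty"`
	// Reason is a machine-readable, CamelCase explanation.
	Reason string `json:"reason,omitempty"`
	// Message is a human-readable explanation.
	Message string `json:"message,omitempty"`
}
```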
A
Next is extensible machine-delete, or pre-delete, hooks. Right now we have cordon and drain hard-coded as part of the machine deletion process in the machine controller. If you want to swap out that behavior, you can't; if you want to turn it off, there's an annotation, but there's no way to replace it with anything else. And if you need to do anything else before the machine is 100% deleted, I don't believe we have a way to do that. So I think that would be nice to see. Dax: are these proposals written down somewhere?
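(No such extension point exists today; cordon and drain are hard-coded. This is a speculative sketch of what a pluggable pre-delete hook could look like; the interface name and signature are hypothetical, not an existing Cluster API type.)

```go
// Package hooks sketches a hypothetical extension point.
package hooks

import (
	"context"

	clusterv1 "sigs.k8s.io/cluster-api/api/v1alpha3"
)

// PreDeleteHook is what the machine controller could call before deleting
// a machine's infrastructure; cordon-and-drain would become just the
// default implementation rather than hard-coded behavior.
type PreDeleteHook interface {
	// Run blocks deletion by returning an error; the controller would
	// requeue and retry on the next reconcile.
	Run(ctx context.Context, machine *clusterv1.Machine) error
}
```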
A
Yes, these are links to issues or proposals. Some of them may not be fully fleshed out; improved status conditions is pretty empty, but all of these have had some amount of discussion. Here's the blocking deletion with the finalizer annotation, and there's been a lot of discussion about it. So yeah, definitely take a look, and like Tim said: if there are things that you think should have a higher priority, or things that aren't listed here, please let us know sooner rather than later.
C
We can assign it... I mean, like I said on the PR, we just need the two more that are in flight; they need to be rebased and then we're done, so I can just close it after that. Which PR are you talking about, the one that updates the references? Oh, the one that merged is the one that we worked on, like the two for the MachineSet. Yeah.
A
Mm-hmm. Well, I would say they need to fix that, maybe. Or do we need to triage this? Like, Warren, did you look into this at all? Yeah.
A
Fix glossary terminology around management clusters: for this one we still have the open pull request, and I pinged them yesterday, I think, trying to figure out if they're going to be able to make the requested changes. Hopefully we'll get a response, but if we don't get one soon, we may just need to make the changes ourselves. Jason, I was...
A
There was an item to automate as much of the release as possible; I know that Noah is actively working on that. Update drain lib to support unready nodes: we do have a pull request for this that came in yesterday or the day before, and I don't see Michael on the call, so I'd tag him on this one.
A
I think, largely, the contributor was looking for guidance to see if this is the logic that we wanted to have in the machine controller, because I'm fairly certain that the rest of the deltas in this pull request are just copying the changes from Kubernetes. So hopefully we can get Michael to look at this. Joel, did you... are you in contact with Michael regularly?
A
Webhook conversions for machine resources should automatically update the API group version: we do have a PR in flight for this one, and it just needed some clarification, which we finally arrived at yesterday, so I expect that to move forward.
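(For context, this is roughly the controller-runtime conversion pattern such a PR builds on: one API version is marked as the hub, and older versions convert to and from it. The snippet assumes v1alpha3 is the hub and elides the field copying; it is a sketch of the pattern, not the actual PR.)

```go
// Sketch of the v1alpha2 side, assuming v1alpha3 is the conversion hub.
// In the hub package, the type only needs an empty marker method:
//
//	func (*Machine) Hub() {}
//
// The Machine types and field copying are elided; this shows the pattern,
// not the actual change.
package v1alpha2

import (
	v1alpha3 "sigs.k8s.io/cluster-api/api/v1alpha3"
	"sigs.k8s.io/controller-runtime/pkg/conversion"
)

// ConvertTo converts this Machine to the hub version.
func (src *Machine) ConvertTo(dstRaw conversion.Hub) error {
	dst := dstRaw.(*v1alpha3.Machine)
	dst.ObjectMeta = src.ObjectMeta
	// ... copy the remaining spec/status fields, updating the group version ...
	return nil
}

// ConvertFrom converts from the hub version to this Machine.
func (dst *Machine) ConvertFrom(srcRaw conversion.Hub) error {
	src := srcRaw.(*v1alpha3.Machine)
	dst.ObjectMeta = src.ObjectMeta
	// ... copy the remaining spec/status fields ...
	return nil
}
```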
Document logging standards: I said I was going to take this and never did, but it's in the backlog, so I will try to get around to it.
A
It's more that, like: log messages anywhere that you're returning an error, at least if you really need to know where it came from, because we're not always printing the stack trace, and you can log the error. Especially if it makes its way back up to the controller-runtime code for reconciliation and gets logged there, you'll have no idea what file it came from. And other things like: log messages should always start with a capital letter, use structured logging, that sort of stuff. clusterctl...
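(A small sketch of the conventions being described: capitalized messages, structured key/value pairs, and logging the error at the point of failure so its origin isn't lost. The reconciler and helper here are stand-ins, not real project code.)

```go
package controllers

import (
	"context"

	"github.com/go-logr/logr"
	ctrl "sigs.k8s.io/controller-runtime"
)

// MachineReconciler is a pared-down stand-in for a real reconciler.
type MachineReconciler struct {
	Log logr.Logger
}

func (r *MachineReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	// Bind identifying key/values once; every message below carries them.
	log := r.Log.WithValues("machine", req.NamespacedName)

	if err := r.reconcileDelete(ctx, req); err != nil {
		// Log at the point of failure: if the error is only logged by
		// controller-runtime at the top of the reconcile loop, the file
		// it came from is lost.
		log.Error(err, "Failed to reconcile machine deletion")
		return ctrl.Result{}, err
	}

	// Capitalized message, structured key/values rather than printf.
	log.Info("Reconciled machine")
	return ctrl.Result{}, nil
}

// reconcileDelete is a placeholder for the real deletion logic.
func (r *MachineReconciler) reconcileDelete(ctx context.Context, req ctrl.Request) error {
	return nil
}
```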
A
I don't know that we need to do this in Cluster API anymore; I think maybe we can just close this. Or, at the very least, we could have a single sentence somewhere that says: the bootstrap data that Kubernetes stores is just in a secret, and when you send it over to your machine as an infrastructure provider, you probably should encrypt it. I think that's about all that we can and should do.
E
Yeah, I was gonna say: I've talked to our QE folks, and we have some documentation around it, but our documentation is basically "do the right thing and put them in the right places." Useless? Yeah, exactly. So it would be helpful if we actually told people, or pointed people to, more proper documentation on how you create those certificates and the requirements that are around them.
A
Let's see, this one: I was chatting... we've been going back and forth, and I need to follow up, because there's an update here that I need to respond to. I don't know that it'll make the milestone; we'll find out after some more clarification. All right, update the CAPD quickstart: we went back and forth on this one, and we talked about it last week, about keeping examples, so I think it's worthwhile to keep this one open.
A
And a recommendation guide for how to implement a bootstrap provider: I don't feel strongly that this needs to be in the milestone. And is this different than the guide that you wrote, Andy? This is a sibling to the guide that Liz wrote. So what Liz wrote is the implementer's guide for implementing a... oh.
A
That one was the guide for implementing an infrastructure provider, for machines and clusters: the Mailgun machines and clusters example. So this was about having basically something parallel for doing a bootstrap provider, and then presumably, by extension, we could have one for doing a control plane provider.
A
I think that, generally, if you follow the steps that are in here, the majority of what's in here could be adapted to a bootstrap provider without needing full documentation for it. So, Tim, this is assigned; it's not that nobody wants it, it's just that we haven't had a PR for it. I'm gonna just move it to next; if it gets done in time, it does.
A
We know we need to document how to upgrade from alpha 2 to alpha 3; Noah's going to be working on that, and anybody else who's interested can sign up. Define the clusterctl move process: that's in flight. Upgrade to CRD v1: we have a pull request for that. Vince, did you ever look at this fun oldie?
A
Well, you may not be... probably, you may not have permission to put it in the milestone. If you're opening up a pull request or an issue, please let us know if you think it needs to be in 0.3, and please try to put the labels on it for the areas. So if you're opening up a clusterctl issue or PR, please add area/clusterctl, and so on. And for the maintainers: if you're reviewing PRs or opening PRs and you have access, please set the milestones.
A
So we do run all of our controllers as Kubernetes Deployments, and if Kubernetes is able to detect that the single-replica pod is unhealthy, it will replace it. In v1alpha3 we are adding, or have added, liveness and readiness probes, which will help with that. And yeah, I think that's pretty much it: Kubernetes should handle replacing failed pods, and if you're using multiple replicas, make sure you have leader election turned on.
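(For reference, a minimal sketch of turning leader election on in a controller-runtime manager; the election ID is illustrative.)

```go
package main

import (
	"os"

	ctrl "sigs.k8s.io/controller-runtime"
)

func main() {
	// With multiple replicas of a controller Deployment, leader election
	// ensures only one instance reconciles at a time; the rest stand by
	// and take over if the leader's pod fails.
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
		LeaderElection:   true,
		LeaderElectionID: "example-controller-leader-election", // illustrative ID
	})
	if err != nil {
		os.Exit(1)
	}

	// Controllers would be registered with mgr here.
	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		os.Exit(1)
	}
}
```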
C
If you want, like, more concurrency or more parallel processing, we suggest using namespaces: you can run, for example, multiple Cluster API instances and use the namespace flag to scope each Cluster API instance to a specific namespace. That can also reduce the blast radius, in addition to all the other things we mentioned. Sure.
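(A sketch of the namespace scoping being described, using the manager's Namespace option as it existed in controller-runtime around that time; newer versions configure this through the cache options instead. The namespace value is illustrative.)

```go
package main

import (
	"os"

	ctrl "sigs.k8s.io/controller-runtime"
)

func main() {
	// Each instance watches only its own namespace, bounding the blast
	// radius; the value would normally come from a --namespace flag.
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
		Namespace: "team-a", // illustrative namespace
	})
	if err != nil {
		os.Exit(1)
	}

	// Controllers would be registered with mgr here before starting it.
	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		os.Exit(1)
	}
}
```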
A
Let me go through these just briefly: we have 13 PRs that are open for the milestone right now. I have looked at the majority of these, and I imagine several of you have as well. I think they're all making their way through to getting merge-ready, so I don't know that we really need to discuss them in detail, unless anybody wants to talk about any of them in particular.
G
We have, let me say, a not ideal approach to logging, with printf, with klog, and with other machinery: a big mix of technologies for clusterctl. So I'm proposing a different approach that is basically a custom logger, a custom logger that provides an output that can achieve a good trade-off between all the goals. I'm also proposing that the logger at the end is pluggable, so if someone is not happy with the output, they can change it: they can plug in their own logger and control the output.
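(A speculative sketch of the pluggable-logger idea: a package-level logr.Logger inside clusterctl with a swappable implementation. The package and function names are hypothetical, not the actual proposal.)

```go
// Package logf sketches a hypothetical clusterctl logging package.
package logf

import "github.com/go-logr/logr"

// log is the logger used throughout clusterctl; the default would be the
// custom, human-friendly implementation being proposed.
var log logr.Logger = logr.Discard() // stand-in for the default logger

// SetLogger lets anyone unhappy with the default output plug in their own
// logr implementation and take full control of the output.
func SetLogger(l logr.Logger) { log = l }

// Log returns the current logger for use inside clusterctl.
func Log() logr.Logger { return log }
```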
A
Yeah, I like the approach that you've taken here, of having just a logr.Logger that is available everywhere, and where the only place that we force any specific implementation is at the very top, where clusterctl is set up. And I think, to an end user, the cleaner version down here, which doesn't have all of the message chunks and has fewer quotes, is probably going to be better.
A
All right, let me just quickly go through any new issues that have come up that don't have a milestone. We've got nine; I'll go from bottom to top here, briefly. So we had a request to allow clusterctl to specify a custom workload cluster template for... What do you think about the milestone on this one?
A
And then we had: someone found an issue with the kubeadm control plane creating more replicas than desired. I definitely think we need to consider this. Do y'all think... is this something, Jason, that you would say is like a P0? Or is it more just: if we accidentally end up creating one more replica than we should, we'll get around to deleting it as things settle out?
E
I think... I'm not sure. I think we definitely need to triage it more and try to reproduce it, because if it's trying to init two different control planes, then it's definitely going to be an issue. If it's initting and accidentally scaling up by one, then it's, you know, less of a concern. Yeah.