From YouTube: 20191030 - Cluster API Office Hours
A: Hello, today is Wednesday, October 30th, 2019. This is the Cluster API office hours meeting. Cluster API is a subproject of SIG Cluster Lifecycle. Today's meeting is being recorded, and we do have a CNCF code of conduct that we abide by, and we also have meeting etiquette: please use the raise-hand feature in Zoom if you would like to say something, and I'll do my best to notice and call on you. We also have this agenda document, which I'm sharing.
A: There were new releases in the past week for at least three of our providers. We had a new release of the kubeadm bootstrap provider; this was just a minor change, but it did fix an error with some of the init locking that we do to make sure that we don't have two control plane machines that both try to run kubeadm init at the same time. We also had a series of bug fixes in CAPA and CAPV.
C: So, just with KubeCon and all the festivities coming up: I know that we're targeting early January for v1alpha3, but given that KubeCon Europe is the end of March, and we have a lot of work to do and haven't merged it yet, I would like to propose that we move the target date to early March, or maybe February. What do you all think about that?
D: I'd just like to emphasize that we should focus on a feature release, or completeness, more so than a time box. So if that means pushing the date, I'm wholeheartedly in support of making it more complete. Part of the feedback we got from our face-to-face was that our release and our documentation were not necessarily complete.
A: I do have a section down here below, when we get to the control plane proposal, to talk about load balancers in more detail, but I think we can wait for that. Thank you for the update. Moshe, I see your hand is up — do you have a question, or did it just not clear itself? Okay, thanks. Yeah, sometimes Zoom clears it, sometimes it doesn't. Jason, you have the next couple of items.
D: I do think, from the YAML perspective, we should have batteries included but swappable — here's the batteries included — or maybe even have it in a templated-style format with the default template already provided, so that it's easy to override. So if people use kustomize, they can just override with their kustomize integration.
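The "default template, easy to override" idea mentioned above might look something like this with kustomize; the layout and file names are purely illustrative, not an actual provider convention.

```yaml
# overlay/kustomization.yaml — hedged sketch of "batteries included but swappable":
# the provider ships a default base, and a user overlay patches only what differs.
resources:
- ../base                      # the default template shipped by the provider
patchesStrategicMerge:
- cluster-patch.yaml           # user's overrides, e.g. region or machine size
```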
H: But as part of this, we also created a notifications mailing list so that we can start alerting people when there are failures with these periodic and post-submit jobs that we care about. So if you are interested in joining this group, please let me know and I will make sure to get you added. We opted against using the same cluster lifecycle mailing list because, right now, these jobs are much noisier than the current kind jobs and kubeadm jobs that are out there today.
H: Alright, so the other topic I had is related to the cluster autoscaler integration. There was some activity recently on the PR to the autoscaler. Right now it's targeting v1alpha1, but there has been some new discussion that they may potentially try to modify what's out there today to be able to target arbitrary versions of cluster API resources, rather than hard-coding to v1alpha1.
D: I get what you're saying; it's just a choice of words. Yeah, it's kind of out of our scope, but I think we can talk with them. It's up to them to decide, honestly. We can offer advice and insight, but ultimately, at the end of the day, they're in control of that code, not us. So I think offering feedback on the PR seems totally like the right thing to do, along with potential suggestions.
D: Like decoupling — we do decoupling in a couple of different ways, and we've done it even in our own code base, even though it makes me cringe, but there are a couple of ways to do that. So that also seems prudent, but ultimately it's up to them to choose. I don't know — I know that the Red Hat folks are very interested in it. Michael?
I: Yeah — obviously at OpenShift we care about this, so getting it merged in some form or other is what I was feeling out for. Andrew McDermott, who's been working on this for us on the cluster autoscaler side internally, has come up with a way to make it not tied to any specific version. But I would like to see it merged sooner rather than later — as I said previously, merge it in some form, and then we can work on it together.
A: Yeah, I think that's fair. I mean, like Tim said, the approvers of cluster autoscaler are in charge of approving and merging those PRs, so I don't think there's anything we're going to do to stop them, nor should we. And if they run into the cluster API v1alpha1 issues, then we basically tell them: you've got to upgrade to alpha 2 to get any fixes.
A: Yeah, so I think with this one, based on the discussion that was in the issue: we need to look at the Kubernetes versions that we expect to support for management clusters. And then, if we're supporting older versions — before 1.17, or whenever this lands — we'll need validating webhooks until the minimum version of Kubernetes that we're using is one that has the field immutability markers.
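A rough illustration of the validating-webhook stopgap described above — rejecting updates to a field that should be immutable until the minimum supported Kubernetes version can enforce that natively. This is not actual cluster-api code; the field path used below is a made-up example.

```python
# Hedged sketch of a validating webhook's immutability check: on an UPDATE
# admission request, compare the old and new objects at a dotted field path
# and deny the request if the value changed.

def dig(obj, dotted_path):
    """Walk a nested dict by a dotted path, returning None if absent."""
    for key in dotted_path.split("."):
        if not isinstance(obj, dict):
            return None
        obj = obj.get(key)
    return obj

def validate_immutable(old_obj, new_obj, dotted_path):
    """Return (allowed, message) for an admission-style UPDATE check."""
    old_val = dig(old_obj, dotted_path)
    new_val = dig(new_obj, dotted_path)
    if old_val != new_val:
        return False, f"{dotted_path} is immutable (was {old_val!r}, got {new_val!r})"
    return True, ""

old = {"spec": {"version": "v1.16.2", "replicas": 3}}
new = {"spec": {"version": "v1.17.0", "replicas": 3}}
allowed, msg = validate_immutable(old, new, "spec.version")
print(allowed)  # False
```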
Michael, I see your hand is up — is it still up from before, or... okay, thank you. Zoom is being weird today.
I: Last week I brought this up: some refactoring of the kubectl drain library. It's going through, along with a couple of other patch sets to enhance it, and so I participated in their weekly meeting last week. Basically there are just some other small things I'd like to see get done there; they're really more nice-to-haves.
F: That one was waiting to get approved — I don't know if it got approved; I was not paying attention, though.
A: So it could do something similar to what, say, CAPA is doing today, or we could have some sort of external thing where you set up DNS yourself, or you use Route 53 or whatever you're using, to get DNS and a load balancer put together. And then we potentially take what Moshe has started for a load balancer provider proposal and proceed with that. But what I'd like to see us think about, at least in the short term, is the way the proposal is written.
A: If you don't specify a control plane ref on your cluster, then all of the existing functionality — and correct me if I'm wrong, Jason — I believe all the existing functionality that we have in alpha 2 today would continue to function. So you could still have individual and multiple machines for control planes; the kubeadm config secret, the CA cert and key, the etcd CA cert and key, and the service account key pair would all continue to be generated in the exact same fashion that they are in v1alpha2. And if you opt into using the kubeadm control plane from v1alpha3, you must bring your own stable DNS name, and the control plane upgrades and management will work. If you choose to stay machine-based, there's no regression, because everything that currently works in alpha 2 would continue to work in alpha 3.

And if we can get the community's approval, I would love to see us approve the control plane proposal as it stands today, given that there is no regression in behavior, and simultaneously work on fleshing out the prototypes for load balancer providers and see if we can find a way to have basically a win-win, where for infrastructure providers that don't have load balancers, we can find a way to get them for them, essentially. Yeah.
H: The idea was definitely to retain the previous workflows, at least for some period of transition. We can determine exactly how long we need to maintain that behavior, because I think longer term we're not going to want to maintain those two different parallel support tracks, but I don't see any issue doing it for the v1alpha3 cycle, and potentially another cycle after that if we need to retain that compatibility for some reason. And I see, Daniel, that's a thumbs up. Yeah.
A: I assume that a stable IP would work as well — it just needs to be a stable endpoint, whether it's DNS or IP, and somebody can correct me if I'm wrong. But the issue is this: let's say you have a one-machine control plane to start off with, and you generate your certificates for the API server using that machine's IP address, and it's not a stable thing — it's not a VIP, it's not a DNS name. So now you create a second control plane machine because you're trying to do a rolling upgrade, and your kubeconfig secret points at the original machine's IP address; your API server cert only has the IP address of the original machine; and so this second machine basically can't join the cluster, or can't fully work. So we need a stable endpoint that can work across multiple machines as they rotate in and out. Absolutely.
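For context on the stable-endpoint requirement described above: with kubeadm, the endpoint can be declared once in the ClusterConfiguration, and kubeadm includes its host in the API server certificate's SANs, so control plane machines can rotate behind it. The DNS name below is a made-up example.

```yaml
# Hedged sketch: declaring a stable endpoint up front with kubeadm, so the
# serving cert is valid for the endpoint rather than any one machine's IP.
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
controlPlaneEndpoint: "cp.example.com:6443"   # stable DNS name (or VIP), not a machine IP
apiServer:
  certSANs:
  - "cp.example.com"                          # extra SANs can be listed if needed
```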
L: And thank you for restating the problem very clearly. One follow-up question, Jason — I think you and I briefly talked about this in Slack, on kind of the same topic: does anything need to change in the code? As far as I understand, I don't think anything does — I think we just need to call it out in the documentation, and maybe make it clear in the proposal that this is an implicit requirement. Or are we talking about...?
H: The question is: how do we help those providers that don't have that problem solved? It could very well be done through documentation, but I think there is a hope that there would be a much more automated fashion. I'm not expecting there to be additional changes needed to the code right now, as far as the control plane proposal goes. However, that may change as we continue to investigate what this load balancer support might look like.
L: Yeah — I guess the point that I want to bring up is that there are, I think, in practice many different ways of providing a stable endpoint, and I hope we don't block ways that haven't occurred to us yet. So as long as the requirement is clear, and as long as we can provide some information when we don't find a stable endpoint, that would be great.
C: Yeah, I just wanted to put it out there that this was actually one of the driving factors and reasons to push out the release — for completeness of v1alpha3. I think this might take a little bit more time to flesh out; but also, as it stands today, the control plane proposal is actually pretty complete — it just doesn't address this particular issue. Same thing for multi-AZ support and failure domains, for example; we're going to spin that off into a different issue.
A: 1647 — so this is an addendum, or a proposed addendum, to the control plane proposal. If you haven't had a chance to take a look at this, please do so, because we would ideally like to get the control plane proposal approved and merged, start coding on it, and resolve this and include it as part of v1alpha3 if possible.
B: So are there any assumptions right now around whether changes to the machine image are expected or not? Are we essentially saying that if you need to change your machine image to have a keepalived daemon or something to make the VIP work, that's okay — is that up to the user to do? Or are we saying that we're only open to implementations that don't require changes to the machine image?
H: I also think one of the challenges here would be that there are going to be limitations in a lot of the cloud provider environments that wouldn't allow this to work. So whether or not there is some type of image requirement, it may be related more to individual infrastructure provider implementations, and less a generic requirement across the board.
B: Sure, yeah — I was mainly asking in case we'd already considered it, but it sounds like we haven't. So I can work with whoever else is working on this and see what it would take to implement a load balancer that would require other daemons or changes to the machine image. — That sounds good, thanks.
J: Sure, so yeah — we just put up this PR with the machine health checking and node remediation proposal. It basically proposes a new CRD and a controller to make sure that nodes that are unhealthy are remediated automatically, so that as a user you're not paying for useless instances without even noticing. So yeah, just looking forward to getting feedback on that PR. — Awesome.
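As a rough sketch only — the proposal was still a fresh PR at this point, so the group/version and field names below are illustrative of what such a machine-health-check resource might look like, not the final API.

```yaml
# Hedged sketch of a machine-health-check CRD instance: machines matching the
# selector whose Node stays NotReady past the timeout get remediated.
apiVersion: cluster.x-k8s.io/v1alpha3
kind: MachineHealthCheck
metadata:
  name: worker-unhealthy-5m
spec:
  clusterName: my-cluster
  selector:
    matchLabels:
      nodepool: workers
  unhealthyConditions:
  - type: Ready
    status: "False"
    timeout: 5m          # remediate machines whose Node is NotReady for 5 minutes
```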
A: Thank you for opening this up. I myself have not had time to take a look at it yet, but I know some others have, so, like Alberto said, please take a look and add comments. All right, Fabrizio — where do things stand on the clusterctl redesign proposal? Are we close to getting all the comments resolved?
M: ...comments on GitHub.

A: Alright, so I would like to try and solicit approvals from the people who have added GitHub reviews. So if you wouldn't mind, if you haven't already done so, update the list of reviewers to include the people that are listed here in GitHub, and let's reach out and see if we can get approvals from them as well.

M: Okay, I'll have it updated at least.

A: Great, thank you.
C: I mean, first I would love for the proposals to get approved and merged this week, if possible. I know we have a new one that just came in today, so that's probably a little early; but the clusterctl redesign and the testing one seem all ready to go — if we can get approvals, that'll be quick.
A: Alright, well, let's take a look at the CAPI issues we have that don't have milestones. So, the first one — I know we talked about this last week, where we were waiting for some more evidence. I think this was about having cluster API take care of doing things like setting up CNI plugins and storage and whatnot. There was some discussion about trying to do this with add-on management. My gut feeling is to close this, but I don't know — Tim or Jason or anybody else, do you have any ideas?
C: Maybe this could be docs, but that's pretty much it.
A: Next, we have: the default secret for the kubeconfig after cluster creation should have a unique Kubernetes admin user. If we create 100 clusters and we get 100 kubeconfig secrets, they all have the same user name in the kubeconfig. This is a request to use a unique username, so that if you use a tool that flattens multiple kubeconfigs into a single kubeconfig, the users don't stomp on each other. Seems reasonable to me, and I think this wouldn't be a breaking change.
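A minimal sketch of why the shared username bites when kubeconfigs are flattened. The merge function below is a toy stand-in for what flattening tools do (the last entry with a given name wins), and the cluster and user names are made up.

```python
# Toy model of kubeconfig flattening: merge the 'users' lists of several
# kubeconfigs by name, with later entries overwriting earlier ones.

def flatten_users(*kubeconfigs):
    """Merge the 'users' lists of several kubeconfigs by name (last wins)."""
    merged = {}
    for cfg in kubeconfigs:
        for user in cfg["users"]:
            merged[user["name"]] = user["user"]
    return merged

# Both clusters ship the same default username, so one credential silently wins:
cluster_a = {"users": [{"name": "kubernetes-admin", "user": {"token": "token-for-a"}}]}
cluster_b = {"users": [{"name": "kubernetes-admin", "user": {"token": "token-for-b"}}]}
print(len(flatten_users(cluster_a, cluster_b)))  # 1

# With unique usernames (e.g. "<cluster-name>-admin"), both credentials survive:
cluster_a2 = {"users": [{"name": "cluster-a-admin", "user": {"token": "token-for-a"}}]}
cluster_b2 = {"users": [{"name": "cluster-b-admin", "user": {"token": "token-for-b"}}]}
print(len(flatten_users(cluster_a2, cluster_b2)))  # 2
```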
A: I think we're all right — we're good. Okay, yeah. All right. And then I don't know if it's worth going through each one, but I did want to mention that there are several issues that have been opened within the past week or so, with things like: add a waiting or not-ready phase for machines when a node isn't ready; support updating "infrastructure ready" on the cluster; an umbrella issue about state transitions and conditions; renaming errorReason and errorMessage to failureReason and failureMessage. We haven't necessarily gone over these in the meeting, because the reporters were assigning milestones to them, but I would encourage you all to take a look at the list of open issues, especially the ones that are in the 0.3 milestone, and please provide feedback.
A: We need your feedback so that we're not just operating in a vacuum, especially if we're renaming fields — although we will have conversion webhooks to do the translations between alpha 2 and alpha 3. So just a PSA: please go take a look at what's out there and add comments if you have opinions.
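A toy illustration of the conversion-webhook field mapping mentioned above. Real cluster-api conversion code is Go; the rename table reflects the proposed errorReason/errorMessage rename, and everything else here is illustrative.

```python
# Toy sketch of converting a v1alpha2-style status with errorReason/errorMessage
# into the proposed v1alpha3-style failureReason/failureMessage; other fields
# pass through unchanged.

RENAMES = {"errorReason": "failureReason", "errorMessage": "failureMessage"}

def convert_status_v1alpha2_to_v1alpha3(status):
    """Return the status dict with renamed keys; other fields pass through."""
    return {RENAMES.get(key, key): value for key, value in status.items()}

old_status = {"errorReason": "CreateError", "errorMessage": "boom", "phase": "Failed"}
print(convert_status_v1alpha2_to_v1alpha3(old_status))
# {'failureReason': 'CreateError', 'failureMessage': 'boom', 'phase': 'Failed'}
```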
And then, in terms of PRs — let me double-check again — yeah, 13 open if you take the proposals out of the count. I need to check in with Andrew again; I know he's dealing with something right now, so we'll see
if we can get this document updated. I think I need to go through and do some reviews on things. But if you have a PR that's in this list and it's not getting attention, please let us know, and we will do our best to review and get stuff moving forward. All right — anything else before we call it a day?
A: Going forward, yeah. Thank you, Vince. I will echo that, and again, I really encourage you all to take a look at the open issues and pull requests. If you are new to cluster API and things are confusing, file issues — we know our documentation can use some improvements. So if you are running through the quick start and you can't figure something out, if you're looking at the code and you can't figure something out, or if you read any of the other docs that may need improvements: file issues, open PRs. We really want to try and improve here.