From YouTube: Kubernetes SIG Cluster Lifecycle 20190227 - Cluster API
Description
Meeting Notes: https://docs.google.com/document/d/1Ys-DOR5UsgbMEeciuG0HOgDQc8kZsaWIWJeKJ1-UfbY/edit#heading=h.rowj34kleg2z
A
B
B
[inaudible] time zone, and because of that [inaudible], including the meeting URL. If any of you are interested in joining the meeting, you could probably add a few agenda items already. Most probably, what we'll expect is that we have a couple of questions open with respect to the cluster autoscaler and machine controller integration, mainly the scale-down strategy PR that we have been discussing. We will try to settle that first of all; that will be more or less the agenda for the meeting, and we'll see.
B
A
There were also some questions, I think in Slack last week, about whether there was an overall tracking issue for the integration work between the Cluster API and the autoscaler, and I think the Slack conversation and I both determined that there was not a tracking issue. There are a couple of issues for specific integration points, in particular scaling down by a specific machine, but there wasn't an overall tracking issue.
A
I think one thing we should do here is create an overall tracking issue, and then people can subscribe themselves to that issue. We can make sure that they're aware of meetings like this that are coming up, and also, you know, people who've opened PRs are likely to be interested, and we can ping them and let them know that we'll use a single tracking issue as the coordination point for this work.
B
I think that makes sense, so I can probably pick up that task. I will create one issue which connects these smaller, segmented issues and PRs, and gives more or less a larger picture of what the things required are for the autoscaler to integrate with the machine controller, and then we can also reference meetings like this one [inaudible].
C
E
I can provide some context here. So we added a mock provider, an example provider, there, and what the integration test does today is just compile the two managers, create the kind cluster, and deploy the CRDs to the kind cluster, and we actually want to add more verifications, like on the CRDs and also on what the mock provider generates.
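As an aside, the kind of extra verification mentioned, checking that the deployed CRDs actually became usable, can be sketched roughly as below. This is purely an illustrative sketch: the function name and the plain dicts standing in for CRD objects are hypothetical, not real test code from the repository; a real test would fetch these objects from the kind cluster's API server.

```python
# Illustrative sketch only: verify that deployed CRDs report an
# Established=True condition. Plain dicts mimic the shape of
# CustomResourceDefinition status as returned by the API server.

def crds_established(crds: list) -> list:
    """Return the names of CRDs that do NOT report Established=True."""
    missing = []
    for crd in crds:
        conds = crd.get("status", {}).get("conditions", [])
        ok = any(c["type"] == "Established" and c["status"] == "True"
                 for c in conds)
        if not ok:
            missing.append(crd["metadata"]["name"])
    return missing

deployed = [
    {"metadata": {"name": "machines.cluster.k8s.io"},
     "status": {"conditions": [{"type": "Established", "status": "True"}]}},
    {"metadata": {"name": "clusters.cluster.k8s.io"},
     "status": {"conditions": []}},  # deployed but not yet established
]
print(crds_established(deployed))  # ['clusters.cluster.k8s.io']
```

A test would assert that this list is empty before proceeding to deploy resources that depend on the CRDs.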
E
E
F
So I've been adding some notes to the meeting notes, but they're actually in the wrong order. So first I wanted to let everyone know what the status of the update process has been. [inaudible] has been working with Kate and Brooke to determine what a proper URL naming scheme will be, and once that's determined, we believe we're going to share the same Netlify account to serve our documentation.
F
So that's where we are on that. Now, there's an umbrella issue for documentation requirements, and there are at least two, maybe three or four, actual issues or PRs open now for documentation that can just be fixed, and I'd like to start working on those. But there are two big items that are not really documented now, and I just wanted to get a feel from the group in terms of whether or not anyone was interested in documenting them, and whether we thought they were necessary to document for v1alpha1, etc. So those are the two issues right here.
F
F
F
D
D
F
G
G
G
H
F
So this ticket is in the v1alpha1 milestone, and it has a number of things, and you're right that updating the documentation now doesn't actually change the live documentation, which has a manual deployment process. So we have a larger documentation issue, and I think this is one of the things that we could kick out of v1alpha1. That was sort of the question: to what extent do we want v1alpha1 to be documented?
H
Given the federation of the providers, I think it's super important to have good, clean documentation. I think one thing we learned from KK, as a lesson of what not to do, is to have bad documentation on releases, because the fallout from that flows back into the project in multiple different incarnations: people don't know where to find issues, they don't know what's going on with, like, how to get the release artifacts. It never got done until we actually buckled down and [inaudible].
H
We actually devoted people towards executing the docs as part of the deliverable for a given milestone. We suffered from this; it backfired on us for a long time. So, as a lesson learned, I think it's probably pretty important to resource this appropriately, to try and get this as part of the deliverables for v1alpha1.
F
Thank you. So I'll go ahead and open an issue for the controller and MachineSet documentation. Actually, I'll open a sub-issue for every issue in the umbrella, in case it makes it easier for others to pick up parts of it. And then again, I think all of the current open documentation tickets are things that you've been assigned and you've done or plan to do, and we're just waiting for the Netlify account so that we can actually submit a PR and update it.
A
All right, I should move on to the alpha burndown list. I don't know who's been driving this for the last couple of weeks; I've been sick. It says 24 open issues, and I am scrolling down to see where we were at last week. Anyway, did I do this last week? Maybe two weeks ago? No, three weeks ago. All right, I guess we haven't done this for a while. I was trying to see if we...
A
I was trying to find the last time we did this, to see which way the number was trending. Okay, so the number is up. I know that there are at least some of the issues in there that are linked to PRs that I have not gotten to reviewing, because I've been out of the office. I've been feeling awfully sick, as has the rest of my family.
C
A
I
I
So the pivot PR that I have open right now: I work around it by basically recreating the correct owner reference after migrating the MachineSets over. So we can work around it, but ideally we would be able to, you know, use the same logic that we currently have with the MachineSets, where it'll automatically re-adopt the machines that are associated with it by the label selector.
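The re-adoption behaviour being described can be sketched as follows. This is a minimal, hypothetical sketch in plain dicts, not the actual MachineSet controller code: the function names and the simplified selector (exact label matching only, no match expressions) are assumptions for illustration.

```python
# Illustrative sketch: a MachineSet "adopts" orphaned machines whose
# labels match its selector by writing itself in as their owner, which
# is the logic the speaker would like the pivot to reuse.

def selector_matches(selector: dict, labels: dict) -> bool:
    """True if every key/value in the selector appears in the labels."""
    return all(labels.get(k) == v for k, v in selector.items())

def adopt_orphans(machine_set: dict, machines: list) -> list:
    """Set an ownerReference on orphaned machines matching the
    MachineSet's label selector; return the machines adopted."""
    adopted = []
    for m in machines:
        if m.get("ownerReferences"):
            continue  # already owned by some controller, leave alone
        if selector_matches(machine_set["selector"], m.get("labels", {})):
            m["ownerReferences"] = [{
                "kind": "MachineSet",
                "name": machine_set["name"],
                "controller": True,
            }]
            adopted.append(m)
    return adopted

ms = {"name": "workers", "selector": {"node-pool": "workers"}}
machines = [
    {"name": "m1", "labels": {"node-pool": "workers"}},            # orphan, matches
    {"name": "m2", "labels": {"node-pool": "infra"}},              # orphan, no match
    {"name": "m3", "labels": {"node-pool": "workers"},
     "ownerReferences": [{"kind": "MachineSet", "name": "old"}]},  # already owned
]
print([m["name"] for m in adopt_orphans(ms, machines)])  # ['m1']
```

The workaround in the PR instead rewrites the ownerReferences by hand after migration; reusing selector-based adoption would make that step unnecessary.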
A
A
A
H
So the comment below is that the long-term issues can be bumped if needed. Given the fact that we are t-minus one month away from our sort of hand-wavy date, should we bump them? I mean, this is typical modus operandi: the code freeze for 1.14 is next week. Is there anything inside of the backlog you would bump at this stage?
C
A
A
So I guess for the two in the 700 range that are assigned to Vince: we can ask Vince if he thinks those will land in time. And then 75 I have tried to assign to Messam, because he was going to work on that, but he is still not quite a member of the kubernetes-sigs org, even though he's a member of the kubernetes org, so the assignment was failing. I don't think he's on the call today to provide any sort of status updates, but it's related to the meeting.
C
J
K
C
F
F
L
L
L
...the API of the cluster whose lifecycle it is managing. And if the management cluster is running in a different environment from the managed cluster, or, you know, the Cluster API control plane is running in a separate cluster, then what is the mechanism by which it has access to the managed cluster's API?
L
F
Okay, so since we have some time, why don't we go ahead and discuss this, just to close off the previous issue. So for 253, I added a comment saying that I propose we leave this as is. What this means is that the documentation currently says that you should annotate, or actually I'll need to check this, to make sure whether it says that you must annotate in order for the cluster controller to work, and that's the status quo. But nothing will change; it's just a question of...
F
F
J
I
If we're going to manage a lifecycle beyond just install, we'd have to have some method of communication, either into the cluster directly via the API, or we would need, like some of the providers are doing, some type of persistent access to be able to run remote commands on individual instances that are part of that cluster. I don't think we can actually manage more than just install if we don't have some type of persistent access to the cluster.
I
F
I
I
C
G
I don't know, you know the overall architecture much better, but something that really doesn't fit for me is that we'd be mixing different abstractions, you know, just to check the health, because the responsibility of the machine controller is to provide machines; something there is not right. I mean, I understand the problem, but at the same time it's like bringing in an abstraction just to do some checking, because we want not only to check the machine, but also that the machine is actually providing a service for the cluster.
G
I don't know why we would mix things by bringing in a huge abstraction, you know, and all the liability of having to be able to access that abstraction, just for doing the health check. There are responsibilities for the different components, and it probably doesn't make much sense to me. I guess it's just something we should all really understand: is that really what we want to do? I understand the problem, but I'm not sure this is the best way. So...
F
I
C
F
F
Let's assume that we're going to use the Node object as a surrogate for node health. That requires API access, and that requires authentication, and so, if you believe that chain of thought, then the question is who's responsible for managing authentication, and since the obvious answer would be to store the kubeconfig as a secret in the namespace, I think that a lot of people would argue that that's not secure, and people have gone to great lengths to avoid doing that. So...
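The "Node object as a surrogate for node health" idea can be sketched as below. This is a hypothetical illustration, not code from the project: the dict mirrors the shape of a Kubernetes Node's `status.conditions`, and fetching it for real is exactly what would require the authenticated API access under discussion.

```python
# Illustrative sketch: judge machine health from the corresponding
# Node's Ready condition, the surrogate discussed above.

def node_is_healthy(node: dict) -> bool:
    """A node is considered healthy when its Ready condition is True."""
    for cond in node.get("status", {}).get("conditions", []):
        if cond["type"] == "Ready":
            return cond["status"] == "True"
    return False  # no Ready condition reported yet

ready = {"status": {"conditions": [
    {"type": "MemoryPressure", "status": "False"},
    {"type": "Ready", "status": "True"},
]}}
not_ready = {"status": {"conditions": [{"type": "Ready", "status": "Unknown"}]}}
print(node_is_healthy(ready), node_is_healthy(not_ready))  # True False
```

The check itself is trivial; the whole debate is about how the controller doing the check gets credentials to read Node objects in the managed cluster.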
C
L
I think the distinction is specifically for, you know, if we're thinking about something like invoking kubeadm, that may not require direct access to, or may not require exposing, the API over the network. You may be able to use some other mechanism, right, SSH, and then maybe you could have a tunnel to the API via your SSH connection. And then another example might be garbage collection of machines, machines that have been, you know, decommissioned, or maybe have failed.
L
I
L
I
G
Yeah, that's a good point. I mean, even if it looks more complicated, a custom provider for that is cleaner, because the abstraction is not misused; the responsibility fits better. You know, bringing an abstraction from one layer to another just to be able to ship something, it probably doesn't make sense that way. The other way looks more complex, but it would provide some basic implementation just to fulfill those specific use cases, so I think it's cleaner.
G
B
B
B
For example, if you are running the Cluster API controllers and the Machine API controllers, the machine controller would already have the cloud-provider-specific secrets and so on, which help it create the real machines and stuff, and that is the baseline secret that anyone would want to keep, right? I mean, if you have the cloud provider secret, then you can do more or less whatever you want to do. I'm just wondering about that in the current architecture.
B
If I understand correctly, the Machine API controllers, which are more or less only the MachineSet and MachineDeployment controllers, are only part of the picture, and we have a completely independent machine controller, right? So from the kubeconfig access point of view, we could go in a direction where this independent machine controller anyway requires access to both of the kubeconfigs, of the target cluster and of the management cluster. But you can probably think of it this way.
B
The other shared controllers might have restricted access, because they may not require access to the target cluster. Why? Because these other shared controllers only need to read the Machine objects, which are already filled in by the machine controller, which is independent and which knows more or less about both of the clusters.
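The access split being proposed can be sketched as below. This is a toy illustration of the design point, not real controller code: the class, the controller names, and the cluster labels are all assumptions made up for the sketch.

```python
# Illustrative sketch: only the independent machine controller holds
# credentials for both clusters; shared controllers (MachineSet /
# MachineDeployment) only read Machine objects in the management
# cluster, so their access can be restricted.

MANAGEMENT, TARGET = "management", "target"

class Controller:
    def __init__(self, name, clusters):
        self.name = name
        self.clusters = set(clusters)  # clusters whose kubeconfig it holds

    def can_access(self, cluster):
        return cluster in self.clusters

# Creates real machines and inspects their nodes, so needs both sides.
machine_controller = Controller("machine", [MANAGEMENT, TARGET])

# Reconciles Machine objects stored in the management cluster only.
machineset_controller = Controller("machineset", [MANAGEMENT])

print(machine_controller.can_access(TARGET))     # True
print(machineset_controller.can_access(TARGET))  # False
```

The appeal of this split is blast-radius containment: compromising a shared controller would not yield target-cluster credentials.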
B
Oh, it's up to us to decide where it is hosted. Well, if we decide to host the machine controller completely inside the managed cluster, and then say that, okay, I am not keeping the kubeconfig-related secrets in my management cluster, then it might be okay from the architecture point of view. But the baseline question still remains: if we have a management cluster, is the plan to store secrets like cloud-provider-specific secrets, which may be an access key and access key ID kind of stuff?
B
If that's there, then it's anyway a big thing for me to lose if I lose management cluster access. So I'm just trying to think that, out of, say, three problems, we can probably solve one problem, but the other problems will still remain in this scenario, right? Or maybe I have got some parts wrong.
H
For recommended best practices, though, we should try to steer clear of a potential single point of failure for multiple clusters where possible. If we store things as secrets, that's a potentially dangerous scenario that we could allow for. Having an external client that is much more secure, whatever that may be, that isn't co-located with the information, is probably recommended. You know, we've batted around this topic several times.
F
F
F
A
All right, so we have a couple of minutes left, about eight o'clock if I'm reading the clock correctly. Does anybody else have anything that they'd like to mention before the end of the call? I'm just going through my open PRs and trying to add comments on those, so hopefully I'm not blocking so many people anymore.
A
There are quite a few open PRs in the backlog; I think I saw something like 17. So if those are assigned to you, please do take a few minutes to go through and review them, because I think we'd like to keep the backlog pretty small. Also, if you have PRs that you're no longer going to be working on, or if it looks like one assigned to you is something somebody's not actually doing any more, please ping it and/or close it, so that we again try to keep the backlog pretty small.