From YouTube: Kubernetes SIG Cluster Lifecycle 20180725 - Cluster API
Description
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#heading=h.6vxzlo2hz75h
Highlights:
- Pruning dependencies
- Requirements for an external cluster
- Provider code has been removed from the main repo
- Why pivot the controller stack into the cluster? What about leaving it running?
- Using the Cluster API with hosted solutions like GKE, EKS, AKS
- Using ASG / MIG
- Provider Implementers' Office Hours Slots
A: Hello, and welcome to the July 25th edition of the Cluster API working group, part of SIG Cluster Lifecycle in the Kubernetes community. The first thing on the agenda today: I put a note here. Justin [Santa Barbara] and I were talking last week about dep, and in particular a feature that he let me know about, called pruning.
A: So I sent a PR that enables dep pruning, and before getting it submitted I just wanted to mention it here and see if anybody objected to turning on dep pruning. Basically, what it does is: instead of importing all of your transitive dependencies wholesale, it imports all of them and then deletes the ones you're not actually using in your code. The one little wrinkle here is that, because of the API codegen, we have to override the pruning behavior for a couple of libraries, because we don't want to prune out the things we need to actually generate our API definitions. So if you go to the PR, you'll see we turn pruning on, and then there are a couple of things that I have to leave in there. Generally it makes the vendor directory a little bit smaller, and I think Justin's other reason for doing this on kops is that it makes it really obvious when you start actually using or removing dependencies, right.
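For reference, dep's pruning is configured in Gopkg.toml. A minimal sketch of the kind of stanza being described, turning pruning on globally and exempting a codegen dependency; the exempted project name (k8s.io/code-generator) is an assumption for illustration, not taken from the PR:

    # Gopkg.toml sketch: enable dep pruning, but keep everything needed
    # by the API code generators from being pruned out of vendor/.
    [prune]
      go-tests = true
      unused-packages = true
      non-go = true

      # Assumed codegen dependency; real exemptions live in the PR itself.
      [[prune.project]]
        name = "k8s.io/code-generator"
        unused-packages = false
        non-go = false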
A: It shouldn't be, correct. I think the external cluster we're using right now is Minikube, which happens to use kubeadm, but there's no reason that you need kubeadm; you just need a Kubernetes cluster on which to deploy that sort of initial bootstrapping stack. And when I talked to Jessica about this the other day, she said that the reason there's an interface there for Minikube was to make it easy to swap out if we decided that wasn't the best implementation, yeah.
A: I'm not sure there's anything special about the components that we run, right. So we apply sort of the Cluster API stack, which right now is an extension API server and a controller; we've talked about potentially moving that back to CRDs, which would shrink even further the set of things you need running in a cluster. But running a controller in the cluster is just starting a deployment, and that should just work anywhere, and even starting the extension API server I would expect to be pretty portable. So I'm not sure there's anything in particular we care about from the bootstrapping cluster in terms of requirements, right. Like, Minikube is a single-node cluster, and that's fine: it runs the control plane and all your workloads on the same VM, and it works in that environment. So I'd expect it to work with, you know, a local-up cluster too, without really any changes.
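For context, the CRD route mentioned here means registering the Cluster API types as CustomResourceDefinitions instead of running an aggregated extension API server. A minimal sketch of what that registration looks like, using the apiextensions v1beta1 API that was current at the time; this is illustrative, not the project's actual manifest:

    # Sketch: register the Machine type as a CRD rather than serving it
    # from an extension API server.
    apiVersion: apiextensions.k8s.io/v1beta1
    kind: CustomResourceDefinition
    metadata:
      name: machines.cluster.k8s.io
    spec:
      group: cluster.k8s.io
      version: v1alpha1
      scope: Namespaced
      names:
        kind: Machine
        plural: machines
        singular: machine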
B: Then there are the provider components, which include, you know... like for OpenStack, it's got the controller manager and then the cluster and machine controllers, right. Those try to use a label: it's looking for a label that says kubeadm master, or something like that, on it. So it is kind of kubeadm-specific, at least in those templates that we put in GitHub.
A: Yeah, it looks like the example pod, you know, mounts some local directories, which assume a host operating system layout, and it also mounts /usr/bin/kubeadm and grabs the kubeconfig. So I think those things are sort of one specific way of getting the data into the provider config in the sample files, because they happen to work with kubeadm. But I don't think there's anything in the code that means you couldn't inject a kubeconfig yourself in a different way and still have it work.
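The kind of host affinity being described looks roughly like the following pod snippet. This is a hedged sketch, not the actual sample file; the image name and exact paths are assumptions based on a kubeadm-provisioned host:

    # Sketch: host-specific mounts of the sort the sample pod uses.
    apiVersion: v1
    kind: Pod
    metadata:
      name: provider-controllers
    spec:
      containers:
        - name: controller-manager
          image: example.io/cluster-api-provider:latest   # hypothetical image
          volumeMounts:
            - name: kubeadm-binary
              mountPath: /usr/bin/kubeadm   # host's kubeadm binary
            - name: kubeconfig
              mountPath: /etc/kubernetes    # admin kubeconfig lives here on kubeadm hosts
      volumes:
        - name: kubeadm-binary
          hostPath:
            path: /usr/bin/kubeadm
        - name: kubeconfig
          hostPath:
            path: /etc/kubernetes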
F: I would love for us to figure out, long term, what those affinities are and remove them; if we document them in some type of document, that's wonderful, but certainly when I try it I will remove any kubeadm dependencies. Those labels came about, by the way, when we, about a year and a half to two years ago now I think, needed the ability to put workloads onto the master and onto the nodes separately. There was almost universal agreement on the idea of doing it.

F: There was one holdout, which is why it isn't official, but I guess now the argument would be that we don't actually want to differentiate, maybe, between a master and nodes. So we probably don't want to continue to push the labeling concept, but I imagine everyone that runs... everyone except OpenShift, shall we say, should be putting those labels on.
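For context, the master/node differentiation under discussion is the node-role label that kubeadm applies to control-plane nodes; a workload that must land on masters selects and tolerates it. A minimal sketch, using the label and taint keys as kubeadm used them in this era:

    # Sketch: schedule a pod onto kubeadm-labeled master nodes.
    apiVersion: v1
    kind: Pod
    metadata:
      name: master-only-workload
    spec:
      nodeSelector:
        node-role.kubernetes.io/master: ""    # label kubeadm puts on masters
      tolerations:
        - key: node-role.kubernetes.io/master # masters are tainted NoSchedule by default
          effect: NoSchedule
      containers:
        - name: main
          image: k8s.gcr.io/pause:3.1         # placeholder workload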
A: Nobody's assigned to it right now, though, so we should try to figure out someone to actually review that so we can get it merged. So, next open agenda item, I just want to give an FYI: over the past week I've sent a couple of PRs and deleted all of the code under what used to be the cloud directory in the main repository. The cloud/google code I moved out into the new provider repo that we voted on creating last week.
A: It's since been created, and the code has been moved over there. And for the vSphere code, there's an open issue to create a new provider repo, but nobody has taken that up and actually created one yet. So what I've done is create a seed repository under my GitHub account, if someone wants to drive that through the steering committee to create provider/vsphere and then remove the vSphere code. I also put instructions on how I created it, so it's reproducible.
A: If someone else wants to create their own seed repository, all the commit history is still here; we can still pull that out. But the nice thing is that now everybody's on the same playing field, right: there's no sort of special code in the main repo. And then I also sent a PR to update documentation; in particular, the documentation for clusterctl now has a big warning at the top, or will once the PR merges, saying that this clusterctl doesn't actually do anything.
B: We are looking at... you know, we're using clusterctl for the first time, and we looked at it. In our use case we were installing everything on the external cluster, as it does, and we wanted to use a cluster other than Minikube; so that's why Alejandra made that change. But what I wanted to know is: why do we pivot everything from the external cluster to the created cluster?
C: It depends on how people want to operate. So, like, in our case we want to have a central cluster that is managing a number of external... a number of managed clusters. But it could be that you are not an infrastructure-as-a-service company, right; maybe you just have your own little Kubernetes cluster, and you want the Cluster API objects to exist within the cluster itself. And so that's just a choice to make by default.
A: Yeah, I'm just... I was seeing, it looks like Jessica's not on the line; I'm pretty sure that she wrote up a doc that was shared publicly that sort of describes why we did this, and I was trying to find it quickly, but I can't. So if you can open up an issue, then we can link the doc there, and maybe circle back next week after you've had a chance to read it. I mean, I think I agree: an option not to pivot makes a lot of sense.
A: I think pivoting is clearly the right answer if you are using Minikube, because you don't want to have to keep your Minikube up and running and functional to make sure your other cluster stays alive; especially if you're on a laptop and you, you know, accidentally close your laptop, or leave Minikube, or go offline, and your cluster is going to stop functioning, right. If you are, as the other PR we talked about suggests, using an existing cluster... you know, especially if it's like a GKE-type cluster, where somebody else is making sure that it stays running for you, then I think it makes a lot more sense, where you say: I can count on that cluster staying up and running, and I can just leave the thing in there instead of having to pivot into the other cluster, yeah.
C: The only reason I could think of is: maybe once the cluster autoscaler is rewritten against the Cluster API, then you would want to have machine sets and machines, so the cluster autoscaler could interface, could autoscale, the same way whether the hosts are self-managed or hosted. But I couldn't think of any other ideas.
I: So I spoke with Bob at AWS about this, and they showed interest in having some sort of shim that would speak the Cluster API, sitting on top of the APIs as they exist today. I think it would be a powerful move for cloud providers with managed Kubernetes; it's just more of a political thing than anything at this point.
A: I think the other thing... there was a conversation we started last week about using CRDs, and I think Tim St. Clair had asked about sort of using this for existing clusters. So I could also imagine a case where you might have a GKE or EKS cluster, and you added this to the cluster yourself, to use it to manage machines.
E: So I don't know if it makes sense to have a, quote unquote... the term "provider" is a little overloaded; it's an implementation for a given provider, where EKS is one solution for AWS. I suck at naming, so don't ask me to think of a great name, but it's kind of an overloaded term: it's one of many choices for a given provider, where, like, even on GCP you could have your own custom GCP integration as well as a GKE-specific solution for a given provider.
K: We should make sure that the Cluster API cluster controller and machine controller are as independent, in a sense as decoupled, as possible from each other. They should not have such a tight coupling that you cannot run only the machine controller in GKE. And that was also one of the reasons why we did not want to put pointers from the machine controller to the cluster controller unless necessary. I just wanted to flag that we had [discussed] this; it would be buried down [in the notes] otherwise.
I: One thing I wanted to bring up in this space of cloud-specific controllers within existing Kubernetes distributions is: when we start to approach larger scale, we're gonna have to look at how we solve creating and destroying that many instances at one time. I know in Amazon you can get to a certain point where, if you try to do it manually through the EC2 API, you're gonna run into problems, and you pretty much are forced to use auto-scaling groups at that point. So, just food for thought.
I: Same region, multiple AZs. And it was more that, I think, at first the API... like, the rate limiting of the API kicked in and I just wasn't able to issue enough requests; and then, after I got over that limit, ultimately the machines just started taking longer and longer to come up.
K: Currently we run a Kubernetes cluster in AWS, and we use ASGs for the worker nodes, and we use the cluster autoscaler to keep the cluster elastic: it can grow and shrink as the workload on it demands, seamlessly, and it is able to interact with the ASG APIs out of the box. And that may be something folks may be interested in running to keep their cluster elastic.
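For reference, the out-of-the-box wiring being described is the cluster autoscaler's AWS provider, which is pointed at an ASG by name with min/max bounds. A minimal sketch; the ASG name and image tag are illustrative assumptions:

    # Sketch: cluster-autoscaler driving an AWS ASG directly.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: cluster-autoscaler
      namespace: kube-system
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: cluster-autoscaler
      template:
        metadata:
          labels:
            app: cluster-autoscaler
        spec:
          containers:
            - name: cluster-autoscaler
              image: k8s.gcr.io/cluster-autoscaler:v1.2.2   # illustrative tag
              command:
                - ./cluster-autoscaler
                - --cloud-provider=aws
                - --nodes=1:10:workers-asg   # min:max:ASG-name (illustrative)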
K: Just to add on that, right: I have dealt with the integration of the cluster autoscaler and the machine controller manager, and the cluster autoscaler community has already decided that they will eventually get rid of the cloud-provider-specific code. Which means, right now, what's happening is that the core logic is separate: that's where all the complicated stuff happens, when to scale up, when to scale down, and so on. But then there is an external... there is a slim driver layer where it basically calls the ASG APIs on AWS, and the equivalents on other clouds, and so on.
K: So those parts will eventually be removed from the cluster autoscaler, and the expectation is that the cluster autoscaler will just talk to the MachineDeployment or MachineSet and tell it what the worker count should be, right. So the benefit that we were getting till now, only for AWS so to speak, from the autoscaler will not be there anymore. So that's one point.
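Concretely, "talking to the MachineDeployment" would mean adjusting its replica count, the same way one scales an ordinary Deployment. A hedged sketch using the v1alpha1-era types; the field names, including providerConfig, follow the draft API of the time and should be read as illustrative:

    # Sketch: the autoscaler would only ever touch spec.replicas here.
    apiVersion: cluster.k8s.io/v1alpha1
    kind: MachineDeployment
    metadata:
      name: workers
    spec:
      replicas: 3                 # the field an autoscaler would raise or lower
      selector:
        matchLabels:
          node-pool: workers
      template:
        metadata:
          labels:
            node-pool: workers
        spec:
          versions:
            kubelet: 1.11.1       # illustrative
          providerConfig:
            value: {}             # provider-specific machine shape goes here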
K: And the second part is why, in the first place, we decided to go for the bare VMs and not the ASGs. One of the [reasons], clearly: we will write implementations for different providers in terms of the machine controller, right, and unfortunately not all cloud providers offer nice features like ASGs. From experience, ASG as such works great for the data plane, but if you look at the other providers, they have different kinds of features and different kinds of terminologies for the same auto-scaling-group feature, so that will eventually create a lot of confusion. And about the rate-limiting part:
K: So we have also actually faced the rate-limiting issue, and more or less it comes down to the point that, from what I have understood so far, it's not about ASGs versus bare EC2 instances; it's about the account subscription and how many VMs are created. And I think the AWS documentation itself says that, overall, it allows only a certain number of calls to the EC2 [API].
K: So, for example, if you are crossing 1,000 calls in your account to the EC2 instances within a certain segment of 20 seconds, then it obviously starts showing the error that you have been rate limited. That's what we have seen even for the bare EC2 instances, and I think the same thing goes for other providers as well: at some point they stop you from making calls to the service, rather than [capping you at] 100 VMs and stopping you from creating more instances.
F: So if we wanted to run a particular instance type, or a particular instance in a particular zone, because we need to attach a volume, for example, it is easier to express that [directly] than it would be to express it in terms of auto-scaling groups, where we'd probably have to do multiple auto-scaling groups, I think, or we'd have to launch the instance and attach it to the auto-scaling group, I guess; I don't know. But that's a potential reason to go one at a time.
F: That could be handled, though: if it's the case of zones, right, where we know we need a volume in a particular zone, I mean, that's easily overcome by having three auto-scaling groups, and I think there's also a trick to [launch an] instance and attach it to the auto-scaling group; I think that's allowed. But yeah, that sort of thing where, you know, we're starting to have to jump through hoops, and maybe if we don't need to jump through those hoops, that is a reason. It's just sort of, you know, a trade-off.
K: On the rate-limiting front, my observation on using the ASGs has been similar. Like, I have seen the cluster autoscaler periodically poll, like, the limits on the ASGs and how much it can scale up to; there's that loop running in the autoscaler, and by default that loop is super aggressive, and that aggressive loop will often result in, like, throttling from the AWS API. So that's... that has been my observation using ASGs.
A: That's what Chris was talking about, because Chris was saying: if you're calling the EC2 APIs to create the individual machines, you get throttled; and you're talking about calling the autoscaler API. I don't know if Amazon distinguishes between those two API services with different rate limits; I know at Google there are some more fine-grained rate limits applied across the API surface.
C: So I think they do distinguish; I mean, certainly there's one API call to make an auto-scaling group, whereas there's 100 API calls to make a hundred machines. But Justin reminds me of a really good reason not to use ASGs, so we should think about this more. The reason is: ASGs will attempt to balance nodes across zones, but they will not guarantee it. And so, if you're depending on always having three zones to survive an AZ failure, you cannot rely on ASGs to provide you that guarantee.
A: Right now the Cluster API would support both of those, right: in the MachineDeployment, or the MachineSet, you could say "I want a machine set in this region," and, you know, our controller, or an ASG, would then presumably try to do some best-effort spreading; and if you said "I want a machine set in this zone," then you'd pin everything to that zone, right. And I think that you would.
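A sketch of the zone-pinning case just described, using the v1alpha1-era types; the providerConfig field names are AWS-flavored assumptions for illustration, not the project's actual schema:

    # Sketch: pinning every machine in a MachineSet to one zone.
    apiVersion: cluster.k8s.io/v1alpha1
    kind: MachineSet
    metadata:
      name: workers-us-east-1a
    spec:
      replicas: 3
      selector:
        matchLabels:
          machineset: workers-us-east-1a
      template:
        metadata:
          labels:
            machineset: workers-us-east-1a
        spec:
          providerConfig:
            value:
              instanceType: m4.large        # illustrative shape
              availabilityZone: us-east-1a  # pins the whole set to this zone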
I: I'm wondering if, with ASGs... sorry, I'm losing my voice a little bit... but I wonder if we can get into a scenario where there's some sort of undesired behavior in EC2, the ASG is working on it behind the scenes, and the Cluster API controller also detects that something's wrong, like we're missing a machine, and it tries to create a new machine while the ASG is already fixing that, and we end up creating two machines instead of one.
A: Okay, so I guess the summary that I'm hearing is, you know, sort of revisiting the discussion we had before about ASGs versus managing individual VMs: there are some good points that have been brought up that I'm not sure were discussed before about why ASGs might be the better choice. There's some concern about fragmentation of the user experience, if we can use, you know, ASGs on Amazon but maybe not have an equivalent somewhere like DigitalOcean. And so it might be worth, like, exploring maybe an implementation of the API using ASGs, to see what the sort of drift in user experience is, and whether there's any sort of dissonance between the API and the underlying groups on the cloud provider: you know, the thing Chris just mentioned about who's in control of the different machines, etc., and make sure that that sort of maps over.
A: I think in some ways yes, and I think that, from the API contract point of view, the fields that we promote to the top-level API: we want to make sure that those do work across providers, and that you can have a similar experience where, if you say "I would like, you know, a machine with this version of kubelet, with these taints and labels on it," that that happens, right. And, like, the time it takes for that to happen might be different; the mechanism by which it happens might be different.
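A minimal sketch of that contract: the portable fields (kubelet version, taints, labels) ride at the top level, while provider-specific shape stays in the provider config. Types and field names follow the v1alpha1 draft of the time and should be read as illustrative:

    # Sketch: portable fields up top, cloud-specific shape below.
    apiVersion: cluster.k8s.io/v1alpha1
    kind: Machine
    metadata:
      name: worker-0
      labels:
        node-pool: workers          # label requested for the node
    spec:
      versions:
        kubelet: 1.11.1             # "this version of kubelet"
      taints:
        - key: dedicated            # taints the node should carry
          value: workers
          effect: NoSchedule
      providerConfig:
        value: {}                   # provider-specific size/shape differs per cloud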
A: The way you specify machine sizes and shapes and extra attributes might be different, but I think that the pieces we want to put at the top of the API are things we think we can make a consistent experience. So I think that's more where I'm worried about dissonance: if there are things that we believe should be in the API to make a good user experience, or that match the declarative style, and those don't translate down to MIGs or ASGs, that's where I would feel like there'd be a problem.
L: ...you know, share information, tips, hurdles among everybody that's working on specific provider implementations; you know, things that maybe don't make sense discussing in the meeting that we're having now. Anyway, the most popular slots, from what I can tell on the Doodle, appear to be, you know, these four, and a lot of people can attend either A or D, or B or C, you know, approximately. So I just wanted to ask, you know: do we do another poll, or can we just choose?
L: We've met last week, and yesterday, on Tuesday; we had a few people attending, and [I've] been recording the sessions. We've got meeting notes; I think I added a link to that in the top-level SIG Cluster Lifecycle doc, anyway. And then I guess the final question I had is: are we leaving anyone out from the Eastern Hemisphere? Because the slots are, you know, like, from the morning to the, like...
L: But yeah, I mean, I don't know; at some point I imagine there are gonna be conflicts between some Kubernetes meeting and whatever slot we choose. But in any case, yeah, like I said, we've been meeting at slot B, and, you know, as far as I'm concerned, I mean, we can have, I guess, you know, as many calls as people want to attend. I guess we want to balance, you know, not fragmenting it too much, right, because we sort of want to share the information as much as possible. So...
L: You know, picking one slot from these two sort of clusters, I mean.
I: Yeah, I think... I think two meetings are fine. I think, again, just making sure that we're diligent about notes and recordings, and keeping the information somewhat separated so that we're not convoluting it all together; but yeah, that sounds fine to me, if you can find people to attend the meetings. Okay.
A: One of the things I'll mention is I can add the slots to the community calendar and send out invites to people. I know I've missed the two that have happened, even though I meant to be there, because they weren't on my calendar. So I think that would be really helpful: as we actually settle and say "yes, we're gonna do this slot," we just put it on the calendar, so that people will remember to go, or remember the discussion. Okay.
I: Also, just in the name of clerical items: I gave Daniel [access] to Rob's account that we use for both... For now we would be using it for this meeting, the SIG Cluster Lifecycle and kubeadm office hours, and these office hours. So at some point there might be some conflicts there, and I don't know how we want to navigate that moving forward.
L: We share, you know, sort of our lessons learned and tips, hurdles, and sort of talk about the, you know, the design; talk about how we're using the common Cluster API parts. Maybe we talk about, you know, how we're designing our provider spec, statuses, etcetera, things like that. So, things that don't necessarily get discussed in this meeting, because they're not really common to, you know, the top-level Cluster API.
L: One last question I had is: I've been recording those office hours, but I've been recording them to the, quote unquote, Zoom cloud, and I see that the links that we have for the recordings in this meeting are YouTube. So maybe offline somebody can help me figure out what I'm supposed to do there, to make sure that we don't lose those recordings, yeah.
A: We talked about this briefly last week. I think at least my personal goal is alpha sometime within sort of the next Kubernetes release cycle. I've been trying to burn down the... there's a milestone list of GitHub issues; so if people are interested in helping get that alpha out the door, please take a look at those issues and help reduce them to zero.