From YouTube: Kubernetes SIG Cluster Lifecycle 20171025 - Cluster API
Description
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#heading=h.nb9mbe929fm9
Highlights:
- Discussed the proposed APIs for Machines & the control plane
- Jacob did a demo of the scaffolding he's built around the proposed Machines API
B: All right, sure. So last week we were talking about machines and declarative nodes, and I sent out a PR proposal of a super minimalist start for this: how to create, you know, declarative nodes with version information, and where do we put provider-specific things? Some people had a chance to look at it and give feedback, which is great. I've addressed most of the feedback, I believe, but there are some bigger questions worth nailing down before we get too far down that path.
B: So one of the things that has come up internally is starting with just a single machine. A single declarative machine allows us to later build up machine sets and machine deployments if we want to. But are there any environments that anyone cares to support, or can even fathom, where it does not ever make sense to refer to a single machine? There are a lot of other installers that don't really worry about individual machines and just say: we have sets, we have node sets.
B: We have node pools; we have just a scalable resource of homogeneous nodes that we never want to refer to on an individual basis. So does anyone see concerns with first starting with an individual machine object concept and then adding sets later? Or is there a reason to jump into sets: either to completely omit machines and only work at that level (you can still have sets of size one if you want), or to not move forward with v1alpha1 unless we have both? If anyone has feedback on that —
B: — that would kind of kill the current design. And then, if there aren't any, number two was: are there any concerns with the capabilities that I outlined for nodes in that proposal? The very limited set of capabilities for v1alpha1 — the kinds of things that we want to target for the end of this quarter, to be represented in the API and have working prototypes — or anything that you think should be stripped out?
B: Can I just chip in on, like, AWS? In my opinion, the machine set concept is more useful, but everything is still individually addressable and accessible, and certainly you will want to terminate an errant machine, or, you know, for whatever reason. So machines are good to start with.

B: Okay.
C: I would also just comment that even if an environment does have just machine sets, it's probably still useful to have machines individually addressable, at least status-wise, because that concept still has a lot of use. But I still am in favor of going down the machines path first, before we even start looking at sets, even if we do start reinventing a little bit of each cloud provider's wheel.
B: For the second question, let me actually just really quickly go through the use cases for the capabilities that I outlined in that doc. If there aren't any concerns, we'll move right along; otherwise I'll pause dramatically after each one and see if there's any feedback. Just pulling this up. Okay, so: creation of nodes, individual nodes, given some sort of template. We want to be able to create a new node in a declarative way, including the version of the kubelet to run, the container runtime to use and its version, and any other provider-
B: -specific information, like the OS image, the instance type, the disk configuration — all of that, even though that stuff will not necessarily be portable from one cluster to another. A specific node can be deleted, and the underlying provider controller should free the external resources associated with it. A specific node can have its kubelet version upgraded or downgraded in a declarative way.

B: Similarly, a specific node can have its container runtime changed, or its version upgraded or downgraded, in a declarative way, and a specific node can have its OS image upgraded or downgraded — and, I guess, any of the other provider-specific attributes: you could change the instance type, whatever. How all of that gets actuated is up to the provider-specific implementations. So a really nice provider might try to upgrade a kubelet in place and make that operation really fast, but there's nothing wrong with replacing the machine instead.
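For illustration, here is a minimal sketch in Go of what Machine types covering those capabilities might look like. The field and type names are assumptions drawn from this discussion, not the checked-in v1alpha1 definitions; the split keeps everything under versions portable while providerConfig stays opaque to the common API.

```go
// A minimal sketch of Machine types with the capabilities described
// above. Field and type names are illustrative assumptions, not the
// checked-in v1alpha1 definitions.
package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// ContainerRuntimeInfo names the runtime to use and its version.
type ContainerRuntimeInfo struct {
	Name    string `json:"name"`    // e.g. "docker"
	Version string `json:"version"` // e.g. "1.12.6"
}

// MachineVersionInfo holds the declaratively managed component versions.
type MachineVersionInfo struct {
	Kubelet          string               `json:"kubelet"` // e.g. "1.8.1"
	ContainerRuntime ContainerRuntimeInfo `json:"containerRuntime"`
}

// MachineSpec is the desired state of a single declarative machine.
type MachineSpec struct {
	// Roles this machine plays, e.g. "Node" or "Master".
	Roles []string `json:"roles,omitempty"`
	// Versions of the components to run on the machine.
	Versions MachineVersionInfo `json:"versions,omitempty"`
	// ProviderConfig is opaque to the common API: OS image, instance
	// type, disk configuration, and anything else that is not portable
	// from one cluster to another.
	ProviderConfig string `json:"providerConfig,omitempty"`
}

// MachineStatus is filled in by the provider-specific controller.
type MachineStatus struct {
	// NodeRef names the node object this machine backs, once registered.
	NodeRef string `json:"nodeRef,omitempty"`
}

// Machine is the top-level object; deleting it obliges the provider
// controller to free the external resources backing it.
type Machine struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`
	Spec              MachineSpec   `json:"spec,omitempty"`
	Status            MachineStatus `json:"status,omitempty"`
}
```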
A: A question about your thought that changes could be done in place or via replacement. If a provider can do some of these things in place, but some of them it can't, is there any way to indicate to the user: you're making this change, therefore it's going to be more disruptive than if you made this other change? Like, you're changing two fields in a declarative API, and that's going to have a very different impact, potentially, on the applications, right?
B: Yes. So that's one of the things that I expected to be controversial with this proposal: there's no feedback from the controller, and there's no operation you can do in the spec to say, I only want an in-place upgrade — no way to require in-place actuation.
B: You can force a replacement, because you can always delete the machine and create a new machine; that should always trigger a full replacement. But there's actually nothing to say: I want to update the spec, but I want you to error out if you're unable to do this as an in-place operation. And that explains the proposal a little bit, because I thought —
B: — if we had that functionality, and said updating the spec should always result in an in-place actuation, and if you ever want a full replacement you should delete and recreate the machine, then I think it's just going to create this pattern in every tool that's built on top of the cluster API: well, first you're going to attempt to upgrade in place, because that's going to be the faster operation, but I really want this changed anyway, so fall back to replacement if that fails.
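As a sketch of the client-side pattern being described — attempt the fast in-place path, then force replacement by delete-and-recreate — with all names hypothetical, since no such typed client existed at this point:

```go
package main

import (
	"errors"
	"time"
)

// Machine and machineClient are hypothetical stand-ins for the sketched
// types and a generated typed client.
type Machine struct {
	Name string
	Spec struct {
		Versions struct{ Kubelet string }
	}
}

type machineClient interface {
	Get(name string) (*Machine, error)
	Create(m *Machine) (*Machine, error)
	Update(m *Machine) (*Machine, error)
	Delete(name string) error
}

// waitForHealthy would poll the machine's status until it reports
// healthy or the timeout expires (implementation elided).
func waitForHealthy(c machineClient, name string, timeout time.Duration) error {
	return errors.New("not implemented")
}

// upgradeKubelet tries the fast in-place path first, then falls back to
// delete-and-recreate: the pattern described above.
func upgradeKubelet(c machineClient, name, version string, timeout time.Duration) error {
	m, err := c.Get(name)
	if err != nil {
		return err
	}
	m.Spec.Versions.Kubelet = version
	if _, err := c.Update(m); err == nil && waitForHealthy(c, name, timeout) == nil {
		return nil // in-place actuation succeeded
	}
	// Deleting and recreating the machine always forces a full replacement.
	if err := c.Delete(name); err != nil {
		return err
	}
	_, err = c.Create(m)
	return err
}
```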
A
Is
that
a
concern
right?
Do
we
want
machine
names
to
be
tied
to
the
fact
that
a
machine
is
something
that
has
a
boot
disk
and
might
have
things
on
that
boot
disk
and
might
have
local
storage
of
pods,
or
did
we
say
a
machine?
Is
you
know
more,
like
replica
set
of
size,
one
ish
where,
like
the
underlying
thing
that
implements
that
machine
can
be
replaced
on,
we
don't
care
about
that,
and
the
definition
of
the
abstract
machine
is
what
processes.
B: One other simplification we could make now, which I would be completely happy with: someone chimed in on the PR and said the way they had represented things was much more like a pod, in that it's completely immutable. You can't change any fields, and so if you want to change anything, you always delete and recreate. I think, ultimately, it would be hard for many providers and installers to get on board if —
B
It
was
just
completely
impossible
to
do
in
upgrades
like
that
seems
like
such
a
great
feature
to
be
able
to
have
an
express,
but
we
don't
have
to
have
that
support
now,
so
we
can
always
change
the
definition
in
the
future
to
say.
Okay,
now
you
can
update
the
Keyblade
version,
and
these
sorts
of
things
can
happen,
but
right
out
of
the
gate
for
v1
alpha
1,
we
could
just
say
these
are
completely
immutable
and
any
any
changes
have
to
be
a
deletion
recreate
just
to
get
us
moving.
I.
A: I mean, it is interesting, because there's kind of a movement for immutable infrastructure, right, which that aligns with, where, you know, it'd be idempotent, right? If you try to create machine X and you already have one of those: great, it just succeeds; you know it's there.
A: Exactly right. Or even, like, container runtimes in the future: now that newer versions of Docker can be swapped out without killing all the containers, you can imagine upgrading quite a few things in place on your nodes while just leaving all the workloads running without noticing. I think we want to strive towards that future.
C: I think what I would like to move towards is a solution where the provider can choose what's quicker: if an in-place upgrade is possible, that's fine; if it has to reinstall, then do that. But, yeah, I think we should advocate for a position where machines that have the same spec are pretty much fungible, so you don't have people SSHing in and just modifying the boot partition of a node.
B: Yeah, Justin brought up an interesting question about the relationship between the name of the Machine in the Kubernetes API and what the instance name will be in your actual provider. I had completely punted on that: every provider can just do exactly what they want, and I don't have to worry about it right now. But is that a reasonable assumption?
B
It
seems
like
if
you
seem
like
you
can
absolutely
do
that
and
if
you
can
do
in-place
upgrades
and
if
someone
tries
to
update
a
machine
spec
and
you
don't
support
in-place
upgrades
and
this
VM
is
already
in
use-
and
you
know
you
have
to
cycle
it,
you
could
just
update
the
status
to
say
sorry.
This
is
an
error
condition.
B
Yeah
I
think
we
should
let
the
name
be
user-controlled,
even
if
we
then
build
another
layer
on
top.
That
means
it
isn't.
In
fact,
these
are
controlled
and,
like
we
have
provider
ID
in
the
node
right,
which
now
you
need
to
identifies
the
cloud
provider
ID
like
some
sort
of
metadata.
That
is
a
system
owned
primary
key
and
then
yes,
it
would
be
up
to
the
cloud
provider
whether
they
enforce
some
restriction
on
the
name,
but
we
can
certainly
say
hey
by
the
way
like
look
at
it
of
us
like
we
can
change.
B
Instanceid,
see
probably
don't
want
to
put
the
instance
ID
in
the
name
as
a
requirement,
because
it
would
be
really
confusing
after
a
machine
replacement,
but
I,
certainly
like
the
idea
of
like
the
status
returning.
If
there's
something
is
cannot
be
updated
at
all,
the
status
can
reflect
that
and
the
sufficiently
smart
controller
can
act
on
it.
B
That's
all
ahead
on
that
the
P
are
still
open,
still
encouraging
more
feedback.
The
types
are
checked
in
so
that
we
can
start
prototyping
and
we
have
started
prototyping
against
them
to
get
even
more
feedback
on
the
specific
types
but
provide
feedback
as
you
never
get
a
chance
and
you've
gone
through
Chris
I
guess
is
next,
so
I,
just
yeah
I
thought
upgrades
of
like
packages
on
the
OS.
Where
does
that
fit
in
the
spectral
redundancy
yeah?
B
This
is
something
that
I
had
gone
down
the
rabbit
hole
on
and
like
a
we
could
have
very
cloud
in
it
style,
you
conversion
everything,
conversion,
your
kernel,
you
conversion,
every
package
that
you
care
about,
and-
and
maybe
we
want
that
super
long
term-
it's
something
that
I
I
thoroughly
punted
on
right
now
for
view
on
alpha
one.
You
can
still
do
it
in
the
provider
config,
which
is
the
cheap
way
of
doing
everything
but
yeah.
B
So
if
you
wanted
to,
if
you
want
to
change
any
of
that,
the
way
that
I've
been
thinking
about
is
just
make
a
new
OS
image
and
rub
the
OS
image
and
not
really
support
in-place
upgrades,
at
least
using
this
API.
You
can
still
use
some
other
mechanism
for
now,
the
long
term
like
we've
I've,
structured
it
so
the
node.
So
the
machine
has
a
versions.
Struct
currently
has
the
cupids.
B
In
the
container
on
time
we
could
just
have
a
generic
math
string
string
or
any
other
packages,
install
them
and
have
OS
specific
node
agents
that
are
running
and
actually
watch
for
changes
and
know.
How
did
you
have
to
get
upgrade
or
a
relative
to
install,
upgrade
or
downgrade
your
packages
because
there's,
like
you,
have
am
eyes
or
cloud
images,
but
you
don't
really
know
what
like
the
timestamp
of
the
Debian
package,
whatever
it's
called
like
that
release
that
really
sort
of
refers
to.
Maybe
we
could
do
that,
but
yeah.
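A hypothetical sketch of that extension, building on the earlier types sketch; nothing like this is in the proposal, and for v1alpha1 it would live in the provider config:

```go
// Hypothetical extension of the earlier versions sketch: a generic
// package map that OS-specific node agents could watch and reconcile.
// v1alpha1 punts on this; today it would live in the provider config.
type MachineVersionInfo struct {
	Kubelet          string               `json:"kubelet"`
	ContainerRuntime ContainerRuntimeInfo `json:"containerRuntime"`
	// Packages maps a package name to its desired version, e.g.
	// {"linux-image-amd64": "4.9.30-2"}.
	Packages map[string]string `json:"packages,omitempty"`
}
```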
D: Can you see me? Yes? Yeah. So something which is not clear to me, from the user's perspective: let's say I have three nodes running, and I want to update something specific on one particular node, where my pods are also running at this moment. I get the point that it is provider-specific whether the upgrade of that node will happen in place or will replace the node, basically.
B
So,
right
now,
based
on
the
way
the
API
is
written,
a
deletion
of
a
node
or
change
in
changing
a
node
is
strictly
immediate
and
if
the
clients,
if
they
find
site
ruling
that
is
doing
that,
replacements,
is
updating
the
spec
or
deleting
machine.
Creating
machine
actually
cares
to
be
workload
aware
or
be
safe,
which
I
think
is
probably
one
of
the
value
as
we
want
to
add
in
the
tooling
that's
built.
B
On
top
of
this,
it's
its
responsibility
to
cordon
the
node
and
drain
the
workloads
and
before
it
actually
changes
those
things
and
can
pen
can
be
as
sophisticated
as
it
wants
to
be.
I,
definitely
think
post
v1,
alpha
1
that
server-side
draining
it's
somewhere
in
here
that
that
we
should
have
a
safe
operation
to
say,
I
want
to
update
this
FAQ
and
by
the
way,
please
safely
drain
the
node
so
that
none
of
my
workloads
are
actually
affected
by
this
operation,
and
but
that
was
a
simplification
I
had
made
for
v1
for
one
okay,.
A
Should
the
controller
should
be
training
according
and
draining
the
node
before,
deleting
it
and
and
maybe
an
annotation,
you
can
say
like
force,
fast
deletion
or
something,
and
we
can
skip
that
step,
but
I
think
you
know
the
behavior
that
most
people
want
and/or
expect
is
for
that
that
to
happen,
like
you
said,
we
should
move
it
to
the
server
side,
but
the
server
side
can
be
the
controller.
We
have
that's
watching.
The
machines.
Api
doesn't
have
to
be
the
kubernetes
api
server
where
we
put
the
drain
functionality.
B: Well, there is a field in the node spec for unschedulable. I don't know if there's an equivalent taint or annotation or something to do the same, and I don't know which one's preferred over the other, but, yeah, that's something we could just mirror. Similar to the dynamic kubelet config, it's something we can mirror in the machine and just constantly copy over to the node spec, if there's an associated node, and just let the rest of the existing code take care of it the way it already does.
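A minimal sketch of that mirroring in Go. Node.Spec.Unschedulable is a real field in the core API; sourcing the flag from the machine object is the assumed part:

```go
package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// syncUnschedulable copies a desired unschedulable flag onto the
// associated node's spec and lets the existing scheduler code take over.
// Signatures match client-go of this era (no context argument).
func syncUnschedulable(kc kubernetes.Interface, nodeName string, unschedulable bool) error {
	node, err := kc.CoreV1().Nodes().Get(nodeName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	if node.Spec.Unschedulable == unschedulable {
		return nil // already in sync
	}
	node.Spec.Unschedulable = unschedulable
	_, err = kc.CoreV1().Nodes().Update(node)
	return err
}
```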
C: Yeah, just a really quick update. I want to point out to everyone here that it is out for review. I still have a document to type up along with it, but the spec is there. There have already been some comments, but I just want to make sure people are aware that it exists and are telling me how to correct it. And I went ahead and checked in a version of this to a temporary location, so people can start playing around with the types and getting used to them.
C: In general — I mean, not specific to mine, but to both of ours — I've seen concern that the provider config stuff is just going to be a dumping ground for everything, since we can't get anything into the main API. But I just want to point out that any prevalent patterns that start appearing there are good candidates to be promoted. We just want to take a simplistic approach to the main API before we start overcomplicating it, and we are, in the future, more than willing to consider promoting things like that.
A: There's some background noise, so I just wanted to give a quick PSA: I went to the SIG Cluster Ops meeting last week and volunteered to give a short presentation/demo about the cluster API and the work that we're doing as part of this breakout group, to that community. The primary goal for that is to get feedback from the ops community about the different operational patterns that they would like to see the cluster API support.
A: You know, if you look at the context for the cluster API that we reviewed last week, we're really focusing the API around common ops scenarios. An upgrade is obviously the most common one, but we believe that there are other operational scenarios for people that are managing clusters, once they get their cluster up, and —
A: — we want to create a consistent experience across environments. And so, you know, hopefully some operators will show up and talk about the ways that they currently are managing their clusters and the different sorts of things they need to do, and we can think about feeding those in as requirements into this effort. So that's next week, on November 2nd, at 1 p.m.
B: Can everyone see that? Yep? Okay, good. All right. So, I don't know — this wasn't really prepared for this meeting; it was prepared for an internal sync with other people who are prototyping these things. But one of the things I did recently: I mentioned that I checked in all the types for the machines. I've actually now structured them a little better, as machines/v1alpha1, figured out how to generate all the deep-copy methods so I can use them with the API machinery, and created a CRD for them independently.
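A rough sketch of what such a CRD installer might look like with the apiextensions client of this era; the group and resource names are guesses based on the demo:

```go
package main

import (
	apiextv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

// createMachineCRD registers the Machine custom resource from a
// kubeconfig, roughly what the demo's installer does.
func createMachineCRD(kubeconfig string) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	cs, err := apiextclient.NewForConfig(cfg)
	if err != nil {
		return err
	}
	crd := &apiextv1beta1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "machines.cluster-api.k8s.io"},
		Spec: apiextv1beta1.CustomResourceDefinitionSpec{
			Group:   "cluster-api.k8s.io",
			Version: "v1alpha1",
			Scope:   apiextv1beta1.NamespaceScoped,
			Names: apiextv1beta1.CustomResourceDefinitionNames{
				Plural:   "machines",
				Singular: "machine",
				Kind:     "Machine",
				ListKind: "MachineList",
			},
		},
	}
	// Registers "machines" as a top-level plural, so `kubectl get
	// machines` works once the CRD is established.
	_, err = cs.ApiextensionsV1beta1().CustomResourceDefinitions().Create(crd)
	return err
}
```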
B: I have examples — well, actually, code that you can call to create the CRD, and then a sample controller, just to get a start on how you would begin actuating against the Machines API. I haven't done any of this for the control plane level, because we just got the types for that. So: actual demo over here.
B: Cool. So I can now look at an example of this machine CRD installer; I've already compiled it. This just takes a kubeconfig file — you know, the endpoint of the API server and how to authenticate against it. I have that set up in my environment, and the way this and the controller are written right now, they just take a kubeconfig file. It could be running outside of the cluster; it could be run in the cluster, on the master — all of the processes can basically run anywhere.
B: It doesn't assume anything about your environment. So now I've actually created the CRD, and I'll show you. Now, we can bikeshed about what this should be called specifically, but I called it machines.cluster-api.k8s.io. At this point, I can actually say kubectl get machines, which is pretty cool. There are no machines yet, but it's actually registered as a top-level plural. And if I want to create an example machine — actually, I'll do two things in parallel.
B: Let me just start up — I have this very simple machine controller, just as an example of how to get started. There's a lot of ugly client code and client-go boilerplate, but at the controller level I just have callbacks for what happens when a specific machine is added or updated or deleted, and right now I'm just printing it. The callbacks receive opaque objects, but you can cast them to the proper machine type, so you can interact with them in type-safe ways.
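A minimal sketch of such a controller's event handlers, assuming the Machine type from the earlier sketch; the informer wiring (REST client, ListWatch) is elided:

```go
package main

import (
	"fmt"

	"k8s.io/client-go/tools/cache"
)

// machineEventHandlers mirrors the demo controller: informer callbacks
// receive opaque interface{} values and cast them back to *Machine (the
// earlier sketch) for type-safe handling.
func machineEventHandlers() cache.ResourceEventHandlerFuncs {
	return cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			m := obj.(*Machine)
			fmt.Printf("machine created: %s\n", m.Name)
		},
		UpdateFunc: func(oldObj, newObj interface{}) {
			oldM, newM := oldObj.(*Machine), newObj.(*Machine)
			// Pull out spec.versions.kubelet to show old vs. new: the
			// hook where an in-place upgrade would be actuated.
			fmt.Printf("machine %s: kubelet %s -> %s\n", newM.Name,
				oldM.Spec.Versions.Kubelet, newM.Spec.Versions.Kubelet)
		},
		DeleteFunc: func(obj interface{}) {
			m := obj.(*Machine)
			fmt.Printf("machine deleted: %s\n", m.Name)
		},
	}
}
```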
B: So every time a machine is updated or created or deleted, I just print its name, but on update I specifically pull out spec.versions.kubelet to show old and new, just to show how you might actually do this in place if you're not going to completely replace the VM. So with that code, I can just run the machines controller with the same kubeconfig file, and while that's running, I have this super dumb example of what a GCE machine might look like.
B: It just has a regular name, and I've completely punted on the provider config — all of the actually interesting bits for this particular cloud — but gave it some semi-random values for the kubelet and the container runtime, and gave it a role saying it should be a node. If I create that, you can see my controller actually gets triggered: it gets the message that the object was created. At this point —

B: — it should actually check to see if there's a VM that already matches this; if not, spin up a new VM, and then watch for new nodes being registered that it should tie together with that machine, and point the machine's status at the node. Right now it's doing nothing, but since it's created, I can actually see my machine listed. I can get that specific machine as YAML and see that all the metadata has been filled in, and I can edit it.
B: This is how we've been imagining the tooling being written on top of the cluster API, because in a completely cluster- and cloud-agnostic way, you can now declaratively say: for this node, I want the kubelet version to be 1.8. My controller gets triggered — the object was updated, it has the old kubelet version and the new one — and it could trigger an in-place upgrade or swap out the VM, whatever.
B: So if the tooling that I'm writing on top of the cluster API, watching the status of this machine, sees that there's an error — it goes into an error state and after some timeout it hasn't recovered — then it can just roll back the spec, and the controller will notice this and actuate in the reverse direction. I can delete that machine, and the deletion code should get triggered, which will actually deprovision —
B: — the VM, which would unregister the node and potentially do all of the safe eviction logic, so that it doesn't just immediately rip it out of existence. The other thing that you can do: if you want to create a bunch of machines, even though we don't have a set concept yet, you can use the API machinery generateName to give it a prefix instead, and now I can create a whole bunch of these, and each one gets a unique name.
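A hypothetical helper showing the generateName approach, assuming the Machine type from the earlier sketch (embedded ObjectMeta, generated DeepCopy) and an illustrative prefix:

```go
// createFromTemplate stamps out several machines from one template using
// apimachinery's generateName; the API server appends a random suffix to
// the prefix to produce a unique name.
func createFromTemplate(create func(*Machine) (*Machine, error), template *Machine, count int) error {
	for i := 0; i < count; i++ {
		m := template.DeepCopy()
		m.Name = ""                  // let the server choose the name
		m.GenerateName = "gce-node-" // e.g. yields gce-node-x7k2q
		if _, err := create(m); err != nil {
			return err
		}
	}
	return nil
}
```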
B: Another one, another one — and get all machines. And then, even though they all started with the same template spec, they're still independently addressable. If I'm writing a cluster upgrader, I might say: well, my policy for rolling out a Kubernetes node upgrade is that I'll start with exactly one machine, wait for it to be successful, and then I'll roll out to two machines and five machines — whatever would be the right balance between speed and safety.
B: And deleting the CRD actually triggers the deletion of every object that was associated with it. So, nothing impressive at this point, but it helps us talk about what's above and below the cluster API: the things that are actually doing the actuation, the provider-specific controllers, and what their responsibilities are versus the tooling above.
B: So I should say: all of that code is in my personal repository for now. I had to jump through a lot of vendoring hoops to get all the dependencies I needed, and it's not properly modeled using dep, which is what we're using for all of our other dependencies. So I'm going to fix that and then push it upstream so that people can use it. But if anyone wants a link, I'll just put it in the chat window, and you can start looking at the code. That's it for the demo.