From YouTube: 20190625 cluster api code walkthrough
A
Hello and welcome to the Tuesday, June 25th session. This is a special Cluster API session, and we're going to do a code walkthrough of Cluster API and some providers, and kind of show the development experience. This is mostly a free-form session, so we can be dynamic and adapt to what we want to explore.
A
How does that sound? I linked the agenda into the chat — feel free to add yourself to the timing list. If we're all ready, I'll start sharing my screen. Hopefully everybody can see this; someone confirm.
A
Some new things that have happened in the past few days are, for example, that we announced before — we moved to Go modules, and we have updated the dependencies, the two main dependencies: the controller-runtime and controller-tools versions. The reason we did so was mostly because we want to support multi-version and conversion webhooks, so this is kind of required for v1alpha2 to work, which I'm happy to go into in more detail later.
A
If folks are interested, I'll talk about these two later and how they tie into Cluster API. So, for repo structure, the first thing that you will see is a README. This is pretty generic, and Cluster API, as of right now, is not consumable.
A
So if a user comes to Cluster API and wants to create a cluster, you actually can't create a cluster, and that's kind of a little bit backwards. When I first joined the project I had the same experience: how do I actually create, say, a GCP cluster? To do so, you actually have to go into a provider implementation and build your tools and everything that you need to create your cluster there, and the experience is kind of different for each provider.
A
So what I'm talking about right now is mostly v1alpha1, and for a lot of these problems, we already have a scope and objectives in the docs section that goes through what we want to do with this project. So if you're not familiar with it, please take a look at it.
A
This markdown document is under docs, and it kind of explains what we want to do with the project. And definitely, 100%, familiarize yourself with the glossary — this is very important — so that when we speak to each other, in issues or during community meetings, we know what we're talking about. One definition there that I always like to point out is the workload cluster definition.
A
This is the cluster that we're trying to create — the cluster that the user is trying to create — and you'll also hear about a bootstrap cluster, or pivot, which we'll go into soon. So, directory structure: at the top level we have some GitHub issue templates, which we can skip; the build directories are for some Bazel build files.
A
The meat of it is really in the cmd (command) directory and the pkg (package) directory, which is very important from a newcomer's perspective. You have these three folders. What is that example provider? If you're a developer and you're trying to create your own provider, this is a good starting point. It's not complete, because you will need a lot of other things, but you can see it's pretty simple.
A
It says: I want to initialize some things and register — and, to be honest, this is mostly boilerplate code at this point. But there's a lot in here that would make you ask: what is this cluster actuator or machine actuator? So, historically, Cluster API was all done in one repository, and then we split it into multiple repos; right now, for example, the AWS provider is in one repo, GCP, vSphere, etc.
A
But we honestly don't like this, and a lot of people have said they want to move away from the actuator. The good news is that for v1alpha2 we agreed to get rid of the machine actuator, and I believe someone is working on how we remove the cluster actuator as well, which brings a lot more problems that I'm happy to talk about. There are two things here: the manager and clusterctl.
A
Let's look at the manager first — it's kind of simpler to go into the manager. So this is where Cluster API interacts with controller-runtime. controller-runtime is an upstream project for creating controllers, and controller-tools is another project that helps with the creation of these controllers — for example, for code generation, type generation, CRDs, etc. So there's a lot going on between controller-runtime and controller-tools. In this case, this is the main.go for the manager — the Cluster API manager.
A
This loop will run in the pod when you run Cluster API. I don't know how many of you have run, for example, the AWS provider or any other provider, but alongside an infrastructure provider you also have to run the Cluster API pod, and the reason for that is that the Cluster API pod is generic: it will watch, specifically, MachineSets and MachineDeployments.
A
That's because those are generic operations that Cluster API can do for you. Going forward, we want to move more logic upstream, so what will be running from this file will be more controllers — for example, once we remove the machine actuator from providers.
A
For example, in v1alpha2 we will register another controller in here. Right now, for example, this function is actually only registering two controllers — or three, but one of them is deprecated — which are MachineDeployment and MachineSet, and these manage the generic operations.
A
So you can think that when you are creating an infrastructure provider, a lot of these operations can be generic — but there's a balancing act here, right? We need to find good abstractions and then create those abstractions in Cluster API, so that all the controllers will be able to use them. I'll show you an example later with v1alpha2 and how that improves the development experience a little bit, and the user experience as well.
A
Now, the other most debated thing in Cluster API is clusterctl. How many people here have used clusterctl at least once? The funny thing is, when you build clusterctl from Cluster API, it actually doesn't do anything — it cannot do anything — because there are a lot of things that have been deprecated and will be removed in v1alpha2, and those things, for example, are in this folder for the providers.
A
So it's very specific and tied to how you manage a server: how do you get the IP of the API server machine or cluster, or how do you generate the kubeconfig? This is left to the providers, and this is one of the major things that we kind of fight over all the time, because it should be generic, but it isn't, because we don't have enough abstraction to make it generic.
A
Yet in here there are a bunch of other things that are important. One of them that I'll go into is definitely the phases. This is very important to understand what clusterctl is actually doing in each operation. During the past year — early this year — the team worked to bring clusterctl into phases, so that you can actually —
A
Then clusterctl will perform the pivot — which you can find here as a pivot phase — and what that does is move all the resources from the bootstrap cluster to the workload cluster, so now the workload cluster is actually managing itself. One of the things that we have talked about in this project is the concept of a management cluster.
A
So that's a cluster you already have and that you want to use to manage other clusters. clusterctl actually falls short there — it doesn't support that as a first-class concept — but creating clusters from a management cluster is actually pretty easy, so you might not actually need clusterctl. And maybe in the future — Tim St. Clair is actually working on this — we will definitely revisit what clusterctl is going to be. Tim, did you want to add anything to that?
B
There's an umbrella issue where, if you have thoughts or feelings — I have strong feelings about clusterctl, really because they're deep-rooted feelings about this design philosophy of what it should be doing: what, where, when, why, and how. There's an umbrella issue that kind of enumerates all the things that clusterctl does, from a high enough level.
B
It kind of tries to propose ideas around how we could potentially nix clusterctl itself, and I'd love feedback, because there are a lot of things where, as a command-line utility, this layer violates some of the core design principles. So if you have thoughts there, please add them to that issue.
C
A
I need to unpack this a little bit. So yes, clusterctl assumes a pivot. I'm not sure about the second part of the question. Yeah.
B
Cluster API itself does not. But one of the components I mentioned for getting rid of clusterctl: on the cluster spec we should potentially have an option — something like "pivot to workload cluster", or some parameter that lets you redirect the deployment of the cluster so that the objects end up on the actual deployment cluster. That currently does not exist; it's baked into clusterctl.
B
D
I get it — I had gotten the sense — can I speak? Yep, okay. Yeah, I had gotten the sense — just a sense, and you can correct me if I'm wrong — that the Cluster API community was leaning more towards the management-cluster approach, as opposed to the pivot approach. That's why, when I heard you at first, I parsed it as: oh, it requires a pivot. And then that's where I was like: oh, is it clusterctl that requires the pivot? Yes.
A
B
The TL;DR is that there are many deployment scenarios in the community. What we do is take feedback from the wild: most of the things in Cluster Lifecycle come from people telling us what they're doing, and we try to iterate around the common patterns to distill the common use cases — but there's no way you can encompass all of them. People use tools in fascinating ways. So —
D
A
Thank you. The pivot scenario is very helpful in the development scenario, when you want to quickly create a cluster and you don't have a cluster running, or to just go from zero to Kubernetes in a much faster way. So yeah, this is definitely up for discussion — how clusterctl is going to shape up in the long term. In this folder, the phases are definitely where a lot of the code happens, and I really want to point out the provider package.
A
This package is the one that's actually making clusterctl work across providers. So that's something where — definitely, it's a shortcoming — we need to work towards maybe removing or redesigning this package so it's more generic. There are things in v1alpha2 that will allow that; honestly, I haven't thought all of them through yet. But yeah, moving on.
A
Let's see — I actually want to switch, just before moving to pkg. This is also in clusterctl, and it's something that I played with a while ago: it does the same exact thing that clusterctl does, using only one of the alpha phases in clusterctl, but it uses a management cluster.
A
So, as you can see, clusterctl could be replaced by either good documentation or a bunch of Makefile scripts, as in this case. Some of it is still required — like if you want to apply add-ons — but this really isn't required; it's just a nice way to have it all available. I just wanted to point out that a management cluster is not that difficult to support, and this actually uses a kind cluster as the management cluster.
A
All right, I'll switch back to Cluster API. Let's dig a little deeper into the pkg folder. This is where most of the things are happening, though not all of them. pkg/apis is where we define our own API types. As you can see here, this is all generated code from kubebuilder, and if you're not familiar with kubebuilder, I'll do a little example later of how to create a new type in Cluster API.
A
It's going to be pretty simple and straightforward. Here we have the v1alpha1 types, so we're in pkg/apis/cluster/v1alpha1. We have some common stuff here, but let's ignore that for now — I think it's just some helper functions. This is how, usually, in Kubernetes land, a controller can define its own types and CRDs, and also how we generate those CRDs.
A
So let's look at the cluster types. There are a bunch of types here: Cluster, Machine, MachineClass, MachineDeployment, MachineSet. These are all the types that go under cluster.k8s.io, which is the API group that we're using right now. This is the Cluster type. As you see, we use kubebuilder to generate code — these are our kubebuilder markers, and controller-tools does some generation off of them.
B
A
So you will see a lot of these kinds of comments. They're kind of scary at first, because what's actually going on here? But truthfully, this is all that's needed to generate your files. These files will be generated for you if you're creating a new type, and you'll actually be able to add your own new things.
A
If you want, with these tags — and I'll show you later how controller-tools comes into play when we want to generate the actual CRDs that we apply to the cluster. The Cluster here — I'm not sure how familiar all of you are with the Cluster API types — the Cluster is kind of a definition of what the cluster looks like: it's the name of the cluster, and it will have some network ranges for services and pods.
A
This was actually made optional not long ago, so it's not needed, because it's left to the providers to decide which network ranges to use — this is just a convenient place to have those CIDR blocks specified. One thing that I wanted to mention: in Kubernetes, you can have CRDs with subresources, and one of them can be status. I think only status is supported right now as a subresource — status is the special subresource. The other one is scale. Oh, scale —
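The marker comments the speaker describes can be sketched as follows. controller-tools reads these `+kubebuilder:` comments to generate CRD manifests (including the status subresource); the Go compiler ignores them. The `TestCluster` type below is an illustrative stand-in, not a real Cluster API type, and the exact marker names may differ between controller-tools versions.

```go
package main

import "fmt"

// +kubebuilder:object:root=true
// +kubebuilder:subresource:status
// TestCluster sketches how a CRD type is annotated: the markers above
// tell controller-tools to emit a CRD with a status subresource.
type TestCluster struct {
	Spec   TestClusterSpec   `json:"spec,omitempty"`
	Status TestClusterStatus `json:"status,omitempty"`
}

type TestClusterSpec struct {
	// +kubebuilder:validation:MinLength=1
	Name string `json:"name"`
}

// TestClusterStatus is controller-owned state; users never set it, and it
// should always be recreatable from the spec.
type TestClusterStatus struct {
	Ready bool `json:"ready,omitempty"`
}

func main() {
	c := TestCluster{Spec: TestClusterSpec{Name: "demo"}}
	fmt.Println(c.Spec.Name) // demo
}
```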
A
Yes, scale is actually used in MachineSet, I believe — thank you. Status is a kind of special subresource, and the reason for that is that a user should never set status: from kubectl, you would never see a user setting the status of a resource. Controllers should use status to store some information, but you should always be able to recreate the status from your spec.
A
So if you lose status — for example, if you back up your cluster and then restore the backup — your status is going to go away, so you don't want to store things in status that are not recreatable. A controller will actually use your spec for, for example, filling in defaults — the Cluster API AWS provider, for example.
A
The Machine type is actually the type that's going to change for v1alpha2, and the proposal has been merged — all proposals are under docs/proposals, so you will find it there, and there's a lot of detail there on what's going to change in the machine spec. All of the things that you're seeing today are for v1alpha1. And you might ask: why do you want to change the Machine type?
A
I will show you what it looks like. There are a lot of fields in here, but a lot of them are read-only, and the new controller-runtime and controller-tools actually generate validation for these. We were talking upstream, and we kind of found out that we shouldn't actually be using this particular type as an embedded type in other objects, because a lot of these things are either here for historical reasons or are set from the API server's perspective. So we don't.
A
Let's look at this. As I mentioned, infrastructure providers are separated from Cluster API, and the way we decided to let providers add information — to the providerSpec, on the machines, back in Cluster API — is because we want to reuse the same Machine type, but we want to give providers the ability to store their own information on a machine. For example, for AWS you will —
A
You will find things like the subnet ID or the security groups — things very specific to the actual provider implementation. And truthfully, we went for runtime.RawExtension. If you look at RawExtension, it's actually just an array of bytes. So one thing a lot of people ask is: what happens with validation?
A
Well, the truth is, there's no validation. So say I misspell something in the provider spec — you can see here, it would be something like this, and there would be a couple of other fields. If I write subnet and then, for some reason, add an X at the end — that's not the actual subnet ID field.
A
There would be nothing telling the user right now that this field is actually non-existent. Or say the subnet ID is correct, but it's a string and I put a number — nothing will tell the user that this is actually wrong. So it's a really bad experience, and there's a lot of trial and error.
A
So — this was just an example — this is what's wrong with providerSpec. The providerSpec goes alongside the cluster actuator, which I'll show you soon, and how it plays into all of this. Because we have the infrastructure provider configuration embedded in providerSpec as bytes, the actuator generalizes some of the logic, but it can't generalize all of it. So what happens?
A
This was another thing that was very confusing at first, when I joined the project. Here you can see there's a machine version info: there's a kubelet version and there's a control plane version. I had someone ask me: would it ever be the case that the control plane version is different from the kubelet version? Can I specify just one — and if I specify the control plane and omit the kubelet, will it work? The answer is yes, but a lot of this is actually deprecated.
A
So there are a lot of functions — kind of utility functions — to extract control plane machines, for example; you can see them here in the util package, and this function, for example, just does this. So again, there's no validation on the control plane: why would I need to specify a version on it? I could just specify "yes, it's a control plane", and nothing will actually tell me that it's not working. To overcome this limitation, for example, in v1alpha2 —
A
We don't have the concept of specifying the version for a control plane; instead, the whole version became a single string, and it's optional. Providers might use this information to create their own version handling — they could use this version to, for example, pick images, or make different decisions for a different Kubernetes version. And we generalized whether a machine is going to be a control plane by just setting a label on it.
A
We already do this for the cluster: the relationship between Machine and Cluster is not explicit. As you can see, there is nothing here where I specify a link to a cluster — we just use labels. I think it's here: with these labels, you just have to set this label to say "this machine is part of this cluster", and that's actually how the machine links to the cluster in some parts of the code.
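The label-based link just described can be sketched as a simple map lookup. The label key is assumed here to be `cluster.k8s.io/cluster-name`; check the constant in the version of the codebase you are reading, as it may differ.

```go
package main

import "fmt"

// clusterLabelName is the (assumed) label key Cluster API uses to tie a
// Machine to its Cluster.
const clusterLabelName = "cluster.k8s.io/cluster-name"

// clusterNameFor mirrors the lookup described in the talk: read the label
// and return "" when it is absent, because the Cluster association is
// optional — some implementations use Machines without a Cluster object.
func clusterNameFor(machineLabels map[string]string) string {
	return machineLabels[clusterLabelName]
}

func main() {
	labels := map[string]string{clusterLabelName: "my-cluster"}
	fmt.Println(clusterNameFor(labels)) // my-cluster
	fmt.Println(clusterNameFor(nil) == "") // true: no label, no cluster, no error
}
```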
C
A
My name is Beach, and there has been a big question ever since I started looking into this project, which is around the RawExtension stuff. I know you mentioned it, but I want to go back to it, if possible. So you mentioned that there is no validation — I would assume so, at least if you're running Cluster API as it is here in this repo, as the thing that is validating the input — but my understanding from some provider code is that they are registering types and doing their own validations. Is that assessment incorrect?
A
No, that's actually correct — the validation happens at runtime. What I meant is: in my terminal, say I do kubectl apply with a cluster, for example — a simplified version — when I hit enter, this goes and creates my cluster, or a machine. This providerSpec won't be validated as part of that call, so from a user perspective I'm not receiving any signal that my machine or cluster wasn't actually matching the spec.
B
The typical vernacular that people use is admission: there's usually type validation and admission control. If you're used to Kubernetes, all the resource objects actually do type validation on admission. So if you incorrectly format something — if you put garbage data inside your type information — it would be rejected at admission. You can't do that currently with this structure.
A
Even if you wrap Cluster API in a Cluster API implementation — but the way I — could you repeat that? I mean, from my experience with implementing the operator pattern — basically extending the API and then implementing controllers around it — it's entirely possible; there's nothing blocking you, from the point of view of webhooks, from doing any sort of validation. So my question here is: as an implementation of Cluster API, can I not enforce that, if I deploy my own webhook?
B
A
E
I also have a question — my name is Shen — and the question is: if you look at this page, the taints object itself is outside the provider spec, but then, for example, if I want to add a node label, I would have to modify things inside the provider spec. So it's not clear to me how the structure was originally designed, because I would imagine things like taints and labels are both Kubernetes properties that should go in pretty much the same place.
C
The taints field that's on the screen right now is not anything that is guaranteed to be supported and implemented across every single provider that exists today. They may honor it and do something with it, or they may not. The approach that we are looking to move towards is hopefully removing taints for now from v1alpha2, and instead looking to the bootstrap configuration to handle initial node labels and initial node taints.
C
So, rather than sticking it in the machine spec — where maybe it gets handled, maybe it doesn't — if you happen to be using the kubeadm-based bootstrapper that we're going to be developing: kubeadm has native, direct support for setting taints and labels when the node is registered, and that'll be the way, in v1alpha2, to get those set.
A
I think maybe this is a good one for an issue — whoever wants to open issues — because I think this is, personally, one of the problems that we see in Cluster API: we have these fields, and they're optional, and they may or may not be satisfied by a given infrastructure provider. What that tells me — and I want to hear from you all as well — is that the experience is not the same.
A
It's not going to be the same: if I want to run a GCP cluster or an AWS cluster, it's going to be different, so my expectations have to change from infrastructure provider to infrastructure provider. And I think one of the goals, as others have mentioned, is to make these generic enough and actually make them work everywhere, not just with one provider.
F
OK, a comment on that: I think — I mean, I agree with that goal, but it also may be the case that providers want to be opinionated in certain ways, right? To limit the different options that an end user can use. And I don't think we want to prevent providers from being able to impose certain opinions. But yeah, I think it's something where maybe we can figure out how to satisfy both goals.
F
A
Cool. So we've covered the base types and, as I mentioned, they're in pkg/apis/cluster/v1alpha1, and there's going to be another folder here soon, called v1alpha2 — so look out for that, and for when that's going to break master and everything. But let's see — then the controllers.
A
The controllers are kind of where all the magic happens. We talked about the actuator; let's go around the machine controller a little bit more. If you actually look at the actuator interface, you can see this is what we would expect an infrastructure provider to fulfill to create a machine. We'll have the context of a cluster and a machine, and when we call Create, we would expect the infrastructure provider to actually create that machine and give some information back.
A
kubebuilder will generate the scaffolding for us, so we'll run the controller — and this is one of the most important pieces: you will find this in every other *_controller. Even the node controller, the deprecated machineset controller, the deployment controller — they all look the same, and the biggest, most important part is what happens in this Reconcile method here.
A
What we do is fetch the machine: a request comes in — this is watching events, and we actually have a watcher, which is here — so we're just watching changes on Machine. You can add more watches if you want; there are controllers, I believe, that watch multiple objects. In this case we're saying: I want to watch machines, and enqueue a request for the object when an event happens on that machine.
A
So when you do kubectl create, or update some field on a machine, that's when the machine will get reconciled, right — and this is going to be one of the controllers; there could be multiple controllers watching the same object, and so multiple Reconciles will be kicked off at that point.
A
So we go and fetch the machine. It will make sure that the machine exists, because sometimes it does happen that a reconcile runs and, in the meantime, somebody deleted that machine — so we want to make sure the machine exists and retrieve all the information from it. The first thing we do after that is get the cluster for that machine, and this is where the label I mentioned before comes in: we go look at that label and then try to get the cluster.
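The fetch-then-tolerate-deletion pattern just described can be sketched as follows. The `store` type and `errNotFound` stand in for the controller-runtime client and `apierrors.IsNotFound`; this is an illustrative skeleton, not the real controller.

```go
package main

import (
	"errors"
	"fmt"
)

// errNotFound stands in for the API server's NotFound error.
var errNotFound = errors.New("not found")

type Machine struct {
	Name   string
	Labels map[string]string
}

// store stands in for the controller-runtime client.
type store map[string]*Machine

func (s store) Get(name string) (*Machine, error) {
	m, ok := s[name]
	if !ok {
		return nil, errNotFound
	}
	return m, nil
}

// reconcile sketches the pattern from the talk: fetch the Machine for the
// request; if it was deleted in the meantime, return successfully without
// requeueing instead of treating it as an error.
func reconcile(s store, name string) (requeue bool, err error) {
	m, err := s.Get(name)
	if errors.Is(err, errNotFound) {
		return false, nil // object gone; nothing to do
	}
	if err != nil {
		return true, err // transient error: requeue
	}
	fmt.Println("reconciling", m.Name)
	return false, nil
}

func main() {
	s := store{"node-0": &Machine{Name: "node-0"}}
	fmt.Println(reconcile(s, "node-0"))
	fmt.Println(reconcile(s, "already-deleted")) // false <nil>: skipped quietly
}
```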
A
As you can see, though, here we return nil, nil, which means that if there is no label, we don't fail — we make the cluster optional. There are some implementations out there that actually don't use the Cluster object at all and just use Machines, and they have other ways to tie them to a cluster.
A
Let's go into the logic a little bit. This is all going to change for the machine controller, but you can see here how we tie everything together. We go into this Reconcile, we get the cluster, we set the owner reference — excuse me, if the cluster is there — and we set some finalizers. The finalizers are needed for when we do garbage collection: when a delete request comes in, the garbage collector would remove the machine right away if there were no finalizer on it.
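The finalizer dance just described can be sketched like this: the controller adds a finalizer so the object cannot be removed until provider cleanup is done, then removes it to let deletion complete. The finalizer name below is illustrative, not necessarily the real constant.

```go
package main

import "fmt"

// machineFinalizer is an illustrative finalizer name.
const machineFinalizer = "machine.cluster.k8s.io"

// addFinalizer appends the finalizer if it is not already present.
func addFinalizer(finalizers []string, name string) []string {
	for _, f := range finalizers {
		if f == name {
			return finalizers // already present
		}
	}
	return append(finalizers, name)
}

// removeFinalizer returns the list with the named finalizer dropped.
func removeFinalizer(finalizers []string, name string) []string {
	var out []string
	for _, f := range finalizers {
		if f != name {
			out = append(out, f)
		}
	}
	return out
}

// canBeDeleted mirrors the API server rule: an object with a deletion
// timestamp is only actually removed once its finalizer list is empty.
func canBeDeleted(finalizers []string) bool { return len(finalizers) == 0 }

func main() {
	f := addFinalizer(nil, machineFinalizer)
	fmt.Println(canBeDeleted(f)) // false: provider cleanup still pending
	f = removeFinalizer(f, machineFinalizer)
	fmt.Println(canBeDeleted(f)) // true: deletion can proceed
}
```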
A
One of the things that we have talked about is that a machine will become a Kubernetes node in any case, and the Machine status actually has a nodeRef, which is a reference to a node that may or may not live in the same cluster. This was a problem in the project for a while — that's why I mentioned there was no first-class support for management clusters, and this was another place where the management cluster was causing issues.
A
The nodeRef is important for knowing, for example, the status of a node — the Kubernetes node. What we do is set the node reference on the machine so that we can actually go and query its health. In the machine status we then say: the node is ready, so the machine is ready. And when we actually delete the machine, we can go and delete the node as well.
A
So that was more on the machine controller. I will now, super quickly, use kubebuilder to create a new controller — it's going to be of kind Test, but let's ignore that for now. What I'm doing here: I have kubebuilder downloaded locally, and I want to create a new controller — not an example. I will put it in the group "cluster", I want the controller to be named test, and I don't want to create a type.
So
this
is
where
I,
why
I
specify
resource
equal,
false
and
I
can
say.
Prayer
should
be
went
out
for
two.
These
are
useless,
but
I
guess
like
I,
I,
don't
want
that
resource
and
your
builder
runs
make
afterwards,
but
I
don't
want
it
to
run
make
so
I
just
said
false
this
case.
A
So this was a super quick way to create a controller — called the test controller in this case — and you can see here it created these three files, and this is the file up here. This is the file where we add the controller to this array, which is actually defined in here, and this is called from the manager's main.go that we saw before.
A
This controller has one worker if I run it right now, but there's this big warning here saying you need to add something for the business logic, and it actually won't do much. As you can see, it will try to fetch a Test instance; that's because we called it Test, so this is just how the scaffolding works. You can also see the kubebuilder tags, which in this case generate the RBAC.
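The scaffolded reconcile loop being described has roughly this shape. Below is a hand-rolled sketch that uses a plain map in place of controller-runtime's client and Request types, so the names here (Reconciler, store) are illustrative stand-ins, not the generated code itself.

```go
package main

import "fmt"

// Test is a stand-in for the scaffolded kind.
type Test struct{ Name string }

// Reconciler mimics the generated controller: the map plays the role
// of the API server that the real code queries through a client.
type Reconciler struct {
	store map[string]*Test
}

// Reconcile mirrors the generated skeleton: fetch the Test instance
// named in the request, treat a missing object as "deleted, nothing
// to do", and leave a spot where the business logic would go.
func (r *Reconciler) Reconcile(name string) error {
	obj, ok := r.store[name]
	if !ok {
		// Object deleted after the event was queued; nothing to do.
		return nil
	}
	// Business logic would go here; the scaffold leaves this empty.
	fmt.Println("reconciled", obj.Name)
	return nil
}

func main() {
	r := &Reconciler{store: map[string]*Test{"foo": {Name: "foo"}}}
	r.Reconcile("foo") // prints "reconciled foo"
	r.Reconcile("gone") // no-op: object absent
}
```

This is the "fetch the instance, then do your work" shape that the big warning comment in the generated file asks you to fill in.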
A
This was just a super quick way to create a controller, and I actually did the same for the noderef controller, which sets the node reference. It's called the noderef controller, but it actually watches machines; it's not watching any NodeRef type. So you can actually shape these controllers as you please, and as you create a new controller you add your business logic in there.
A
Alright, for developers out there that want to contribute, please contribute to Cluster API. The Makefile is a good starting point. I mentioned before that we use controller-tools a lot to generate files, CRDs, etc. We do actually vendor everything in, but as you can see here, I have all of this under the config directory.
A
For example, the AWS provider today creates a file called provider-components, which is a big massive YAML file that has all the CRDs from Cluster API plus its own. These CRDs are actually generated by controller-tools. To look at an example, I deleted those here, and I can run `make generate-manifests`. As you can see, I'm running `go run` on controller-gen, which is part of controller-tools, and it will generate the CRDs.
A
So in here we create the manager first, and this just comes from controller-runtime. Then to the manager we need to add the controllers, and we also have to set up the scheme, which holds the types that we're actually going to use, and the controller's AddToManager. So if we go here, this is the file that I was showing before, this file, and AddToManager will do a range over the AddToManager
A
Functions, which are the functions generated from the controller-runtime and kubebuilder scaffolding. As you can see here, we're pretty much just adding to the same array with init functions. So this runs when you actually import the package: we add machine, machinedeployment, machineset, node, and noderef, and they all end up here. So when we run this in the main file, this array (a slice, really) will already be populated with all those functions.
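The init-based registration pattern described above can be shown with plain Go. The names below (Manager, AddToManagerFuncs, the two add functions) loosely mirror the real ones but are illustrative stand-ins, with a string slice in place of real controllers.

```go
package main

import "fmt"

// Manager is a toy stand-in for controller-runtime's manager.
type Manager struct{ controllers []string }

// AddToManagerFuncs is populated at package-init time, so it is
// already full by the time main runs.
var AddToManagerFuncs []func(*Manager) error

func addMachineController(m *Manager) error {
	m.controllers = append(m.controllers, "machine")
	return nil
}

func addNodeRefController(m *Manager) error {
	m.controllers = append(m.controllers, "noderef")
	return nil
}

// In the real project each controller package has its own init that
// appends its setup function when the package is imported.
func init() {
	AddToManagerFuncs = append(AddToManagerFuncs,
		addMachineController, addNodeRefController)
}

func main() {
	mgr := &Manager{}
	// Range over the registered functions, exactly the loop the
	// walkthrough points at in AddToManager.
	for _, f := range AddToManagerFuncs {
		if err := f(mgr); err != nil {
			panic(err)
		}
	}
	fmt.Println(mgr.controllers) // [machine noderef]
}
```

The point of the pattern is that main never names the individual controllers; importing the controller package is enough to get them registered.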
A
Vince: Yeah, two questions here. I can see there that when you instantiate the manager you set the namespace. Does this mean that everything in Cluster API is namespace-scoped, or is this specifically for something like creating a lock object for high availability of the manager's controllers? That's what I really wanted to ask.
A
So yes, pretty much all the objects in Cluster API, all of these here, are namespaced: Machine, Cluster, MachineDeployment, MachineSet, all of them are actually namespace-scoped, so you need to create them in a namespace. This one is actually a flag to watch only a single namespace, and this was added for security reasons more than anything. Imagine that you're running a management cluster and you have multiple namespaces, and each namespace is, for example, an environment: production, staging, and development.
A
For example, you might decide that you don't want to run the same Cluster API across all namespaces, which is the default. By default Cluster API will watch all namespaces and all the objects that are running in all of them. But if you want to say, I actually just want to watch this namespace, you can do so and limit what the manager will watch.

Yeah, that definitely makes sense.
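The single-namespace flag discussed here can be illustrated with a toy filter. In the real code the manager takes a namespace option from controller-runtime and restricts its watches and caches; this stdlib-only sketch just shows the observable effect, with an empty string meaning the default of watching everything.

```go
package main

import "fmt"

// Object is a minimal stand-in for a namespaced Kubernetes object.
type Object struct {
	Namespace string
	Name      string
}

// watchFilter returns only the objects the manager should see. An
// empty watchNamespace means "watch everything", matching the default
// behavior described above.
func watchFilter(watchNamespace string, objs []Object) []Object {
	if watchNamespace == "" {
		return objs
	}
	var out []Object
	for _, o := range objs {
		if o.Namespace == watchNamespace {
			out = append(out, o)
		}
	}
	return out
}

func main() {
	objs := []Object{
		{Namespace: "production", Name: "cluster-a"},
		{Namespace: "staging", Name: "cluster-b"},
	}
	fmt.Println(len(watchFilter("", objs)))        // 2: default, all namespaces
	fmt.Println(len(watchFilter("staging", objs))) // 1: only staging
}
```

This is the security argument from the answer above: scoping the manager to one namespace keeps it from ever seeing objects in the others.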
A
I mean, I've seen a lot of people in the operator community complaining about this exact problem, where you have an operator or a controller that observes the entire cluster, and they just wanted to observe a certain namespace. It's an interesting discussion, but yeah, thank you for verifying that, of course.
A
Sorry, I always do this at the end of meetings. I was looking into how you build and how you run certain things, and it doesn't seem that the Go calls rely on the vendor directory.
A
Why is there the vendor directory? I want to understand this, because sometimes we may run into bugs and we think that we're using the code in vendor when we're not, unless we're using the flag, which I believe is -mod=vendor in Go. So which one are you referring to, which file? Yeah, I mean even the build stuff. It's just `go build`.
A
For instance, there is no -mod=vendor, so it's not actually enforcing the vendor folder to be used. Is this something you're aware of? I mean, I've run into this before and I was not able to figure it out, so I wonder if you have anything protecting us from running into that here. Yeah, so the vendor folder was left there, and we actually have ways to check that `go`
A
Mod
bender
was
actually
to
run
before,
for
example,
like
you
have
been
a
PR,
and
the
reason
for
that
is
like
first
its
consistency
with
like
what
upstream
kubernetes
and
controller
runtime
control
tools
are
doing
so
they're
doing
the
same,
to
keep
backward,
compatible
behavior,
for
example,
if
you're
not
on
go
112
just
as
an
example.
These,
though,
like
this,
do
use
the
vendor
directory
and
that
speaker
and
we
do
keep
it
because
code,
generator
and
I
believe
also
former
chain
and
pretty
much
everything
in
code.
A
Yep, I think I'll stop sharing. Perfect. Well, if there are any other questions, please do come find us on Slack or open GitHub issues, and definitely thank you for coming in. If you want to do this again, maybe next month, we can probably schedule something; we can talk about v1alpha2, and let's talk about the name, whether clusterctl is pronounced cluster-cuddle or cluster-control. I'd like to have that debate. Just kidding.