From YouTube: Cloud Native Social Hour - April 12, 2019
Description
Join us for Cloud Native Social Hour on Friday 4/12 at 2:00pm PST!
@apinick, special guest @stephenaugustus, and friends are going to discuss Cluster API and run through a demo of Cluster API using Azure.
A: All right — so this time on the Cloud Native Social Hour we are going to be talking about Cluster API and the Cluster API Azure provider. Part of the reason we're doing that is nepotism's sake: my buddy Stephen and I are the co-maintainers of the Cluster API Azure provider — and by that I mean Stephen is doing basically all the work and I'm occasionally reviewing. So that's how the community works, right?
A: I built a cluster with Cluster API last week, and this week I've been playing with Windows Kubernetes. So it's been — it's interesting. I'm doing the aks-engine route right now, which kind of, you know, sets everything up in the equivalent of a CloudFormation template. I hope to be able to do things a little bit more manually with kubeadm soon. But so far, so good — I mean, past all of the infuriating Windows stuff, like the fact that there's...
A: There are some people — I personally haven't been doing it. The last time I touched Windows was actually, technically, last night, when I played Borderlands 2 on my fiancée's laptop, because the Windows version of Borderlands 2 took an update and I can't play my Linux version anymore. So that sucks — it's a game that's five years old and all of a sudden they push an update.
A: For some places this may be their first interaction with Linux in their ecosystem, right? Because the control plane of on-prem Windows — or just any Windows Kubernetes — is still Linux, right. It's only the kubelet and the worker nodes that are going to be Windows-compatible. So it's going to be interesting for some people, I think.
C: Interesting and cool. So we also have a little doc that is just, generally, the framework — the very loose agenda — of today's social hour, up in that HackMD. And we have a section called "Twitter handles for gratuitous follow requests" — if you're on the call and you want to say hello, drop your name in there. That's — yeah.
D: I went to a workshop — one that was run by Docker Inc. about a year ago. For a while they were doing a lot of these road shows; like in LA, where I lived, they had one in LA and one in Orange County. And I spent about two and a half hours trying it, and came away just astounded at how slow it was. The idea was that you have this working legacy app, and we're going to stand up a Microsoft SQL Server and a Microsoft IIS server.
B: ...struggling with the fact that it requires the container that you run to be built on the same kernel version as the actual worker node that you're running on — which, to me, kind of defeats the purpose of containers altogether. Like, if you can't emulate as much as possible... yeah.
D: Well, in my mind, having worked in the backup arena a long time, there are more things to worry about than what Velero is going to take care of for you. It might do a lot of it, but it's a building block, and a lot of it is even simple things that you'd be shocked how few people think of until the ugly moment happens — like just having a runbook giving instructions on how you recover.
D: Even how to log into where the backups are stored, to get them out. So, anyway, that's one of them. The other one is the VMware SIG, which covers the cloud provider and storage plugin for running Kubernetes on vSphere. And the cloud provider SIG itself had a paucity of people who wanted to go to Shanghai — so even though I'm not in a leadership role on that SIG, I got asked if I could stand in to do the SIG's presentation.
D: If you worked it out, it's like 5% of the people went to it. So it isn't clear that the intersection of VMware and IoT is viewed as strategic at this point in time. But my observation is that, out there in the user communities, there are people who think there is such an intersection — and right now people like Chick-fil-A are really...
D: If you went to that talk, it was fascinating — but it's violating a lot of the assumptions that went into the original architecture of Kubernetes, which assumed a public cloud with, more or less — well, it is literally infinite scalability — but, you know, limitations on underlying resources were not commonly dealt with. It wasn't something you expected to deal with on a regular basis.
D: Eventually you might use up what you have, and then you order more hardware, and it might take a few days for it to come and get provisioned. But it's a fact of life where Chick-fil-A is running on three Intel NUCs, and that's all you're ever going to have — and some people in that space dream of running on two instead of three, maybe even one instead of three. It's kind of interesting.
D: The tough things for those running out at those edge locations are both the resource constraints and the fact that there's no backstop of any IT-knowledgeable person, like, ever. It's sort of, you know, the smartest guy on the third shift who might be called upon to do something. And I got involved with VMware doing the same sort of thing on vSphere. So right now, interestingly enough, there are vSphere clusters — very small ones — in every Kroger grocery store, including the brands like Ralphs that are part of Kroger; and every Home Depot has a vSphere cluster. And they've managed to be running these things for over a decade, successfully, with remote management. So pretty clearly, if you could do it with hypervisors, there's no reason you shouldn't be able to do it with container orchestrators — but the issue is whether Kubernetes is ready for it right now, and I think the answer is no, not yet.
C: I think that's one of the really exciting parts about the community in general: we have these opportunities, like working groups. They are kind of outside of — well, within the governance structure — but, you know, the ability to aggregate all of these different SIGs to talk about some effort, and then actually do it, right? It's like: we talk about "hey, these things are hard," and then people get together and go, "okay, well, it's hard, but let's fix it," all right?
C: So — finally having this conversation that comes up, like, every two years — or, you know, every year and change — in Kubernetes: hey, what about LTS, right? Like, is LTS even a thing that's on the table? And the fact that this is actually — like, we spun up a working group, people are actively talking about it: people on the release side, people on the testing side. Like, everyone is getting involved.
A: Also — LTS, for everyone who isn't familiar, means long-term support, right? So: something that can be used longer than the normal lifespan of a Kubernetes version. And, you know, we do these things not because they're easy, but because they're hard. That is a phrase that I just came up with — please don't fact-check me.
D: That's one of the other things that comes into play at these edge locations, with the resource constraints: there are plenty of people used to public cloud who say, "sure, the update procedure is: stand up new cluster nodes in parallel with the old ones and then flop over." But if you already just barely have enough hardware, that becomes problematic. So people now in that IoT edge space are definitely looking at that — they're definitely leaning strongly towards an upgrade-in-place philosophy.
A: Yeah, that makes perfect sense. And, you know, a lot of the challenges that they face, like I said, are around the resource constraints, and I think Chick-fil-A has done a really good job of creating this pipeline — not just for, like, their code, but also for their hardware, right? That's the big part — the actual main hard part of IoT or edge computing is getting the resources where you need them.
A: If there's something wrong with the Kubernetes cluster at a Chick-fil-A, they will ship out a new cluster. They have, like, this machine they set up that kind of just prints these clusters out, and they just send them out. So the resource constraint isn't so great at the store location, because they have this place — this factory, essentially — that just pumps them out on a regular basis. So upgrades can just be: "I know we need to upgrade the cluster — boom, here you go, put that in."
D: And after three or five years they're old enough that they're not even going to be worth diagnosing — you know, past the useful life of the components. So they want this scenario where the box just shows up, and some time during the night, when the store can maybe deal with a potential outage, somebody gets instructions of: here's a picture of what this looks like; unscrew the old one, put this one where it was, and plug it in. And that's all you've got to do. Yeah.
A: If you look at Kubernetes, right — Kubernetes is this API. I know people may invite me to eat my words, but, as I'd characterize it, it's very similar, in fact, to the Linux kernel, right? And so it's like the old problems are new — they're the same, but at a grander scale now, right? And so, like, you have this issue with vSphere at these edge sites, and lo and behold, we're having similar concerns and issues again at a different — or grander — scale, or maybe a more mobile scale at this point. Yeah.
D: And what's interesting — that I think is lacking in Kubernetes, but we could get there — with that old vSphere want for clusters: people complained about the expense of having three nodes and wanted to go to two, and with ESXi hosts that's a fairly expensive piece of hardware, right? I mean, they're like 2U server-grade Xeon hardware, yeah.
D: Probably ten thousand bucks apiece — so two of them versus three is a significant amount of money. So they wanted two, and they came up with a method of putting the third one in the cloud. It's called an external witness, but I think a lot of dev work went into getting that to work. So even the storage is, in theory, three-way shared, but you can deal with network partitions and things by having redundant connections out over the Internet to this third witness thing standing in the cloud right now.
C: I say on every call where I mention the release team that I think joining the release team is one of the most valuable experiences in participating in upstream Kubernetes. So if you have the opportunity to do it — or if you just want to check out the release team meetings — you can join the SIG Release mailing list, and you'll automatically get invited to those meetings. Right now we do a weekly release team meeting, and then, moving into later in the cycle...
A: I will not be going into these meetings with this level of energy, because they're at 9 o'clock in the morning — so maybe no happy hour the week of code freeze, and that's, you know, KubeCon. So that's a luxury. It's kind of an interesting position to be in, because the first half — or actually, like, the first 3/4 of the release cycle — my job is sitting back and relaxing, because the bugs haven't been added to 1.15 yet, so we don't really have a lot to do.
A: We try not to be too chatty with our requests — like, "hey, does this actually apply? Are you going to be working on this?" blah blah blah — which I do, like, at least once a day, my communication on these tickets. But that's how we get things kind of shepherded into the release: just by pinging the relevant parties. And, you know, like SIG Node — one of the groups I talked to last time, and usually you'll talk to them every release; those things touch a node for some reason — and so we'll be like, "hey..."
C: So we've also gone into these modes where we've tried to figure out if it makes sense to use automation to solve some of them. And there is immense value in the right human touch — reaching out to people — as opposed to, like... we have a bunch of bots that run on Kubernetes PRs and issues that do all kinds of things: match labels, do merges, do squashes, tag them.
C: You know — tag issues, retests — so all these things are happening, but we're like: this may be one too many, you know? And it's more important that it's handled by a human. I think having that feedback loop between humans, as opposed to having a bot do it, has been a lot more successful for us. Yeah.
A: And I've got to say, the automation that goes into the Kubernetes releases is astounding, and I can't imagine going through a release without it. It's unbelievable, the amount of stuff that's going on behind the scenes. And if you want to see all these things, you can join the release meetings — they are open to the public.
C: I think, overall — you know, of course, for every release team we can only have a finite amount of people on the team, just to ensure that you can actually have a real connection with the shadows that you pick. But what we're trying to do overall, even in the acceptance-and-rejection process of the shadow selection, is — we want to keep you in the community, right?
C: I think last cycle was the first cycle where we sent out a questionnaire, and then we kind of signal-boosted the questionnaire on kubernetes-dev and on Twitter, and we ended up with, like, 200-plus volunteers, right? And that's a little larger than we need for a team, right. So, for people going through this process: just be aware that there are infinitely more — you know, infinitely more rejections than there are acceptances, right? But it's not rejection, right?
C: Complain with us! So — I posted in the HackMD (that's what it's called) a link to DevStats. Not everyone knows about DevStats, but it's kind of interesting: it's a whole bunch of Grafana dashboards, essentially, with a whole bunch of metrics around how the community flows — like who is contributing to what, what repos are most active, the velocity at which, you know, issues are created or closed, or PRs are opened and merged. So, a whole bunch of really interesting statistics there.
A: What is Cluster API? That is a very good question — thank you for asking. You're welcome. Cluster API, basically, is a means — a mechanism — to spin up a Kubernetes cluster, or even the machines themselves, in a fashion that follows the Kubernetes API. So you can schedule clusters and machines like they were pods, like they were services — which is something that I found to be very, very fascinating.
A: The idea of a machine as a scheduled object is really cool — but it seems like that's something you could handle pretty easily with something like CloudFormation, or the Azure equivalent, which I can't remember the name of at the moment — some form of, like, user data. (What's that? Deployments? Deployments.) So these things could be handled — but to make it a contract, like an API that everyone can use, there needed to be some form of governance around it, and that's where the Cluster API project kind of steps in. So — Stephen.
C: For sure. So, you know, like Nick mentioned, Cluster API is — what's the tagline? — it's declarative configuration of Kubernetes clusters using the Kubernetes API, right? So it's the idea, again — it's like the Git book is in my head now — just imagine this world where you can leverage things like CRDs, or custom resource definitions, to define the specification for what a machine looks like and what a cluster looks like. Great: so we have a homogeneous way of doing this, right? We have an abstraction for this.
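As a rough sketch of that idea: a machine becomes just another manifest. The field names and apiVersion below are illustrative of the v1alpha1-era shape, not an exact schema, and the object name is made up.

```shell
# Hedged sketch: what a Cluster API Machine object can look like as YAML.
# The apiVersion and fields reflect the v1alpha1-era shape, not an exact schema.
cat > machine.yaml <<'EOF'
apiVersion: cluster.k8s.io/v1alpha1
kind: Machine
metadata:
  name: demo-node
  labels:
    set: node
spec:
  versions:
    kubelet: 1.14.1
EOF
# The machine is now just another manifest you could apply like a pod or a service.
grep -c 'kind: Machine' machine.yaml   # → 1
```

The point of the sketch is the shape, not the contents: once machines are objects like this, the usual Kubernetes tooling and reconciliation loops can operate on them.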
C: That means that we can start to do really interesting things, right? We can do things like autoscaling; we can add additional functionality to providers; we can focus on not even having providers, right? Like — what if I can build Kubernetes consistently, right, on bare metal, on vSphere, on AWS, on Azure — and this is the mechanism for it, right?
It's
a
class,
a
cluster
lifecycle
project
and
we
kind
of
have
shared
custody
of
the
the
the
individual
providers
right.
So
there
is
so
if
someone
can
pop
the
link
for
cluster
API,
say
cluster
API.
That
repo
will
give
you
a
list
of
all
the
current
implementations
of
cluster
API.
So
off
the
top
of
my
head
I
know:
there's
a
GC,
p1
and
Azure
one
AWS
I
think
there
might
be
an
IBM
one,
there's
a
vSphere
one.
There
is
a
you
know.
C: There are — look at that — quite a few; that will give you a list of all the current implementations. Also, you know, this is a model that we have decided on as an industry — as a community, really — that we're moving forward on. So it's very interesting to see it happen — to see, you know, company product strategy start being built around this, and then seeing products that you have known, or maybe used...
C: You know — like a PKS, like an OpenShift, right? All the new versions of these products are moving towards a Cluster API model, right. So what we wanted to do today was — again, nepotism — we're like, let's show off what we've been working on. So — I guess, the genesis of the first... Are there any questions about Cluster API? Yeah.
B: It provides a really nice declarative model — so it matches, sort of: if you've bought into a declarative approach and you love it for all the rest of Kubernetes, here's a declarative model for declaring what you want to happen. It's the type of cluster you specify, and then that's what should go happen. Correct?
A: So the base repo is kind of like: this is how we're defining the API that will define machines, that will define clusters — this is how we're defining these things. The provider repos actually have the mechanisms to make that happen, right? You're like, "I want this machine to happen" — and if you take what you'd use with the Amazon provider and hand it to the provider in GCP, it's going to be like, "yo, that's not the same thing — I don't know what you're trying to do." So there are some mechanisms that exist specifically for those providers that those sub-repos provide.
A: Now, there is some talk in the community about taking some of the work in the specific providers, making it abstract, and then moving it up into the Cluster API repo. Like I said, the definition of a service, the definition of a machine — those things should be contracted across the providers, right? At least, that's my take. Yeah.
C: There are helper tools and things that are littered across each of the provider implementations that could probably — you know, let's not duplicate the effort. So Chuck has worked on a releasing tool — great — for the Cluster API for AWS, right? And then there's a certificates package that both AWS and Azure implement, right.
A
So
something
you'll
see
that
that's
kind
of
interesting
in
the
cluster
rekha
repos
is
that
the
providers
themselves
create
or
define
the
cluster
CTL
command.
So
you
know
how
there's
like
a
cube:
CTL
command,
there's!
Well,
there's
not
with
clusters
detailed
command
to
do
cluster
control.
Each
provider
at
the
moment
has
their
own
binary
because
it
has
the
specific
commands
to
create
those
things
that
will
eventually,
hopefully
move
up
to
the
cluster
API
repo
itself.
So.
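A hedged sketch of what a per-provider clusterctl invocation looked like in that era — the flag names varied between releases and providers, so treat this as the shape of the call rather than exact syntax; it's printed rather than executed so the sketch stands alone:

```shell
# Illustrative only: each provider shipped its own clusterctl binary, and a
# create call took the generated manifests as inputs. Flags are approximate.
clusterctl_sketch() {
  cat <<'EOF'
clusterctl create cluster \
  -c out/cluster.yaml \
  -m out/machines.yaml \
  -p out/provider-components.yaml
EOF
}
clusterctl_sketch
```

The inputs map to the files the demo's `make manifests` generates: the cluster spec, the machine specs, and the provider's controller components.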
C: If I'm understanding the history correctly, part of the reason is that there were initially a few providers that were in the top-level repo, and they were kind of split out to, you know, speed up velocity. I think that once we find consensus across the providers — or we get close to consensus, close to quorum, about the way certain patterns should be built — I expect to see some of the provider-specific stuff fold back into the top level. So...
C: Perfect. Well — so, huge shout-out to Platform9: they initially started off the work for the Azure provider. I have this curious habit in the community of ending up on Azure projects. So, initially — this goes back to CoreOS, where I was working on a project and they were like, "hey, we need to do that —"
C: "— you want to chair this SIG now?" So, invariably, whenever something Azure-related comes up, I will end up in the mix. And so I was like: okay, well, Cluster API is happening — I wonder if we have an Azure implementation, right? And I started poking around and noticed this Platform9 implementation. And the curious bit about it is — the Platform9 folks — one of the guys who was the primary maintainer...
C: He was an intern, so he is back at school now. And on the Microsoft side — so Microsoft got involved in the Platform9 implementation, and they also had an intern working on it, and he's back at school too. So, really — okay, real talk: who's actively maintaining this? I was like — I think that, you know, there should always be a goal to minimize effort; so, where possible, we shouldn't be creating a new provider...
C: ...if this one is workable, right? So I kind of made some inroads to migrate the Platform9 one — got their permission, of course, and access to the repo — and then migrated this into the kubernetes-sigs org, all right. So the kubernetes-sigs GitHub organization is essentially the top-level organization for what should be all of the SIG-related work, right? And then we kind of got to work. It was — so, I was just getting ready to leave Red Hat...
C: ...and I told two teammates and two friends: "I'm looking at this Cluster API thing, and I think — I think this is the direction that a lot of people are going, and it would be nice to be ahead of the game." That's one: for you, you know, personally. And two — it's also a strategy play, right? So...
C: You know, we spent a week hacking some of this out — and binge-watching Steven Universe — and, yeah, got to a place where I think it was close to workable. We refactored quite a bit of it — and shout-out to the AWS folks, everyone who's working on cluster-api-provider-aws. Yeah.
C: We'd gotten the repo to something that spun up just a control plane — if you knew the right buttons to tweak and things to play around with in the repo; otherwise it was just broken on master, right. So: now, not broken on master. Now we have a release process. We have more than just two contributors. We've, you know, started to talk to —
C: We had a good chat with a bunch of the folks at Microsoft and have onboarded contributors on the Microsoft side, and we're starting to talk to Red Hat as well. So, slowly but surely, this is turning from what was, I think, largely an intern-and-hobby project into something that's actually a real implementation. So I'm proud of — I'm really proud of some of the progress that we've made. I don't want to keep yapping about it, though, if...
C: We have this process — there's a deep-copy generator that runs — and usually you have to remember the file to go to; I think it's pkg/apis — api.go, or something — and within that file is a snippet of the code to run, right. So I took that out and turned it into a make target, and apparently I didn't do that right. Yes, I've broken part of the generation — I think that was in the last PR — so I'll clean that up, probably. Yeah.
C: We had a lot of stuff that was MVP — or would be considered MVP. So the project's kind of broken into multiple milestones. The milestones, as I consider them in my head, are: baseline, MVP, v1alpha1, and next, right. So the baseline milestone is kind of like: okay, we've just migrated this project in; it needs Prow plugins...
C: It needs an OWNERS file, it needs security contacts, it needs, like, bug templates and issue templates, and all these little things that would go into making it a Kubernetes project. That's what I consider that milestone — so anything that kind of fits that mold ("oh, we've got to tweak some tests," or something like that) would fall into the baseline milestone. The next milestone is MVP, and MVP was kind of created as a milestone to say: this repo is broken right now, right?
Alright,
so
first
things
first
I'm
going
to
just
edit
some
of
these
environment
variables,
so
cap
C
that
is
going
to
become
I.
C: — the cluster name. So, I mean, a lot of these variables are pretty self-explanatory. This one is setting the resource group that it's going to create in Azure. So, for anyone who is not familiar with Azure, or more familiar with AWS than Azure: Azure has a concept of resource groups, which is pretty cool, as it allows you to encapsulate a set of resources within this, like, logical grouping, right.
C: So if I'm — you know, if I am playing around — this is a perfect example, right: if I'm testing out a bunch of Cluster API Azure clusters and I want to destroy them very easily... If anyone has tried to destroy a VPC in AWS, you know — it's like, "oh well, these rules are connected to this, or this IGW, like this, its NAT..."
C: ...and you run into these very interesting things. So being able to consolidate these into a single resource group is pretty cool, because I can just go and delete them easily. I can also use the resource group to scope the access that I provide to certain people, right — so certain people can create resource groups, or certain people can operate in certain resource groups. So there's a lot of flexibility around, one, the organization, and then, two, the way that you build permissions in Azure.
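To make that lifecycle concrete, a hypothetical Azure CLI session for this workflow might look like the following. The group name is made up, and the commands are printed rather than executed so the sketch stands alone without an Azure subscription:

```shell
# Hypothetical az CLI sketch of the resource-group idea described above:
# everything for one test cluster lives in one group, so cleanup is one call.
rg_lifecycle() {
  echo "az group create --name capz-demo --location eastus"
  echo "az group delete --name capz-demo --yes --no-wait"
}
rg_lifecycle
```

Deleting the group tears down every resource inside it, which is exactly the "destroy my test clusters easily" property being described — no chasing individual route tables, NICs, and gateways the way a VPC teardown can require.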
C: So: the Kubernetes version is setting the Kubernetes version. The manager image tag — because I'm still kind of playing around; we're pre-release, but right now — I keep saying I'm hoping to cut it this week. There is a refactor that I want to land before I cut that, and it should add support for existing resource groups as well as existing virtual networks. I would love to land that before I move on to the next version. So: cluster name, again, pretty obvious. And then, finally, we're going to run this —
C: — this make manifests target, right — to see what's kind of happening behind the scenes. You know, at the end of the day — with everything in Kubernetes, I think it's kind of funny that there's always, like, a bash script hidden somewhere that does something that you may or may not expect it to do. So...
C: One of our targets actually will do an environment substitution of a bunch of these environment variables, right. So we've got a standard control-plane machine type, a node type, a Kubernetes version that's set, the cluster name, and some of the things that we saw before. If you don't provide a resource group, it will take some random string — that I decided to be clever and do — and copy that random string, right.
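A minimal sketch of that substitution step, assuming variable names like these (the actual names the Makefile reads may differ — they're illustrations based on the talk):

```shell
# Hedged sketch of the environment-variable setup before `make manifests`.
# Variable names are assumptions based on the talk, not the exact ones.
export CLUSTER_NAME=capz-demo
export AZURE_RESOURCE_GROUP="$CLUSTER_NAME"   # the demo defaults the group to the cluster name
export KUBERNETES_VERSION=1.14.1

# The make target then stamps these into YAML templates, envsubst-style:
template='cluster: ${CLUSTER_NAME} group: ${AZURE_RESOURCE_GROUP} k8s: ${KUBERNETES_VERSION}'
eval "echo \"$template\""   # → cluster: capz-demo group: capz-demo k8s: 1.14.1
```

The `eval` line just stands in for `envsubst` here: the template file holds literal `${VAR}` placeholders, and the generation step expands them from the environment.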
C: Then, you know, some additional substitutions around — so this kind of, like, preps the Azure credentials to be base64-encoded and then popped into a Secret, right; and then that Secret is mounted into, like, the controller manager for the Azure provider. So: more things around this templating for these files — there's a cluster file, a cluster network spec, a machines YAML, right — and these are all — I mean, you know, when you see the file you're like, "oh wow, this is Kubernetes," right?
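The credentials step can be sketched like this — the variable and secret names are illustrative, but the mechanics (base64-encode the value, drop it into a Secret's `data` field) are the standard Kubernetes pattern being described:

```shell
# Sketch of the credentials prep: base64-encode a secret value and pop it into
# a Kubernetes Secret manifest, as the templating does for the Azure creds.
# The variable and secret names here are illustrative.
AZURE_CLIENT_SECRET='s3cret'
CREDS_B64=$(printf '%s' "$AZURE_CLIENT_SECRET" | base64)
cat > azure-secret.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: azure-credentials
type: Opaque
data:
  client-secret: ${CREDS_B64}
EOF
grep 'client-secret:' azure-secret.yaml   # →   client-secret: czNjcmV0
```

A Secret like this is what gets mounted into the provider's controller manager so it can authenticate against the Azure APIs.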
C: Right — you can take all the knowledge that you've been gaining in Kubernetes and apply it to actually building Kubernetes for yourself, instead of worrying about deployment pipelines and, you know, "how am I going to provision these systems?" That kind of goes away. So if we — we can check out this credentials YAML template, and you can see, as I said, it's pulling these environment variables and popping them into a Secret for the cluster.
C: And in the cluster YAML you can see we're setting some CIDRs for the services and pods networks, the service domain, and then, finally, some of these environment substitutions again, right. This is the provider spec, and we're essentially providing a resource group and location, right — so, the things that Azure would need to understand to actually instantiate resources within a region. And then, finally, the machines — the machine YAML — which, again, has a lot of environment substitutions for the cluster name.
C: But then you can also see this spec is like: hey, I want to know what the VM size is going to be; I want to be able to choose an image — you can't currently choose an image, but that's probably something we'll look at doing later — but then, you know, choosing the OS disk size, the SSH public/private keys. So: all the information that I would need to know to, again, create a machine within Azure, right. So — enough.
A: Good — so, taking a look at that file, it looks exactly like a Kubernetes manifest, right? That's the thing that I find to be so cool about this: it looks exactly like the way you deploy, like, a pod, but in this case it's actually affecting something in reality — it's a machine that you're creating, right? And that's all — I think that's really awesome, so I just wanted to call that out. Yeah.
C: It's this, like, super tangible thing. And, you know, if you're more curious about what some of this stuff looks like, we actually have the types defined for the API, right? So, you know, this is a struct that includes, like: okay, I want to know the key pairs for, you know, different resources; the admin kubeconfig, right — and then, also...
C: How do we define a VNet, right? How do we define a network in general? A network is something that has a VNet and multiple subnets, right — or that's our consideration of what the network is, right. And then, you know, what's in a subnet? What am I expecting to get out of this struct, right? So: the idea of the subnet, the name, the VNet that it's tied to — all right.
C: So, you know, that's kind of what the cluster spec looks like. And then the machine spec, over here, is, again, all of the stuff that's getting spit out in that YAML template — the stuff that we're looking for, right? And, you know, an image is a struct, and the struct contains, you know, the publisher, offer, SKU, version — all the things that, again, you need...
C
— you know, to create this type of resource within Azure, right. And this is part of the reason that it's not the machine spec per se, right, but the Azure machine provider spec: because this is something where we can give a general idea of what some of these things are supposed to look like, but when it comes down to it, every provider has a different implementation of these constructs, right?
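The image fields listed — publisher, offer, SKU, version — can be sketched as a small Go struct. This is a toy illustration of the idea; the real definitions live in the provider's machine provider spec types:

```go
package main

import "fmt"

// Image holds the fields the talk lists as what Azure needs to resolve
// a VM image. Field names here are illustrative, not the provider's.
type Image struct {
	Publisher string
	Offer     string
	SKU       string
	Version   string
}

// URN renders the image in Azure's publisher:offer:sku:version form.
func (i Image) URN() string {
	return fmt.Sprintf("%s:%s:%s:%s", i.Publisher, i.Offer, i.SKU, i.Version)
}

func main() {
	img := Image{Publisher: "Canonical", Offer: "UbuntuServer", SKU: "18.04-LTS", Version: "latest"}
	fmt.Println(img.URN())
}
```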
C
So I'm gonna run all of these in tandem and make this screen much bigger, right. So, yeah, just an explanation of what's happening here. The `make clean` target — so, in the manifest generation, we create an `out` folder within these examples, right, so it includes information about that cluster specifically. So you can see that these `clusterctl create cluster` calls are actually pointing at the out cluster, out provider-components, and then out addons and out machines files, right.
C
`make manifests`, again, runs that generate-yaml script, so it will spit out all of these things that I was just talking about, right: the provider components, the addons, the cluster YAML, and, on the machine side, the machines YAML. All right. So `make kind-reset` is a target that will delete a kind cluster that's named cluster-api, right. So — kind — I don't think we need a — I think recently there have been a bunch of reviews and blog posts and meetups around kind.
C
I think it's a really cool tool, really powerful. It allows you to use Docker to spin up a bunch of Kubernetes nodes, right, and then the way it's built allows you to do really cool things, like edit the types of clusters that you expect to come out of kind, or sideload images that you may be testing at the time.
A
And, you know, one cool part about it is it's fast — it's pretty fast! So this allows us to iterate pretty quickly over, like, spinning up clusters.
C
Essentially, what happens in this flow is we instantiate a bootstrap cluster, right? So the kind cluster is our bootstrap cluster, and we deploy the Cluster API controller manager to that cluster, as well as the Azure provider controller manager, right. The Azure provider controller manager is the one that contains all of the doodads to create actuators and reconcilers and all that stuff.
C
I would say a vast majority of our make targets will call into Bazel, so part of the requirement is to learn a bit of Bazel. I've gotten to the point where — we're refactoring some of our interfaces, so we need to write different tests, which require mocks. So I'm now going through the pain of understanding how the mock generation process happens within Bazel and rewriting some of that stuff to generate mock interfaces.
C
All right, so we've created the binaries now, right? So I have a clusterctl, I've got a manager, and we are going to be off to the races after we run this command. So this command says: I want to create a cluster; don't go too crazy with the verbosity; the provider that we're going to target is Azure; the bootstrap type is kind. We technically also support Minikube as a bootstrap type — I have not tested Minikube in quite some time, because kind is just that much faster.
C
Right, so you see it's creating the kind bootstrap cluster right now. As you notice, I have, like, three terminal tabs open. The reason for that is — so, right here, I just want to watch the pods that are coming up, so I'm going to wait till it does the thing. So the kind cluster actually pops up, right, and —
C
Right, so we can see some of the assets that are coming up. You can see that it's using kube-proxy, obviously; we've got CoreDNS; and then this is just a cool command — just adding a watch to a `get` command, you can see when certain things are popping up, right? So I like to do that just to see how long it's taking for these assets to come up. We can see that the Azure provider controller manager is now running.
C
Yep, yep — and what's really cool about that is I have not manipulated my environment in any way. All the stuff that was happening was happening within the context of kind, right? So it's not like, oh, you know, I've switched this DNS resolver, I've done this thing, and I have to reconfigure my local machine. It allows you to have a consistent framework for running Kubernetes and for testing Kubernetes.
C
So it's kind of cool to see that kind is starting to be used for our actual testing framework within, like, Kubernetes test-infra. All right. So you can optionally — you can absolutely swap to using a kind job now, which, again, is faster. Do it — if you create tests for Kubernetes, like, use kind, please. So right now this is just doing a docker build, and if you want to see the Dockerfile: nothing, no magic in it.
C
It's just a multi-stage build, right. We're using Go 1.12.1, because it's good to be current, and then we're copying it into a GCR distroless base image, right. So there's an overarching effort in the community to move our images to distroless where we can, and, again, it's good to be ahead of the game. So we switched over in a PR not so long ago.
C
We're using Quay to store images; we will eventually be moving to GCR — the CNCF-maintained GCR. Right now this is just my Quay account, with an organization called k8s staging, holding those controller images. So we can see I just pushed up that version, and you can see that, like, there have been quite a few versions — and that's not including all the ones that have been deleted over the past few months.
C
So, a pivot was mentioned, right — so there's the interesting part, and where it gets even more meta. So what happens in kind is, like I said, we deploy these two controllers to handle some of the Cluster API objects, and then, after the control plane comes up, the clusterctl command actually drains that bootstrap cluster and pushes the objects out to the cluster that we've just created in Azure, right. So, functionally —
C
So what it actually did is — like, oh, it's doing all this crazy stuff, this crazy cool stuff, right. We then push the Cluster API controller manager and the Azure provider controller manager out to that new management cluster, and then you can manipulate that cluster from within that cluster, all right? So you've bootstrapped the cluster. So if anyone has ever worked with, like, Tectonic or bootkube, that's exactly what we did — just using different constructs, really.
C
So it's managing itself, essentially — you've built a Kubernetes cluster that now manages itself. And then, at the end of the process, clusterctl cleans up: it destroys the kind cluster, applies any add-ons for the cluster, and then gives you instructions about how to access the cluster — where the kubeconfig is, right. So the kubeconfig gets spit out in the root of that directory.
C
So, what's essentially happening is: now we can describe these things as Kubernetes objects, right, and now we're building controllers, essentially. So, controllers — you know, you turn them on, turn them off, you kill them, you don't kill them, whatever, right? They're supposed to essentially have some flow that they continuously reconcile.
C
So the cool part about being able to describe your cluster and your machines as objects within Kubernetes is that it's basically just YAML — I can kick it somewhere else, right? So after the control plane successfully comes up, it transfers the cluster's information and the machines' information over to the management cluster that you've just created, right, and that management cluster then instantiates that reconciler flow again. So it's like, hey —
C
I just want to point out some of the stuff that's happening right now — some of the stuff that did happen. So we have started the reconciler flows, right — I'll show you in the code what's actually happening. So there are reconciling-machine and reconciling-cluster flows happening, and the cluster one is the one that we care about.
C
This is the machine actuator getting ready to create a machine — but before it can create a machine, it has to create the NIC, right, and it's looking to create the NIC within the capz-rainbow-sparkles resource group, which does not exist yet. The resource group is about to be created within the cluster actuator. Right now we see that it's failed to update this machine, and it's failed to update the machine because it didn't have the NIC, right, and the actuator's like, hey, I'm just gonna —
C
— try again in a minute, is that cool? It goes and — like, I'm gonna hope I have the information by then, right. So, continuing down the line: you see we've successfully generated some certs, we've generated some kubeconfigs and some discovery hashes for kubeadm. Now we have created the resource group — the resource group is there. Now that we've created the resource group —
C
— we can start instantiating the rest of these things: the virtual network; creating subnets as a result of that; adding the appropriate security groups to the control plane and the node subnets; creating a route table for the node subnet, right. And so that's, like, the basic flow, right, and du-du-du-du — so now we're creating internal load balancers, and all of this is listed out, so you can tell exactly where it's happening in the code. Thank you, klog.
C
Like, oh, that was five clusters ago — where was that error message? But, you know — so: internal load balancers, the public IP, the public load balancer, right, and then the reconcile is essentially complete. It's running back through it because it just likes to have fun — I probably need to clean up some of that logic — but now we're at the point where the machine is ready to go, right? So we're looking at this: the virtual machines code is saying, I'm going to create the NIC.
C
So the VM is coming up, and right now it's running through the startup script, so we can inspect what's happening in the startup script too. But if we look at this clusterctl call, it's just waiting — it wants to see, in that cluster, when that control plane comes up, right, and it signifies that the control plane has come up by annotating the control plane with, like, cluster-api-azure equals true, or something like that, right.
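The readiness signal the speaker describes — an annotation whose exact key he only paraphrases ("cluster API Azure equals true, or something like that") — boils down to a map lookup. A minimal sketch, with the key and value as assumptions based on that paraphrase, not the provider's actual constant:

```go
package main

import "fmt"

// controlPlaneReady reports whether the annotation clusterctl waits for
// is present. The "cluster-api-azure" key and "true" value are taken
// from the speaker's rough description, so treat them as hypothetical.
func controlPlaneReady(annotations map[string]string) bool {
	return annotations["cluster-api-azure"] == "true"
}

func main() {
	anns := map[string]string{"cluster-api-azure": "true"}
	fmt.Println(controlPlaneReady(anns))
}
```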
C
Part of those — part of those: examples, azure, out, SSH key, right. That is a private key for all the nodes that we're gonna be SSHing into. That's an SSH — we'll call this rainbow-sparkles; I can't believe I named it that. So now we have the SSH key that we need. We also need to understand what I'm SSHing into — so, rainbow-sparkles: you can see all these assets have been created. What we care about is — reading is hard — we want to get the DNS name for the public —
C
I'm in. All right, so just to show you where these scripts end up: it's /var/lib/waagent, custom scripts, download, 0. So here we can see that it's downloaded containerd; it's got a script here, a standard error and a standard out. So we want to look at what the script is, really quick, right — pretty, pretty, pretty — oh my god. Oh my god, I said I wasn't going to — oh my god, all right. Oh —
C
That's how it goes sometimes, if you expose credentials. So it's doing a bunch of installs of the packages required — the CRI and CNI, all those good things. It pulls the kubeadm package and kubectl, and at this point it has all the information it needs to instantiate a Kubernetes cluster using kubeadm. So this is actually the kubeadm run that's happening, and then, at the end of the run, we've got —
— you see the information that you need to grab the kubeconfig, and below is some information about the bootstrap token. That is that script — so brilliant, can't believe I did that. All right, so over here — we see this follow that I was doing on the logs has stopped, and the reason for that is that that Azure provider controller manager has died, right.
C
So this cluster client has killed off that Azure provider controller manager, because it no longer needs it, right? It has the information that it needs. It's saying, hey, this is the pivot phase now. So the pivot phase generates the list of assets that are on the cluster that it needs to move somewhere else — that it needs to pivot, right. So we can see that there are no machine classes to care about, but there was a cluster that it found: rainbow-sparkles. So it's now moving that rainbow-sparkles cluster —
C
— that's in the default namespace out to the target cluster, the cluster that we've just created, right. And it's checking for machine deployments, machine sets, and then, finally, machines. It's found a machine, and it's transferring the information about that machine out into this new management cluster.
C
So once that happens, we realize that the machine list that I've transferred over actually includes one more machine, right — and it's the node. So right now it's like, oh hey, I've got one more thing to create, right? So it's going through the creation of the node right now, and if I get out of here —
C
— we can see it's running back through this reconciler flow, right. So, again, I should be able to use this like any controller manager in Kubernetes: it should be able to start its flow anywhere and be able to get to the desired state without having any expectation of, like, synchronicity, right.
C
So it's running through all of those steps that you saw before: reconciling the cluster, the certificates, making sure the security groups are in place, the subnets; it's found a VM for the machine, and it's starting to do an update for that stuff right now. So, what's cool about — again, we said these are Kubernetes objects, right? So if I just do `get clusters`, I can now see that cluster as a Kubernetes object, and if I want to go a layer deeper — yeah, `describe` — and see the certs.
C
That's going to get blown up — all of these certs will not exist soon — but, again, remember the stuff that I was showing you in the spec, the YAML file, before we started this whole journey: the things about the CIDR block, the admin kubeconfig, the service domain, so on and so forth. Right. So, clearing my screen again, because life is great.
C
We're all set now — let's check out machines, right. So it's created two machines, and we can actually see that, like, the machines exist, they have provider IDs; I can introspect on these things and get really interesting information — if I can copy it properly. Yeah, I'm gonna do a `describe`, if I can spell correctly, on a machine. So, again, it tells me versions of the kubelet, the size of the VMs, different information about, like, the running state of the machine, the public key, and some more information.
C
So it's chosen an availability zone; it's set some labels, right. This is the annotation that happens to allow it to proceed to the next step, and what we're investigating right now is how cloud providers do labeling and annotation in general, and making sure that we can come to some consensus about what the source of truth should be if we're leveraging the Azure cloud provider and the cluster.
C
So, things like: have you provided your own resource group? All right — so the resource group would say, like, managed is no, right. So when the provider sees it, it goes, like, okay, I'm not going to bother to create this resource group, because it should already exist; I'm gonna plop all the resources into that group instead. So, lots of, like, nice little information about trying to do this without — I don't think there is anything super — okay, right.
C
Yeah, for the machine that is about to die — but anyhow, again, this is the stuff that was provided in the spec: the offer, publisher, SKU, version for the images, the location, the disk sizes, and the OS type, right. So, again, all the things that you would need to create resources within Azure — and we should see that, boom, done. I said 20 minutes; we got 16 minutes. Yay.
C
So we've got the actuator, right, and the actuator kind of kicks off the flow: you create a new actuator, and it has a reconcile pattern. There's a scope, right, and the scope is kind of cool, 'cause your scope is going to kind of slurp up the provider spec and status; it's going to grab an authorizer that will be used to authorize against Azure; it checks for your subscription ID within the environment; and then what it'll spit out for you is a set of Azure clients, right.
C
So these Azure clients here are — come on — okay. So, again, it includes that scope of the subscription ID and authorizer — I forgot that we refactored that — and then, a little lower down, we see that we have the cluster client, right: the interface to do actions over the cluster, the kubeconfig that we were talking about, and then the provider spec and the provider status, right.
C
So we've got some of these methods that are made available operating over the scope: being able to pull out information about the network, right. So, like I was saying about the network — it's: how do we want to describe what an Azure network looks like? So right now we describe it as: I've got some security groups, I've got a load —
C
— balancer, I've got a VNet, right? Pretty simple — these are the only things that I need to care about within the context of the cluster. But, you know, we're also able to pull out other information, like the subnets, the VNet, the security groups: name, namespace, location, right. And then, finally, we've got these methods to store the cluster config and the status, and a Close method which operates over that stuff — so you'll see that within the actuators we'll do a `defer Close` at the beginning, or close to the beginning.
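The `defer Close` pattern described here — open a scope, mutate status during the reconcile, and let Close persist whatever changed — can be sketched with toy types. ClusterScope, its fields, and the in-memory "store" are all illustrative stand-ins, not the provider's real implementation:

```go
package main

import "fmt"

// ClusterScope is a toy stand-in for the provider's scope: it holds
// mutable provider status and persists it on Close.
type ClusterScope struct {
	Status map[string]string // mutated during reconcile
	stored map[string]string // where Close "persists" the status
}

// Close persists whatever the reconcile changed, mirroring the
// defer scope.Close() pattern from the talk.
func (s *ClusterScope) Close() {
	s.stored = map[string]string{}
	for k, v := range s.Status {
		s.stored[k] = v
	}
}

func reconcile(s *ClusterScope) {
	defer s.Close() // runs after the reconcile body, success or failure
	s.Status["apiEndpoint"] = "10.0.0.4"
}

func main() {
	s := &ClusterScope{Status: map[string]string{}}
	reconcile(s)
	fmt.Println(s.stored["apiEndpoint"])
}
```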
C
Right — and it'll kick off a reconciler, and once this reconcile is successful, it'll hit this Close, and that Close will make sure that I've stored information about the cluster config — anything that I've changed along the way during the reconciliation — and the cluster status, right.
C
So this reconcile — this is where the magic happens, all right. So, again, this reconciler struct is an instantiation of a bunch of these Azure services that I need to operate over the cluster, and then the reconciler is like: okay, let's actually create a new service of each of these individual services.
C
It will check to see if it can grab the API server IP, right, because we need this for various things — like pushing this into certs, being able to generate the kubeconfig, being able to know how to access the cluster in general. So each of these runs a create-or-update, all right — so I'm going to try to create or update these certificates.
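The flow just described — a reconciler walking a list of services (certificates, virtual networks, subnets, ...) and calling an idempotent create-or-update on each — can be sketched like this. The Service interface and names are assumptions for illustration, not the provider's actual interfaces:

```go
package main

import "fmt"

// Service is a hypothetical interface matching the flow described:
// each Azure service exposes an idempotent create-or-update.
type Service interface {
	Name() string
	CreateOrUpdate() error
}

// namedService is a trivial Service used for the demonstration.
type namedService struct{ name string }

func (s namedService) Name() string          { return s.name }
func (s namedService) CreateOrUpdate() error { return nil }

// reconcile walks the services in dependency order, stopping at the
// first failure so the next resync can retry from there.
func reconcile(services []Service) error {
	for _, svc := range services {
		if err := svc.CreateOrUpdate(); err != nil {
			return fmt.Errorf("reconcile %s: %v", svc.Name(), err)
		}
	}
	return nil
}

func main() {
	svcs := []Service{
		namedService{"certificates"},
		namedService{"virtualnetworks"},
		namedService{"subnets"},
	}
	fmt.Println(reconcile(svcs))
}
```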
C
And then, you know, afterwards we have, like, some converters, right — our one converter right now, which is commented up the wazoo — which basically takes the known Azure type, right — the compute virtual machine, with all of this fun pointer stuff to handle — and will return an internal representation of what a VM is to us. So the way we define a VM is slightly different from the way Azure defines it, and part of the reason for that is, like, one —
C
— we don't need all the bells and whistles that Azure has to represent what a VM is, and two, it's nice if we have strings that we can pass around — strings instead, right, or things that break down into, like, easy types to handle instead of pointers. Based on, like, virtual machine extensions, I've got to do some gymnastics to represent that in the code, and I'd prefer not to do that. So we have converters here, and basically these converters will —
C
Some of this is not wired up yet, but essentially what the converter is supposed to do is, one, convert to our concrete internal representation of what that type should be — and then — so, Ace, the stuff that you were running into, if you're still on the call, is within this zz_generated deep-copy file. All right, so a bunch of the types will expose, like, DeepCopy and DeepCopyInto methods for each of these types.
C
All right — so, after the conversion, then I want to deep-copy that information back into the scope, so that the next reconcile that runs is like: oh, I have the ID — okay, cool, I don't need to do anything, right? You know — so, being able to store that information and have, like, a true view of the world, instead of — so we can minimize the calls that we make to Azure by doing that. All right.
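The converter idea — turning a pointer-heavy SDK-style type into a flat internal representation that is easy to pass around — can be sketched with toy types. sdkVM imitates the Azure SDK style described in the talk; it is not the real SDK type:

```go
package main

import "fmt"

// sdkVM imitates the Azure SDK style the talk complains about:
// every field is a pointer.
type sdkVM struct {
	Name   *string
	VMSize *string
}

// VM is the flat internal representation: plain strings, as described.
type VM struct {
	Name   string
	VMSize string
}

// convertVM dereferences what is set and leaves zero values otherwise,
// so callers never have to juggle pointers.
func convertVM(in sdkVM) VM {
	out := VM{}
	if in.Name != nil {
		out.Name = *in.Name
	}
	if in.VMSize != nil {
		out.VMSize = *in.VMSize
	}
	return out
}

func main() {
	name, size := "rainbow-sparkles-vm", "Standard_B2ms"
	fmt.Println(convertVM(sdkVM{Name: &name, VMSize: &size}))
}
```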
B
Right, right — ideally, we would — would you have, like, one cluster that's kind of our control plane, or would we just, like, bootstrap all of them individually using kind? So I'm curious why you're saying there's — like, what is the idea? Would that just be to bootstrap, like, kind of a control plane? So, yeah.
C
So — while there are two parts to it, only one of them is happening within the clusterctl run, right, and that's the pivot from kind into this new cluster. From there, if you decided that cluster is going to be your management cluster for everything else, totally fine — then you don't need to deal with kind anymore. The same way we did a generation of the manifests, right — you generate the manifests, and you run clusterctl on the target cluster instead.
C
I think the next thing that we will want to attack is — like, overall, we need increased test coverage, right. I need to write end-to-end tests, because sometimes I can only discover these things after, like: oh well, let me try that out by building a cluster. So all these things that we just went through, I would have to do —
C
Well, what do we have to do to be, like, a sane implementation of this provider? So: definitely increase the test coverage; definitely start to do more exploration around availability zones, right — so making sure that everything that we build is zone-redundant. I think that's mostly true right now, or if not totally true already. And I think that there needs to be some flexibility in the way people deploy, right.
C
So, trying to dredge up the common patterns for users, right. So, one being: I might not have access to create a resource group, based on my company's policy, right? So we need to support existing resource groups. And I may not want to build the network myself — or we may already have a network built, and I want to supply it over time, right.
C
If we support existing VNets, and we say the VNet's provided, right — then are we still going to create the subnets? If we are going to create the subnets, that means I need to figure out what the CIDR block is for the VNet, and then do some magic — do some subnet math — to understand, like, okay, well, how am I going to structure these subnets, right?
C
What's going on? It just says, like, net config: invalid subnet, right — and not the fact that the subnet range overlaps with something else. So, starting to do things like that — that will tease out some of the common user use cases. I think, like, existing resource groups and existing VNets are two big things to handle next, and those are things that I'm trying to land before the 0.2.0 release, and then I think we can improve some of the tooling.
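The "subnet math" mentioned here — for example, detecting that a proposed subnet range overlaps an existing one, rather than surfacing a bare "invalid subnet" error — can be done with the standard library's net package. A minimal sketch; the function name is mine:

```go
package main

import (
	"fmt"
	"net"
)

// cidrsOverlap reports whether two CIDR blocks share any addresses.
// Two CIDR-aligned networks either are disjoint or one contains the
// other, so it suffices to check each base address against the other
// network.
func cidrsOverlap(a, b string) (bool, error) {
	_, na, err := net.ParseCIDR(a)
	if err != nil {
		return false, err
	}
	_, nb, err := net.ParseCIDR(b)
	if err != nil {
		return false, err
	}
	return na.Contains(nb.IP) || nb.Contains(na.IP), nil
}

func main() {
	// The /24 sits inside the /16, so these two ranges overlap.
	ok, err := cidrsOverlap("10.0.0.0/16", "10.0.1.0/24")
	fmt.Println(ok, err)
}
```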
C
We want to do some of that stuff too, and then we also want to figure out, like, what we can get away with — like, the level of permissions that we can get away with granting to the user. Right now we're assuming that you're coming to the table with a service principal that has contributor access, right — and, like, however you get that is an exercise left to the user — but we're assuming that you have contributor access. Again, that's not the state of the world, right?
C
You won't always have contributor access on service principals, right, so we need to be able to structure policies and figure out exactly all of the different permissions that we need — like, we need to create VNets, we need to create subnets — so going down that ladder and kind of, like, enumerating those cases, and then building a tighter security model around some of that stuff. But I actually forgot what the question was; I just started yammering.
C
It depends — "it depends" is the answer. Part of our current issue is that we don't have a true internal representation for all of the fields that we need in order to determine that, right — so, the things I was talking about, like supporting existing resource groups and supporting existing VNets. Right now a resource group exists as a string, which means I can't, like, encode any cool Azure information about it; it's just, like, "rainbow-sparkles" — cool, okay. So the next PR that I have coming is like: okay —
C
— if managed is set to yes, then it'll say, okay, well, I need to create this; if managed is set to no, then it will go, okay, well, we're gonna skip this entire flow, because you said you didn't want me to do anything about it, right. So it will depend on whether you decided to provide that resource or not. And then, secondarily, I think there's a balance between giving people levers — or knobs to tweak — and providing, like, a consistent framework to build something, right.
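The managed-yes/managed-no branch just described can be sketched as a struct that carries a Managed flag instead of the bare string the speaker says exists today. This shape is hypothetical — it mirrors the described upcoming change, not the merged code:

```go
package main

import "fmt"

// ResourceGroup replaces the bare string described in the talk with a
// struct that can carry a Managed flag. Hypothetical shape.
type ResourceGroup struct {
	Name    string
	Managed bool
}

// reconcileResourceGroup skips the whole creation flow when the user
// brought their own group, mirroring the branch described in the talk.
func reconcileResourceGroup(rg ResourceGroup) string {
	if !rg.Managed {
		return "skip: user-provided resource group " + rg.Name
	}
	return "create: " + rg.Name
}

func main() {
	fmt.Println(reconcileResourceGroup(ResourceGroup{Name: "rainbow-sparkles", Managed: false}))
	fmt.Println(reconcileResourceGroup(ResourceGroup{Name: "capz-demo", Managed: true}))
}
```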
C
So there are certain things that we're never going to allow you to mess around with, right. So if we decide that you are going to bring your own VNet — guess what, we're not going to mess around with your security groups, we're not going to try to mess around with the subnets, we're not going to mess around with anything related to that, right. So we need to be able to —
C
We need to be able to break out of those cases quickly within the code, right. And then there are certain things where we're like: okay, we'll absolutely have to manage these, because if we don't, like, you won't get a cluster, right? So I think we're still investigating how to strike the balance between those two things. I think the resource groups and VNets are a good first target, because those are the things that usually come up first. Cool.
A
All right — well, thank you so much for giving us such a deep dive and a demo into the Azure provider. That was awesome. Again, really great work getting this across to this — not even the finish line; we're not done, right? There's always gonna be work to do. But getting it to this point — that was awesome.