A
Howdy, everyone. Well, if that countdown didn't get your morning, afternoon, or evening started, then maybe another cup of coffee might do. Thank you all for joining us today. Welcome to Cloud Native Live, where we dive into the code behind cloud native. I'm Taylor Dolezal, a senior developer advocate at HashiCorp focused on all things infrastructure, application delivery, and developer experience.

Every week we bring on a new set of presenters to showcase how to work with cloud native technologies. They will build things, they will break things, and they will answer your questions. Join us every Wednesday at 11 a.m. Eastern time. This week we have Jason DeTiberus here with us to talk about Cluster API with a bit of pixie dust.

I've got to say I'm looking forward to this magical presentation today. Also join us for KubeCon + CloudNativeCon North America Virtual, from October 11th to the 15th, to hear the latest from the cloud native community. This is an official live stream of the CNCF and, as such, is subject to the CNCF Code of Conduct.
B
Yeah, so as Taylor said, my name is Jason DeTiberus. I'm a principal software engineer at Equinix Metal, the cloud provider formerly known as Packet.

Some folks in the community may know that name a little better. I've been working on infrastructure, specifically Kubernetes infrastructure, since about 2015, and for the last year I've been working primarily on how we enable that infrastructure in a data center in a more cloud-native way.
How can we take these experiences for managing Kubernetes clusters in the cloud and actually apply them in a data center on bare metal, whether it's a bare metal cloud provider like we have at Equinix Metal or your own data center? So let me take you into my data center here, which I refer to as the Timber One.

What we have here is five small form factor AMD-based machines, and on the right is just a little mini-ITX box that's running Tinkerbell; we'll get into what Tinkerbell is in a moment. Hopefully, by the end of the demo I have today, these five small form factor machines will become part of an actual Kubernetes cluster. There's nothing on them to start with, and we should be able to bootstrap them.
We should be able to go from zero to Kubernetes relatively quickly today, assuming everything works. If not, we can dive into how to troubleshoot this, and more of what's going on along the way. But that's kind of the overview.
We can go ahead and skip past a couple of these, but the first technology to talk about today, the one that underlies how we're going to manage these physical machines, is Cluster API. If we want to distill it down to the basics, Cluster API is a project sponsored by Kubernetes SIG Cluster Lifecycle.
That's the special interest group in Kubernetes dedicated to improving the lifecycle management of Kubernetes clusters in general. The goal of the project is to provide a declarative set of APIs, similar to what Kubernetes provides for applications, but applied to the infrastructure management that you need for running Kubernetes clusters themselves: installation, upgrade, and anything else you need to do to tweak the configuration of a running Kubernetes cluster from the infrastructure side.
The way it works is that we actually use Kubernetes to manage Kubernetes. For Cluster API, there is a Kubernetes cluster running somewhere; you deploy the Cluster API components to it, and you define the cluster like you would any other Kubernetes resource. It's basically just a big old ball of YAML: you throw it at the API server, and out the other end you get a Kubernetes cluster. That cluster could be running on any of the various supported infrastructure providers, anything from AWS to vSphere to Equinix Metal, and now even Tinkerbell for running on bare metal. I think that's basically all I want to cover for Cluster API, but if anybody has questions as we go along, we can easily dive deeper into that.
The other project that I've already mentioned is Tinkerbell. Tinkerbell is basically trying to take what Kubernetes did and apply that API-centric management to actual physical infrastructure in a data center. A lot of the existing tools in the infrastructure management space, the ones that provision OSes on machines and boot machines over the network, were designed before cloud native was really a thing. Tinkerbell is trying to take those cloud native approaches and apply them to that infrastructure space. Because we're building on top of Tinkerbell, I'll go into it a little bit more: there are several different microservices that make up what we call Tinkerbell.
The main one is the actual Tinkerbell workflow engine itself. This is what runs underneath everything: it connects to the data store and provides the basic API for interacting with the hardware. There are three basic resources in Tinkerbell: a hardware, a template, and a workflow.
The hardware is the description of the infrastructure that you're going to manage. You point it towards the actual machine: you give it the MAC addresses for the machine, what IP addresses you expect that machine to have, and some other basic definitions. We can dive into that when we get into the demo, too.
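As a rough illustration of what such a hardware definition can look like: the JSON below follows the tink-CLI format of this era, with made-up ID, MAC, and IP values; both the schema details and the CLI invocation are assumptions that may differ between Tinkerbell versions.

```bash
# Hypothetical Tinkerbell hardware definition; every concrete value here
# (ID, MAC, IPs) is invented for illustration.
cat > hardware-a.json <<'EOF'
{
  "id": "00000000-0000-0000-0000-00000000000a",
  "metadata": {"facility": {"facility_code": "onprem"}, "instance": {}},
  "network": {
    "interfaces": [
      {
        "dhcp": {
          "arch": "x86_64",
          "mac": "aa:bb:cc:dd:ee:0a",
          "ip": {
            "address": "192.168.1.101",
            "netmask": "255.255.255.0",
            "gateway": "192.168.1.1"
          }
        },
        "netboot": {"allow_pxe": true, "allow_workflow": true}
      }
    ]
  }
}
EOF
tink hardware push < hardware-a.json   # CLI shape assumed; check your tink version
```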
So it's basically a description of the hardware that Tinkerbell is expected to manage. Then you define a set of actions that you want to perform on that hardware through what's called a template, and that template basically just says: do this step, do that step, until you get to the end of whatever it is you want to do.
In the general case we talk about provisioning OSes on machines; in this case we're talking about provisioning Kubernetes on a machine. But it doesn't even need to be that: you can define other tasks that you might need to run on infrastructure, like updating the firmware on all the machines that you have in the data center. You could define those types of actions.
The other primitive, the workflow, basically just takes those other two primitives and ties them together so that Tinkerbell knows there's an actionable thing to do. The workflow just says: take this template, apply this hardware to it, and go do the thing. That's where the other microservices start to get involved as well.
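A hedged sketch of how a template and a workflow pair up is below. The action image and its environment variables mirror the image2disk-style actions from the Tinkerbell hub, but the template fields and CLI flags are assumptions that may vary by release; the IDs and MAC are placeholders.

```bash
# Hypothetical template: one task with a single "stream an image to disk" action.
cat > ubuntu-install.tmpl <<'EOF'
version: "0.1"
name: ubuntu_provisioning
global_timeout: 1800
tasks:
  - name: os-installation
    worker: "{{.device_1}}"
    actions:
      - name: stream-image
        image: quay.io/tinkerbell-actions/image2disk:v1.0.0
        timeout: 600
        environment:
          IMG_URL: "http://192.168.1.1:8080/ubuntu-2004-kube-v1.18.15.gz"
          DEST_DISK: /dev/sda        # comes from the hardware's disk definition
          COMPRESSED: "true"
EOF
tink template create --file ubuntu-install.tmpl        # flags assumed
# The workflow binds the template to a concrete piece of hardware by MAC:
tink workflow create --template <template-id> \
  --hardware '{"device_1": "aa:bb:cc:dd:ee:0a"}'       # flags assumed
```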
I mentioned the Tinkerbell workflow engine; there's also a boot service, Boots. This provides the basic DHCP and PXE booting services that are needed by Tinkerbell, so it handles IP addressing for the hosts based on what you defined in Tinkerbell.
It also provides a way to network boot those hosts into a minimal OS environment that we can then run the Tinkerbell workflows from. There's also Hegel, the metadata service, and we leverage this quite heavily with the Cluster API integration, because in general, when you're bootstrapping a Cluster API cluster, it expects that you can put user data somewhere and have the machine retrieve it at boot.
That's instead of just the basic PXE boot that a lot of people are familiar with. The other component is the actual worker, which is basically a client that connects to Tinkerbell and asks: do I have any workflows assigned, and what are they? It then goes off and executes the actions that make up those steps. That's all run in what we've generally called a minimal operating system installation environment, which is just a slimmed-down OS that does nothing but boot up, run that Tinkerbell worker, contact Tinkerbell, and do what it needs to do.
The one other project I'm going to bring up today, which I haven't mentioned yet: because we're talking about Kubernetes clusters, one of the biggest requirements is some way to have a persistent endpoint that stays constant for the life of the cluster, whether that's a load balancer of some kind or a virtual IP address that can be migrated between the machines.
There are various types of virtual IP mechanisms that run on the hosts, or you can go through a load balancer. With a lot of the things that we do in the Tinkerbell space, we try to keep things as simple and as minimal as possible: what's the minimal way we can provide exactly the functionality that we need without taking on any additional management?
We wanted to avoid things like spinning up an HAProxy load balancer and managing the backend endpoints on it. That's where kube-vip comes in. It's basically a project that manages virtual IPs, either for the Kubernetes control plane or for services of type LoadBalancer. In our case we're doing the control plane load balancing, and it does this through a couple of different mechanisms.
It can either do ARP advertisements of the virtual IP address, or it can do a BGP-based configuration and actually publish the address out to different BGP peers, which enables full active-active load balancing behind BGP. For the demo today we're using the ARP-based approach, just to keep things simple in the environment I have here.
It didn't require setting up a BGP server and publishing those routes anywhere to enable access. So, in my case, I can just do the ARP advertisements and be done with it.
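For reference, a kube-vip control-plane VIP in ARP mode is typically configured as a static pod along the lines below. This is a sketch: the image tag, interface, and VIP are placeholders, and the environment variable names follow kube-vip's documented convention rather than the exact manifest used in the demo.

```bash
# Sketch of a kube-vip static pod in ARP mode (placeholders throughout);
# written on a control-plane node so kubelet runs it as a static pod.
cat > /etc/kubernetes/manifests/kube-vip.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
    - name: kube-vip
      image: ghcr.io/kube-vip/kube-vip:v0.3.8   # tag is illustrative
      args: ["manager"]
      env:
        - {name: vip_arp, value: "true"}           # ARP mode instead of BGP
        - {name: vip_interface, value: "eth0"}     # NIC carrying the VIP
        - {name: address, value: "192.168.1.110"}  # the control-plane VIP
        - {name: cp_enable, value: "true"}         # load-balance the control plane
        - {name: vip_leaderelection, value: "true"}
      securityContext:
        capabilities:
          add: ["NET_ADMIN", "NET_RAW"]
EOF
```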
In this link, there are also links out to all of these different projects and to the demo script that I'll be running through today, because hopefully, if everything works well, I'll be able to run through actually creating a cluster.
B
Yeah, and like I said, the idea here is that the Cluster API project has shown it can create Kubernetes clusters and make them much easier to manage across various cloud providers, and there have been a few attempts in the past at doing that in data centers as well. With our approach with Tinkerbell, though, we wanted to take as cloud-native an approach as we could: try not to shoehorn cloud native management through Cluster API into traditional data center management, but ask how we can do real cloud native management in the data center.

In this environment right now, I've already stood up Cluster API in a local kind cluster, and I've got the cluster-api-provider-tinkerbell pieces already installed, basically just to save the time of bootstrapping that bit.
If I come in here, the first thing you'll notice is that there are three entries down here related to Tinkerbell: hardware, templates, and workflows. These basically coincide with the hardware, templates, and workflows that I discussed earlier; this just exposes them through a thin shim layer over the Kubernetes API, instead of having to talk directly to Tinkerbell. It gives me the opportunity, as we start defining these things, to look at the status of them.
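To see that shim from the kind cluster's side, an ordinary discovery command lists the mirrored types (exact group names depend on the provider version):

```bash
# List the Tinkerbell CRDs that the shim layer registers with Kubernetes.
kubectl api-resources | grep -iE 'tinkerbell|hardware|template|workflow'
```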
I can do that through Kubernetes, instead of having to switch back and forth to Tinkerbell. We also have some resources related to Cluster API for Tinkerbell specifically: TinkerbellCluster, TinkerbellMachine, and TinkerbellMachineTemplate, which coincide with the similar types exposed by the other infrastructure providers.
There are five hardware resources here, related to the small form factor machines that I have over here. If I switch back over to the hardware cam, the boxes there are labeled A, B, C, D, and E from bottom to top. I've also cheated a little bit and given their IDs suffixes that correspond to those labels, so that I can more easily identify which pieces of hardware we need to toggle as we go along.
These hardware resources show up in Kubernetes, and if we describe one, we'll get some more information about it. This is all stuff that I've predefined within Tinkerbell for the purposes of the Cluster API integration. If I take this one in particular, what we see is that there's a spec ID, which associates it with the ID of the resource within Tinkerbell itself, and then the status has been populated with the actual information that was defined in Tinkerbell.
There's some basic metadata that cloud-init is going to be able to use to pre-populate things as we bootstrap. We've declared that we do want this machine to be able to PXE boot and that we want to allow it to run workflows, and we've defined an actual IP address for it. This is the MAC address associated with the actual hardware, and that's what really ties everything together: the request is going to come in saying it's coming from this MAC address.
The other important thing is that I've defined a disk device here as well. That's important because when we go to actually bootstrap this machine, we need to know where to write that information, and that can vary from device to device. So currently we require people to predefine it; eventually we'll add support to auto-detect this and populate it as needed.
The challenge is that we can't just assume the first block device we find is something we can write to. If you look at some Arm hardware, for instance, sometimes you'll have an SD card that contains the actual firmware that you're bootstrapping the device with, and if you overwrite that, you've just broken all booting of that machine. So in this case, you tell it where you want it to deploy to as a prerequisite.
I see somebody asked about BMC support via PBnJ. We haven't integrated PBnJ into this workflow yet, but we do plan on adding that in the future.
The hardware that I have here actually wouldn't even work with PBnJ, because its remote management is DASH-based, and that's a whole other specification outside of things like IPMI and Redfish that are supported by PBnJ. But server-class hardware is supported by PBnJ, and we can integrate that into those workflows.
The main thing that's been stopping us is that, as we integrate it into the standard upstream reference architecture (for lack of a better term) for Tinkerbell and into the Cluster API integration, we want to make sure we're doing it in the best way possible, because there are various ways we could add support for it. We could add workflows that go off and process the PBnJ-type requests.
We could also build it in more automated fashions, either through the Cluster API integration or through Tinkerbell itself, and we want to make sure that we're doing that integration properly instead of rushing out the most expedient solution.
I'm going to copy and paste here again, and I'll describe exactly what I'm running before I do. For anybody who's already familiar with Cluster API, there are pre-published templates for the Cluster API resources needed to make a cluster. We're leveraging the same thing here, and we're specifying some specific variables that get plugged in there in order to actually create the machine.
The first one is the control plane VIP; this is the IP address that's going to be managed by kube-vip and migrated around the cluster as needed. We're also specifying the pod CIDR, which is actually important for my demo use case, because the default pod CIDR would conflict with the physical networking that I have for this lab environment.
The other things are just the basics: we're saying to start with a single control plane machine and not to create any worker machines, which is just to serialize the startup a little bit for our purposes, because I have to toggle these machines somewhat manually. We're also specifying the Kubernetes version, and what that does is select the image: we've actually pre-built images with the Kubernetes components already in them.
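Put together, the copy-paste step is roughly a clusterctl template render with those variables exported first. The variable names here are assumptions that mirror what Jason describes, and the values are placeholders; the clusterctl flags are the standard ones.

```bash
# Hedged sketch of rendering the pre-published Tinkerbell cluster template.
export CONTROL_PLANE_VIP=192.168.1.110   # VIP for kube-vip to manage (placeholder)
export POD_CIDR=172.25.0.0/16            # avoid clashing with the lab's physical network
clusterctl config cluster demo \
  --infrastructure tinkerbell \
  --kubernetes-version v1.18.15 \
  --control-plane-machine-count 1 \
  --worker-machine-count 0 | kubectl apply -f -
```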
Pre-building images like this is something most of the other cloud provider implementations for Cluster API have done, and we followed suit. Right now, for the way things are configured in this demo environment, that image sits on a web server on the Tinkerbell host that I have here.
So that's the version of Kubernetes for the image that I previously built. In the future we're looking to move to OCI-registry-based distribution of the operating system images that contain Kubernetes. Once we do that, we'll be able to stream them live from the web pretty much anywhere, and I'd have a little more flexibility in which Kubernetes version I was using. But right now, this is the only version that I have an image available for.
All right, so at this point I've created the cluster, and I can basically run kubectl get cluster-api. Here we'll get everything that's associated with the cluster-api category, so this is all of the main Cluster API resources, plus the Tinkerbell Cluster API resources that I've defined as well. We can see we have a few different resources: we have the cluster, and we have some machines.
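That one-liner works because the CRDs share an API category, so a single command fans out across all of them:

```bash
# "cluster-api" is a category on the CRDs, so this lists Clusters, Machines,
# KubeadmControlPlanes, TinkerbellClusters, TinkerbellMachines, and so on.
kubectl get cluster-api
```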
We have this KubeadmControlPlane, which actually manages the control plane for us based on kubeadm, another SIG Cluster Lifecycle project for helping bootstrap clusters. We can also see that it's tried to create one replica; that replica is up to date according to the configuration, but it's unavailable right now, because I haven't actually turned on the machine. It hasn't bootstrapped yet.
The other thing we can see is this TinkerbellMachine. This is the bit that's actually associating a Cluster API machine with the actual Tinkerbell infrastructure, and it tells me that it was assigned this instance ID, which basically gives me the UID of that hardware device. Because I cheated a little bit with the UID creation, I know that's related to the hardware "D" box that I have over here.
So if I switch over here... and my VM locked up. In this case, instead of triggering it remotely, I'll just go over and power it on real quick by hand.
All right, so that's coming up now. Unfortunately, I can't even redirect the text console, because that Windows machine is locked up right now, but I can tell you what it's doing. It's attempting to PXE boot against Boots; it's going to get that minimal operating system installation environment image, run it, and start executing the Tinkerbell workflow, which is the more important thing. I can show you what Cluster API created as far as that workflow goes.
Let's describe the workflow. Because I've serialized the creation with just one control plane instance right now, I can just describe "workflow" without having to specify which one, and we can see that these are the individual tasks that it's going to run as it goes along.
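The inspection itself is ordinary kubectl against the shim resources:

```bash
kubectl get workflow        # only one workflow exists at this point
kubectl describe workflow   # so no name argument is needed
```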
Let's see... all right, yep. The first task that it's going to run will stream the image; this is that pre-created image I mentioned. It's templating out the URL based on some configuration I've given it: I told it that the Tinkerbell host can be found at 192.168.1.1:8080, so that's filling that in for us.
I've told it that I want to use Ubuntu 20.04, through the resource that I created from that template, and I told it I wanted Kubernetes version 1.18.15. The important part here is the destination disk: this is actually using the data that we pre-populated in the hardware to fill this in.
We specify this link-local address, which I configured this Tinkerbell machine to listen on, and port 50061, the Hegel default port, so it's going to contact the metadata server that we have set up. I've also given it a basic default user, so that if I wanted to, I could define the cluster in a way that lets me inject my SSH key through the cluster configuration and access this machine remotely.
What we'll see here is a lot more information: this is all the information that Cluster API created for bootstrapping this cluster, and it put it in the user data section of the metadata. So when cloud-init runs, it's going to find this and execute the script, just as if it were running on one of the other "real" clouds, so to speak.
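From the booting node's perspective, fetching that metadata might look like the following. The EC2-style paths are an assumption based on Hegel's cloud-init compatibility, 50061 is Hegel's default HTTP port, and the link-local address is assumed to be the standard metadata one:

```bash
# Hypothetical queries against the Hegel metadata service from the new node.
curl http://169.254.169.254:50061/2009-04-04/meta-data/   # EC2-style metadata tree (assumed path)
curl http://169.254.169.254:50061/2009-04-04/user-data    # the CAPI-generated bootstrap script
```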
A
As it goes with live demos, they always like to throw a wrench in the plans at the best of times.
B
So that is fun. What would have happened here is that this machine would have bootstrapped up; it would have configured kube-vip as part of that bootstrap process, with that virtual IP that we defined, and at that point this machine would be up.
We could apply a CNI, and then we'd be able to scale the cluster up. But it's kind of hard to proceed from this point without breaking out a keyboard and monitor, which is going to be a bit awkward, especially since that Windows VM handling what little remote management I can do on these machines locked up on me.
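Had the machine come up, the remaining steps would have been along these lines; the kubeconfig retrieval, the Calico manifest URL, and the resource names are illustrative choices, not necessarily what the demo used:

```bash
# Fetch the workload cluster's kubeconfig, install a CNI, then scale out.
clusterctl get kubeconfig demo > demo.kubeconfig                   # name assumed
kubectl --kubeconfig=demo.kubeconfig apply \
  -f https://docs.projectcalico.org/manifests/calico.yaml          # any CNI works here
kubectl scale kubeadmcontrolplane demo-control-plane --replicas=3  # grow the control plane
kubectl scale machinedeployment demo-md-0 --replicas=2             # add workers
```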
B
I
could
do
on
these
machines,
so,
yes,
I
would
gladly
be
able
to
chat
offline
about
you
know
how
do
we
enable
the
remote
management
of
things
like
power
and
boot
order
for
the
machines,
especially
with
concerns
around
ipmi
things
like
that?
B
B
You can configure things on the firmware side that most folks in a data center probably aren't doing or don't want to do. So as we start adding things like PBnJ support, we want to make sure we're not just supporting the overly opinionated environment that we care about internally, but also the things folks are going to hit in the real world.
That's likely to mean supporting some of the consumer-based hardware, whether it's these AMD-based boxes with DASH or the Intel-based stuff with its management firmware.
With that, it's obviously kind of hard to show the actual HA capabilities here, but kube-vip basically acts like a Kubernetes controller.
It keeps an eye on things and uses leader election to determine which machine is primary, at least in the ARP case, which is what we have configured here. Whichever instance is able to connect to the local API server and declare itself the leader is the one that's going to send the ARP advertisements for the VIP that we've defined, and that ensures there's minimal interruption of that IP.
For folks that want more highly available setups, with the ability to scale requests out across multiple API servers, the BGP-based configuration is there for that, and there's also been additional support added for being able to update DHCP as well.
So even if you don't want to deal with BGP and you have access to DHCP, that's another alternative. But we also want to make sure that we don't make things overly opinionated and only support kube-vip. We want to support other types of load balancers as well, and as Cluster API figures out how to do proper load balancer support across different providers, we'll be able to integrate with that and consume it.
B
There's
other
providers
that
have
multiple
load,
balancer
options
and
some
of
those
are,
you
know
easily
applicable
to
other
providers
as
well,
especially
the
in
data
center
ones,
whether
it's,
the
openstack
provider
or
the
vsphere
provider
or
tinkerbell
metal
cubes.
All
of
those
so
we
want
to.
B
B
So we're looking at adopting that as well for this integration, which means one less requirement for bootstrapping the environment to get started. It simplifies the management, and it will give us the ability to have public images available, similar to the AWS provider and some of the other ones. Not that we would recommend folks use those in production environments.
We want folks to create their own images and do the kinds of verification, with the actual workloads they'll be running on the cluster, that ensure everything works the way they want. But having those images available for the initial PoC or demo use case would greatly simplify standing up the environment.
A
One question that I had for you there, Jason: I've liked how you've gone through and described each of these abstractions with this project, with Cluster API, with Tinkerbell. Were there any abstractions you came across that were intentional at first, and were there some that you really didn't see coming, where you thought, "we should draw a line here," or had to draw out that abstraction when it came to composing this project and working with these two?
B
Yeah, so I think I briefly touched on that a bit with the load balancer and trying to add that as a first-class citizen within Cluster API. When we started, part of it wasn't necessarily not seeing the need for those abstractions, but trying to limit the complexity and the amount of work it takes for an initial implementation to land.
We tried to stay very minimal with the Cluster API abstractions, especially in the early days, but we tried not to be overly simplistic at the same time. The idea wasn't to define an arbitrary "one cloud abstraction to rule them all," because that's been tried in various places in the past, and every place it's worked, you end up with a least common denominator.
B
You
know
working
good,
for
you
know
generally
a
demo
poc
use
case
and
very
limited
kind
of
production
use
cases,
and
you
know
generally
folks,
pretty
quickly
outgrow
those
super.
You
know
limited
abstractions
when
you
try
to
attract
them
across
all
providers,
so
you
know
that's
the
reason
why
you
know
when
we
define
the
resources.
B
You
know
there's
this
tinkerbell
machine
in
addition
to
a
machine
resource,
we
didn't
want
to
try
to
abstract
things
away
too
much
with
the
with
the
way
that
we
were
defining
things.
But at the same point, there were plenty of us who talked about how we support use cases outside of the cloud, and some of the things that are going to be important there, especially if we don't want every single on-premises deployment to have to define its own particular cloud provider. We took that limitation at first to try to get the project bootstrapped, and in some cases to simplify things.
It simplifies the conversations you're having to reach consensus among contributors; the more you can throw out of those early conversations, the easier it is to create consensus and get started. But as we start seeing more adoption in other places, some of those ideas are starting to bubble up and become bigger pain points.
So in addition to the proposal that's out there being worked on for adding load balancer support, there are other folks looking to add things like IP address management support to Cluster API proper, which is something that will come in really handy for us on the Tinkerbell side, because right now we require people to pre-define the IP addresses on all these hardware devices. If we can integrate with some type of authoritative IP address management solution, that would give us the ability to be a bit more flexible there. There are other places where I see room for abstraction as well.
Especially on the data center front: being able to configure firewalls and the equivalent of security groups in the clouds, being able to define those types of things to provide more granular external restriction of which devices can communicate with which other devices. That sort of thing is going to be different for pretty much every on-premises environment, depending on what vendor they go through for that type of solution, or whether, in some cases now with software-defined networking, they've written their own solution in that area.
The other aspect, too, is more granular control over networking. In the major cloud providers you can generally be opinionated about those things; in AWS everything is VPC-based now. But when you start looking at things like OpenStack or vSphere or Tinkerbell, what's going to be available networking-wise to provide that kind of automated networking configuration is a lot more diverse than with the major cloud providers. So we'll eventually want to support those types of abstractions as well, and whether those are proper abstractions within Cluster API that are shared globally, or external abstractions that are shared between multiple providers, that's yet to be seen.
But I fully expect that at some point we're going to see it. In an ideal world, I would love the ability for folks to treat their data centers like AWS does, like Google does, like any of the large operators: they should be able to just rack and stack machines, have them automatically become available, and have them used as needed.
If you have a hardware fault, you're not troubleshooting down individual hardware faults within an individual device; you're basically ripping and replacing that device, because a person's time and labor is cheaper spent replacing the physical hardware than troubleshooting down a failed RAM module. And beyond that, provide things like virtual network isolation, so that when I say "turn these machines into a Kubernetes cluster," they get deployed on their own separate VLAN with the proper ingress and egress routing that it needs. That would provide granular restrictions between the clusters themselves, at least at the virtual level.
I'm not going to say it'll be the same as physical hardware network separation, but it's much improved over just throwing everything on the same L2 broadcast domain. And then with the firewall piece, being able to integrate at that level and say that a workload running on this subset of worker machines cannot access these various ports outside of the prescribed networking ports for reaching the internal API server: that sort of thing would provide more isolated restriction on a per-workload basis as well.
So I see those things happening in the future, but again, you can do things both faster and better by limiting the scope you take on at first, and as we define these things and hit the limitations in various areas, that's the beauty of the community.
We get together and say: how do we solve this, and what's the right approach? I won't say that Cluster API is the perfect abstraction to rule them all for declarative management of Kubernetes clusters, but it seems to do pretty well, and the more people that adopt it, the more people we have bringing their own expertise and feedback into the group, and the better we improve the entire ecosystem for everybody.
I think the goal we have is not necessarily to be the best abstraction for building a Kubernetes cluster. If I wanted to do that, I would write a super, overly opinionated installer that said "give me a five-node cluster on AWS," where that's the only input you give it, and out comes a cluster that'll work great for about five people.
Once you get into the more diverse use cases, especially with people that have to run workloads in highly regulated environments, or that are running workloads that are going to be attacked by nation-state-level threat actors, they have a lot of different requirements, and we want to be able to support all the varied use cases. Similar to kubeadm before it, the thing with Cluster API is that we wanted to build the next building block on top of kubeadm, to help enable folks to build out and manage these Kubernetes clusters.
In the early days, every different Kubernetes vendor, every different type of open source distribution, even vanilla Kubernetes: there were different installers and different ways to manage these things. The hope is to unify some of that effort, so that not everybody is having to go out when the next Kubernetes release comes and say, "oh my god, this doesn't work with our tooling anymore, how do we reverse engineer all that?" Provide a common substrate.
That way people can worry more about building the actual features that the customer actually cares about. At this point I think installing and upgrading a Kubernetes cluster is table stakes for anybody in the market. All the customers want more focus on: well, what are you enabling for my workloads? How do I do CI/CD on top of this? How do I take care of some of these more complicated challenges?
How do I run AI/ML workloads on my cluster? How do I deal with the security aspects? Let's let everybody focus on those higher-level concerns and share some of the burden for the common CRUD that only us infrastructure geeks actually really care about.
A
And that's what I think is most helpful, when you talked about not having the best abstraction or things like that. I feel like when people get into the headspace of focusing solely on the abstraction, you end up in that xkcd comic, one of my favorites, where two people are talking and they say, "There are 14 competing standards. We should make one to unite them all," and in the next panel there are 15 competing standards.
In my time within the industry, I've really liked watching the jumps back and forth between the mainframe setup and the personal computing setup, and honestly, it really excites me to be around for the question of what a data center at home looks like. I feel like Cluster API and some other things on that front allow for that. Granted, it might be some Raspberry Pi clusters or some of the Intel NUCs, nothing too wild or crazy.
For anyone that's seen that, I won't spoil it for you, but I recommend checking it out if you haven't. It's really interesting to me, and I really like how you, and the community, have focused on what I would consider the right things to focus on, making sure that you have that feedback from everyone, even while upgrading Kubernetes has historically been difficult to do.
I really appreciate the fact that that's becoming easier and easier with each passing day. When it comes to making sure your workloads keep working through that, of course, you're going to experience some back and forth.
Has this API endpoint gone from beta to GA? Of course you have to deal with some of those things, and Kubernetes makes a lot of those abstractions easier to work with. But it's also exciting to be working on a platform that's going to help enable easier upgrades, so it's not like an iOS or Android or a core computer operating system update, where it's like: well, I've got to rewrite all these things for all these new APIs.
B
There are edge cases where that's not the case, but it basically treats the underlying instances that back the cluster as ephemeral. We don't want to treat these things as long-running snowflakes, because that's where you get into the challenges of upgrades: does this specific kernel version have an issue with the specific OS packages that are installed, in the way they're used with the container runtime that you have installed, with the various security configurations?
Or with the KMS plugins that you might have running on that individual OS, with that version of Kubernetes. I spent time at Red Hat in the early OpenShift v3 days, and even when we controlled the entire stack, from the kernel to the Kubernetes binaries, there would still be these weird edge cases, like a version of iptables that doesn't have support for the file locking that clients can use.
That's the opt-in flag you can provide to iptables to say, "I'm being a good citizen, don't let anyone modify things at the same time I am." Kubernetes adopted that, and if you were running an older version of iptables that didn't have it, all of a sudden Kubernetes is blowing up, and we thought we had validated everything.
We had validated all of the use cases that we cared about, and we controlled all of the packages that were shipped, but we couldn't prevent even those types of issues. So how can we do that in an unopinionated way, across various operating systems, across potentially various Kubernetes distributions, not just kubeadm-based ones with the upstream bits? The challenge is ridiculous, especially if you're throwing different container runtimes in there. So the idea was: what if we didn't worry about that?
I'd even take it a step further. Most people have various workloads running on these clusters, and I would rather see people migrating their workloads between Kubernetes clusters as part of their upgrade strategy, rather than upgrading in place, because especially if those applications are talking to Kubernetes itself, that's where you get into some of those issues.
If you're migrating the applications, then you have control over that availability, and over the ability to roll back, which you wouldn't necessarily have with in-place Kubernetes upgrades. I don't think a lot of people understand this, but rollback is quite a tricky thing in the Kubernetes world. There's only a limited window during a Kubernetes upgrade where you can actually roll back, and that's basically the time while you're upgrading the control plane of that cluster.
If you've fully upgraded the control plane of the cluster, rolling back really isn't an option anymore, not without a backup and restore of etcd, and then that means you've lost state from that last backup to the current point in time. That's a tricky thing to try to recover from, to know the implications of that recovery and how it affects the workloads you have running on it.
So I'd much rather see people migrating those applications. Then you can do full validation of the new Kubernetes cluster: make sure it passes CNCF conformance, run additional compliance checks, make sure it works for your workloads, and then know that the migration is going to happen safely, without having to worry about those edge cases around rollback.
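As a hedged illustration of that migrate-instead-of-upgrade flow: the cluster names, the target version, and the workload handling below are all mine, and real migrations usually involve a backup tool like Velero rather than raw manifest copies.

```bash
# Stand up a replacement cluster at the target version, move workloads, retire the old one.
clusterctl config cluster demo-v2 --infrastructure tinkerbell \
  --kubernetes-version v1.19.8 --control-plane-machine-count 1 \
  --worker-machine-count 2 | kubectl apply -f -
# Re-deploy the applications from source control (or a backup tool) and validate:
kubectl --context demo-v2-admin apply -f manifests/   # context name hypothetical
# Once validated, shift traffic to the new cluster's VIP, then delete the old Cluster object.
kubectl delete cluster demo
```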
A
It's so true. I think I've been bitten by that a few times myself, just trying to upgrade in place, and it's so much better when you have a guaranteed state, like when you set up Kubernetes for the first time; I like having that certainty when getting those workloads moved over. Well, awesome. We are unfortunately out of time, Jason; I feel like I could talk to you all day.
This has really been fantastic. Thank you so much for your demo today and for everything else. Thanks, everyone, for joining in to the latest episode of Cloud Native Live; it was wonderful to have Jason talking about Cluster API and Tinkerbell. Sometime we'll have to ask him: if we stop believing in fairies, will the Tinkerbell project disappear? Hopefully not. We really liked the interaction and questions from everyone; thank you all for showing up. We bring you the latest cloud native code every Wednesday at 11 a.m. Eastern.
Next week we will have Danielle Cook presenting "Optimizing and Securing Kubernetes Workloads with Polaris and Goldilocks." I really like the progression on all these names and these fables; that should be fun. Thank you so much for joining us today. We will see you next week. Adios, everybody, have a good one.