From YouTube: This Week in Cloud Native: Cluster API & Tinkerbell

Description

Up until now, managing Kubernetes infrastructure outside of cloud providers has been difficult, and while there have been previous attempts to ease management of Kubernetes clusters within the data center, we feel those attempts have mostly focused on trying to shoehorn cluster management into legacy practices. With Cluster API and Tinkerbell, we are attempting to bring real cloud-native management of infrastructure into the data center.
A: Ladies and gentlemen, and everybody, welcome to the first-ever This Week in Cloud Native, put on by the CNCF, the Cloud Native Computing Foundation. My name is Mario Lauria, and I am so excited to be here. We have Jason DeTiberus and someone else joining us pretty soon; don't worry, he will be here.

A: I wanted to introduce myself and talk a little bit about our goals here. You'll be seeing a lot more of me and a counterpart of mine flip-flopping every week, so we're really excited to bring this show to all of you. We are test-driving streaming on Twitch, YouTube, and LinkedIn, getting this content out and getting everybody up to date on what's going on in cloud native. Thank you so much for joining us today. I am Mario Lauria.

A: I am a CNCF ambassador, a senior SRE at a company called Karna, and a cloud native consultant. I've been involved with the community in many, many ways; I remember seeing Jason and his amazingly colorful facial hair at many previous KubeCons and contributor summits, things like that. I love being involved with this community, and I thank all of you for coming, tuning in, and being involved as well. At Karna, the company I actually just joined a month ago, I'm working on investor relations and cap table management.

A: I'll highlight a couple of roles that we're trying to hire for on my team. You get to work with me, and I'm a really awesome dude, I can guarantee you that, and you'll hear more; trust me, you'll be sold by the end of our session. We're hiring for a senior frontend engineer and a senior software engineer with a focus on backend. You can see that at mario payloria, that is me on Twitter; please follow me and DM me about any of these roles.

A: I'm always looking to meet new people as well. This show is meant to give everyone an idea of the past week in cloud native and the environment we find ourselves in, with modern distributed architectures, Kubernetes, service mesh, all the tools, and the hype train you're very well aware of. Our goal is to have some really amazing guests, provide some amazing insights on what they're working on, and really gain introspection on the industry as it relates to cloud native technologies.

A: And of course cloud architecture and other pieces of the puzzle. So without further ado, I am going to introduce Jason DeTiberus. I think I know a little bit about what they're talking about today, but I'm going to act like the "explain like I'm five" guy: I'm not going to know very much about what's going on, I'm going to ask a lot of dumb questions, and that's okay, because I want everybody here to get something out of this session.
B: I am, I believe, a senior staff software engineer with Equinix Metal, the cloud provider formerly known as Packet. I'm pretty sure most folks in the cloud native space, or at least those who have been to KubeCon and such, are familiar with at least our former branding.

B: I've been here, I think, about five months now, and one of the things that drew me to Packet, or Equinix Metal, is the fact that we are not a full-service cloud provider like you think of with AWS or GCE. We're very much down to the bare metal, giving folks the ability to self-provision actual hardware and use it as they need to.

B: For me, that's very interesting, because I've been wanting to take some of the stuff I've been doing in the cloud native space with provisioning Kubernetes clusters, especially with things like Cluster API, and bring that more into a data center type environment, where people can actually run it in their own environments on their own hardware. So where better to do that than a provider that actually provides bare metal?

B: So that's what brought me here. Related to that is one of the other projects we're going to talk about today, Tinkerbell, which is basically the provisioning software that helps run a bare metal cloud and can help you automate your own infrastructure. And on that note, it's probably a good idea to introduce Manny, who has a lot more background on actually provisioning the infrastructure itself.
C: Yeah, I am Manny. I have been with Packet, originally, for a while; I started there maybe four or five years ago now, and made it through a lot of the early stuff, a lot of the early bugs and whatnot, riding along with Equinix Metal and all the fine work going on to get us into the CNCF and expand our horizons, which is actually pretty nice.

C: I have been turning servers on and off even from before, personally, in a lab for a while. It used to be embedded systems, then it came to servers, and then I started working for Packet.
A: Nice, welcome Manny. Thank you so much for joining us. Okay, so packet.net, for those that don't know, did some amazing things when it came to getting dedicated bare metal servers. I remember a number in the realm of nine minutes to deploy a server, to get a dedicated bare metal server.
C: Yeah, so I think our best time is less than a minute. It depends on the operating system you select; Ubuntu, Debian, and CentOS, I think, are the ones that give you the best chance to get there, and that kind of depends on the state of the cluster system, on the nodes. If we have machines available and they're in a pre-warmed kind of state, we can turn around within 60 seconds, and if not, then it'll take a few minutes; it takes a few minutes for the machines to reboot. So three or four minutes, I think, yeah.
B: Yeah, so let me see, let me get the screen switched over.

B: Thank you. So, like I mentioned, Tinkerbell is a relatively new CNCF project; we were just admitted into the sandbox, and I expect Manny can go into some of the background behind Tinkerbell better than I can, because, like I said, I'm only months into contributing to the project at this point.
C: Yeah, so Tinkerbell kind of started off coming up from what we call pre-install, and that's how you get to 60 seconds: the pre-install is done for you, and then when a machine gets selected, we do the final bits of configuration, network configuration, hostname type of things, but most of it is already laid down to disk, and so it was just kind of waiting on the machine.
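The pre-install idea Manny describes can be sketched as a simple pool lookup: if a machine is already pre-warmed, only the final configuration step remains, which is what makes the roughly-60-second path possible. This is an illustrative sketch, not Tinkerbell code; the step names and machine names are made up.

```python
# Illustrative sketch of a pre-warmed provisioning pool: machines with the
# OS already laid down to disk skip straight to the final config step.

FULL_INSTALL_STEPS = ["wipe", "partition", "write_os_image", "final_config"]
FINAL_ONLY = ["final_config"]

def provision(machine, prewarmed_pool):
    """Return the steps that must still run for this machine."""
    if machine in prewarmed_pool:
        # OS image is already on disk; only network/hostname config remains.
        return FINAL_ONLY
    return FULL_INSTALL_STEPS

pool = {"node-a", "node-b"}
assert provision("node-a", pool) == ["final_config"]    # the ~60-second path
assert provision("node-z", pool) == FULL_INSTALL_STEPS  # the few-minute path
```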
C: We could get rid of a lot of the complexity and a lot of the worries from configuring the network and things like that, and we also liked the ability to move forward and react to things better, to take things and manage them without having to take the machine out of the pool for a long period of time, to do firmware updates and things like that.

C: And so we wanted to extend how we do things in the pre-install environment, to have different kinds of states so that we can change executions and things like that. From that idea, it was originally pretty much just thrown together.
C: It was a Python script that was just slapped together, for a long time, and we wanted to make it much nicer, more robust and whatnot, and so we started working on what ended up being Tinkerbell. From our configuration it looks a little bit like docker-compose, but it's all running containers, so that we can minimize runtime dependencies.

C: So we can avoid fetching things from the network if possible, avoiding breaking provisions in our case, or having to deal with anything like Dell: if you've got a hiccup with Dell and their servers, or their certificate is expired, you don't want to have that affect you. And so it kind of grew out from there. We kept some of the other services: Boots is our DHCP and TFTP kind of server.

C: PBnJ is what controls our BMCs, so that's what powers machines on, toggles PXE boot, and things like that. OSIE is what boots into the machine and runs our workflows. We kept those things common and tried to get them to work in a dual mode, where they still work on our old stack.

C: These are production systems, production services, production code, and we're trying to phase in Tink alongside some of the other services, and phase out the pre-install environment.
A: This is super cool. I have a question, because back in the day, when Kubernetes was picking up steam, CoreOS was at the top of the game. I actually have a CoreOS hat somewhere over here from 2017 KubeCon.

A: It's super comfy and I look really hipster, it's great. But there was a thing called Matchbox, and I'm not sure if you guys are aware of it, but really it was all about the iPXE booting and getting the OS installed and all the other elements as well. I'm trying to figure out: is Tinkerbell kind of similar to Matchbox in this way, like the next iteration, a larger framework? Is Matchbox really only the Boots microservice in this diagram?
C: Yeah, I don't remember too much.

C: I remember looking into Matchbox a few years ago, and I don't recall exactly all of it, but I think it was kind of like the Boots setup, and it was pretty tightly coupled to the environment that it would boot into, so that would get you a Boots-and-OSIE kind of flavor. I don't recall it being very generic after that, as far as how we envision workflows and where we want the thing to go. Right now we're just trying to stabilize the API and the code base and all that.

C: But it's meant to be about as generic as possible. All the smarts happen in what we call our workflows; they're just a very simple top-down execution of containers, one at a time, and you do whatever you want. They run on your machine, you have full access over it, and you have the entire environment.
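That "simple top-down execution of containers, one at a time" can be sketched in a few lines: actions run in order, and a failure stops the remaining steps. This is a conceptual sketch, not the real tink-worker; the action names and the in-memory "log" are made up for illustration.

```python
# Illustrative sketch of a workflow engine: run actions top-down, one at a
# time, record each status, and halt on the first failure.

def run_workflow(actions):
    """actions: list of (name, fn). Returns list of (name, status)."""
    results = []
    for name, fn in actions:
        try:
            fn()
            results.append((name, "success"))
        except Exception:
            results.append((name, "failed"))
            break  # later actions never run, like a halted workflow
    return results

log = []

def stream_image():
    log.append("image written")

def kexec():
    raise RuntimeError("boot error")  # simulate a failing action

wf = [("stream-image", stream_image), ("kexec", kexec),
      ("cleanup", lambda: log.append("never runs"))]

assert run_workflow(wf) == [("stream-image", "success"), ("kexec", "failed")]
assert log == ["image written"]  # cleanup was never reached
```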
C: So you can just boot into it and then do whatever you want, and there are a lot of things we'd like to see happen from there. The way we're running it now, in tests, is pretty much a straight-up translation of what we're doing in OSIE now: a huge script, this monster ugly script, that we broke up into smaller pieces, for testing purposes more than anything.

C: But then we can break it down into smaller images, have images with specific vendor tools and things like that, and use the best image for the job.
A: Absolutely, that's awesome. Okay, so one more question really quick, and this is from our audience. Thank you, everybody who's tuning in right now; please enter your questions in the respective environment you're streaming in, and we will get to them when we can. I also want to give these guys some time to get deeper into what they're trying to tell us about.

A: One question I will ask is: when will Tinkerbell be available at the enterprise level? And maybe, taking a step back and zooming out: what does that actually mean, and what does the actual adoption of something like Tinkerbell at an enterprise level look like for someone trying to solve these problems?
B: Go ahead. I was just going to say, one of the fun and interesting things about being involved in the project is that we've actually already found out there are people running this in actual real production environments, which is both awesome and a little scary, given that we are a relatively early project and there are a lot of rough edges today.

B: I wouldn't expect most enterprise users to be able to use it today, but take a look back in a year or two and see where we're at at that point. We're still trying to define how a lot of these things function independent of Equinix Metal's underlying infrastructure: how do we make this more general purpose for users, and how do we integrate it with additional infrastructure tooling to get higher-order functionality?
C: Yeah, I would echo the same thing. I've been hearing about a lot of people using this in production scenarios, and we're not even doing that ourselves yet. So it's great, and I most certainly want them to be part of the community and come in and give us the pain points, but we're still very, very much early in that setup.
B: From my experience, a lot of those tend to be very monolithic; they tend to be very overly opinionated, and that tends to cause issues, either if you're trying to adopt it into an existing environment or if you're trying to do anything outside of the very strict workflows and methods that they already define. One of the things that attracted me to Tinkerbell is the fact that it is a lot less opinionated about a lot of it.

B: You don't necessarily have to use all of the services. If you go to the sandbox repository today, which is linked from the website, you'll see that PBnJ isn't even in the discussion yet. How do we bring those services in, make them available, make them composable?

B: Those are all goals of the project. You should, eventually, be able to consume as much or as little as you want. That's where the big difference is with the existing tooling, I think.
A: That's awesome. Thank you for breaking that down, gentlemen, that is amazing. Okay, I'm going to let you guys talk a little bit more, and then we'll get to a few more questions. So please, everyone who's watching, feel free to drop your questions about Tinkerbell, and I think we're going to delve more into maybe Cluster API as well, hopefully a little later. So with that, I will turn myself off and let you take it over.
B: Yeah, so I think Manny already gave a pretty good background on what we're doing with Tinkerbell today. The other part of the equation that we're talking about a little bit today is Cluster API.

B: The whole goal behind Cluster API, when we started the project, was to have a way to declaratively manage Kubernetes clusters the way that you can manage applications on top of Kubernetes. In general, think of it as using Kubernetes to manage Kubernetes, hence the stacked-turtle logo: it's pretty much Kubernetes all the way down. The benefit there is that we already have this declarative management, and we already have this controller reconciliation concept.

B: Why not use that to actually manage the infrastructure for the clusters, rather than having to build external tooling to do it? You do get into a chicken-and-egg scenario there, but I think we've started to define a good way to bootstrap the clusters and then go ahead and manage those clusters using Cluster API from there.
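The controller reconciliation concept Jason mentions boils down to: declare the desired state, observe the actual state, and compute the difference to act on, instead of scripting imperative steps. A minimal sketch, with made-up machine names rather than real Cluster API objects:

```python
# Illustrative sketch of declarative reconciliation: the controller's job is
# the diff between desired and observed state, repeated until they converge.

def reconcile(desired_nodes, actual_nodes):
    """Return (machines to create, machines to delete)."""
    to_create = sorted(desired_nodes - actual_nodes)
    to_delete = sorted(actual_nodes - desired_nodes)
    return to_create, to_delete

desired = {"cp-0", "worker-0", "worker-1"}
actual = {"cp-0", "worker-0", "worker-old"}

create, delete = reconcile(desired, actual)
assert create == ["worker-1"]
assert delete == ["worker-old"]
# Once actual matches desired, reconcile has nothing to do:
assert reconcile(desired, desired) == ([], [])
```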
B: And with that, I have an environment here that's almost ready to go for testing actual Cluster API usage with Tinkerbell.

B: However, it is not quite working yet, so we'll go as far as we can until we hit the breakage, and we can see that. But before we do that, we should probably touch on some of the questions that came in. I know the first one that I saw was asking the difference between Tinkerbell and OpenStack, and I would say the big difference is:

B: It's just a different type of approach to the problem, taking the expertise that we've built running an actual public cloud service with Equinix Metal and turning that into something specific for managing workflows around hardware.
A: Interesting, yeah. We have someone else, Isaac, actually asking about this; he believes all five microservices are the alternative to OpenStack for bare metal deployments, and asks, can anyone confirm? Can you guys break that out a little bit more? I think people are thinking this is going to be a PaaS sort of solution; maybe describe a little bit more what layer Tinkerbell really operates at.
C: Yeah, I think it sort of can be; it's certainly how we're using it today, or at least most of the services are. As I mentioned, they are our production services, and they are being used in a PaaS kind of setup to enable the Equinix Metal cloud, and Packet before it. And then there are certainly two worlds in the Tinkerbell project.

C: There are people just trying to mess with this or get their lab environment set up, and then we also want to cater to people wanting to use this in production, for internal PaaS kinds of systems. Those are both valid use cases, and they're definitely both sides of the equation.
B: Yeah, so what I have here is actually a shell environment into a Tinkerbell setup that is running directly based off of the sandbox environment; let me switch.

B: The larger of the two boxes here is the machine running Tinkerbell, and then I have five smaller PCs that are configured to PXE boot. I think it will be easier to just go through and show what's going on, rather than try to describe it. So over here, I can probably make my font a little bit bigger.

B: All right, so the first thing: because this is based off of the sandbox, this environment is running in docker-compose. So if I just do a docker ps here, we can see some of the various components that are running right now. Down here at the bottom, we have this registry; it's basically just a simple Docker registry that's storing some of the images that we're going to use in some of the Tinkerbell workflows later. We also have a web server.

B: We have a Postgres database, which is the backing data store for Tinkerbell. We have our Boots service, which is the PXE boot service; Hegel, the metadata service; and then the tink-server service itself. There is also a tink client instance running in there as well, which is going to allow me to more easily run some of these commands, instead of having to constantly rework some of the variables to connect to things.

B: So if I come here, the first thing that I'll list out is the actual hardware that I've already pre-defined.

B: You can see that each one has a unique ID, the MAC address, and the IP address defined. I basically provided all of the data for this in advance, to match the hardware that I actually have here. Then the other concepts that you need to be aware of in Tinkerbell are templates and workflows.

B: The templates basically provide the raw instructions for what you want to do: you define a list of actions, and when you build out a workflow, it'll fill out the template and then apply those actions to whatever hardware you want. The workflow is basically a way of joining a template with hardware, to create an actual unit of execution for Tinkerbell, for lack of a better word. I don't know if you want to provide a better explanation of that, Manny?
C: No, not really. I think you demonstrate one of the things we have to work on, which is our nomenclature for a lot of things. We're certainly early enough that we can nail these things down and come up with some better names and better concepts to get the ideas across, and streamline things to make it easier to consume. But I think your definition is good.
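The template-plus-hardware idea can be sketched concretely: the template is a reusable recipe with placeholders, and creating a workflow fills those placeholders in from one specific hardware record. This is an illustrative sketch; the field names, the action string, and the addresses are made up, not Tinkerbell's actual schema.

```python
# Illustrative sketch of "a workflow joins a template with hardware": render
# a reusable recipe against a concrete hardware record.
from string import Template

# A reusable recipe; ${ip} and ${disk} are placeholders filled per machine.
template = Template("image2disk --dest ${disk} --worker ${ip}")

hardware = {"id": "abc-123", "ip": "192.168.1.5", "disk": "/dev/sda"}

def create_workflow(tpl, hw):
    """Bind a template to one hardware record, yielding a unit of execution."""
    return {"hardware_id": hw["id"],
            "action": tpl.substitute(ip=hw["ip"], disk=hw["disk"])}

wf = create_workflow(template, hardware)
assert wf["hardware_id"] == "abc-123"
assert wf["action"] == "image2disk --dest /dev/sda --worker 192.168.1.5"
```

The same template can be linked to any number of hardware records, which is why listing the workflow shows the worker's real IP address where the template's placeholder used to be.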
B: So coming in here into the docs, if my browser will work right, we have this hello-world workflow example, and I've already pulled this image and pushed it to the internal Tinkerbell registry. So I'm just going to copy this template right here.

B: So what I can do here is tink template, and I think it's create, yep.

B: Hello world, and actually I don't want to provide the path here, because I'm running it in this container. So let's see.

B: If I just get that template, I see exactly what I have now. But the next thing I want to do is actually create a workflow related to that template, if I can find the right browser window, because I can't work a computer today. Just like I said, when you create the workflow, it's just a matter of linking that template to the hardware, so I'm just going to copy these as an example, because I can never remember these flags when I'm doing this.

B: We're in the container named docker, yep, that's why; thank you.

B: Here we can see that's the only workflow that's in there right now, which is good, because that means I actually cleaned up the stuff I was doing earlier. And if I get that workflow, now we see that the template actually is filled out, and we see the IP address for the worker here, rather than the template placeholder that was in there.
A: Gotcha. So I did want to highlight a question, which I kind of have as well, and maybe before you get too far ahead. Felipe Cruz says: what does it mean to link the template to the hardware? Can you expand on that step?
C: Yeah, sure. So you can reuse the template; it's just going to be a recipe for what actions you want to take. We need to fill out things to know when we're going to boot, and when we get a DHCP request or an HTTP request, to link it back and figure out what step we should be on.

C: We can specify the boot environment to go into; all of that is defined in the hardware definition. To figure out what we're going to serve up, that's where you go from the template to the workflow, by linking those two. And something to point out: when Jason showed the IP address, the way Boots works, and the way we have everything set up to work, is we pre-allocate the IP addresses. We're not doing dynamic DHCP configuration; all of this is pre-allocated, to avoid the pitfalls of duplicate IP addresses and things like that, things that we've run into. So you get your hardware information.
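The pre-allocated addressing Manny describes amounts to a static reservation table: every known MAC address maps to a fixed IP in the hardware data, so answering a DHCP request is a lookup rather than a dynamic lease. A minimal sketch, with made-up MACs and addresses:

```python
# Illustrative sketch of pre-allocated DHCP: known MACs get their reserved
# IP; unknown machines get no answer, and duplicates cannot be handed out.

RESERVATIONS = {
    "b4:96:91:00:00:01": "192.168.1.5",
    "b4:96:91:00:00:02": "192.168.1.6",
}

def dhcp_offer(mac):
    """Answer only for hardware we know about."""
    return RESERVATIONS.get(mac)

assert dhcp_offer("b4:96:91:00:00:01") == "192.168.1.5"
assert dhcp_offer("de:ad:be:ef:00:00") is None  # strangers are ignored

# The table itself guarantees no two known MACs share an IP:
assert len(set(RESERVATIONS.values())) == len(RESERVATIONS)
```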
A: Right, exactly, yeah; that's your standard kind of PXE process, but getting that configuration down ahead of time is super helpful. That's really cool. Thank you, Felipe, for the question, and thank you for all these wonderful questions. There's one other question, and maybe it's a little bit more zoomed out, because I don't really know a lot here. It's from The Infi: can Tinkerbell be considered as a software stack for, say, AWS Outposts or Azure Stack? And I wasn't sure what AWS Outposts is.

A: It looks like it enables an on-premise, hybrid sort of experience with the AWS cloud. So could something like Outposts, or maybe even Anthos or other tools coming out in the cloud world that let you tap into on-premises infrastructure, leverage this to provision and deploy that infrastructure for you?
C: Yep, it should certainly be possible. As part of Equinix Metal, we support VMware, we support Anthos, and it's using all the same stack. The difference is we're still not using Tinkerbell there; everything else is still being used. We're working on plugging Tinkerbell into our production stack, and it certainly will get there for us, and others can use it the same way.
B: Yeah. So while y'all were talking, what I did, because I don't actually have nice hardware that has remote management:

B: I went ahead and switched over the monitor and the keyboard, and I powered on the host that I used for linking to that template in the workflow. So at this point we are actually PXE booting.

B: The system Manny mentioned before, OSIE: this is using an actual experimental version of OSIE that we're working on, based on LinuxKit, that's a little bit smaller and a little bit more optimized. So we've already booted, we've already run, and looking at my terminal here, we see that the actual hello-world example went ahead and ran and finished.

B: Yeah, the only thing there is just the console messages; there were some console messages from LinuxKit, from logs being generated and put out to standard out on the console, so nothing really of any unique interest there. I put it there because I've been messing around with this hardware and it's set to boot from the hard drive first, so if this actually had an OS on it, I was going to have to intercede and get it to prioritize the PXE boot.
A: Right, yeah, we all remember the days of installing Windows and Linux and trying to do a dual boot and all that; those days are still with us. Booting is not the funnest experience, so this is really cool. I'm loving the Docker output that's showing us this started execution and finished successfully. What other details can we get from these runs, in terms of logs and other bits of information, to maybe flow into systems that we have already, or to just get more introspection?
C: Yeah, so at the moment, logs stay on what we call the worker machine; one of Jason's NUCs is what we call the worker machine. For the moment, we keep some of these operational bits outside of configuration like that. So the logs stay there; you'd have to jump onto the machine and run docker logs, docker ps, things like that.

C: We have an issue in GitHub where we started to actually discuss this, earlier today and even from yesterday: how to get both credentials for internal registries and centralized logging configuration from Tinkerbell, from our tink server, down to the workers, so that logs can flow through and make it to where you need them to go.

C: It's one of the things we're working on that we need; it doesn't really work out for us to have to jump into a machine somewhere, and centralized logging is where to go.
A: Sure. And kind of on that point, looking at more of the UX, the user experience, if you will, for what we want Tinkerbell to transform into: we have a question, and it's a little bit long-winded, but I'll try to give it a go. It's at least from Burke; I can't pronounce the last name, I'm going to butcher it, but thank you very much for your question. He asks, about using Tinkerbell:

A: Do we need to hire a sysadmin, a person whose profession is to be closer to the infrastructure, not as much the application development? And another way to ask that, he says, is: how easy is it to use Tinkerbell? Are you considering building some sort of user interface? So obviously, we see here, it's very terminal-focused, using Docker, which is a primitive that most of these people probably feel comfortable with at this point, doing infrastructure like this.
C: Yeah, so I think there are a lot of options here. There are so many communities working on Terraform setups to manage Tinkerbell hardware and templates and workflows, so if that's your jam, that's an option. We do have a web interface that was done as a hackathon early on, and that's getting some work, but it's pretty early, and if you've got opinions, I certainly welcome them.
B: Yeah, so I think the interesting thing is that I would not look at Tinkerbell as a product that you buy, like shrink-wrapped software; it's very much a set of building blocks.

B: I would expect that more of what I would consider the user experience will come from external tooling, or folks that end up wrapping Tinkerbell into something larger. Or, if folks want to just use Tinkerbell, I would expect them to interact more with Terraform directly, or potentially something like Crossplane or Cluster API, to interact with the hardware.

B: This is one of the interesting things: it's always a balance between what is a polished product versus what is a project that you can use and incorporate into something larger, and Tinkerbell very much falls into that project space rather than the product space.
A: Gotcha, that's really good to know. I think that really breaks it down in terms of how to mentally think about Tinkerbell, what it's here to solve now, and where it's going in the future. You mentioned Crossplane; Crossplane is super cool. For those that don't know, visit crossplane.io; it's a really cool project from Upbound, and it goes on the formula of: can we use Kubernetes resources to deploy other resources, other pieces of our cloud infrastructure?

A: I don't know if everybody remembers, a few years ago, when AWS announced that they were releasing, I believe, an operator that would be able to create S3 buckets, or I figure it was S3 or RDS instances, from inside Kubernetes, without doing anything on your own in terms of tapping the API or being in the AWS console, and Crossplane has taken that concept to the next level.

A: Do you guys see Crossplane and other tooling in that realm as something that could be used to deploy bare metal servers, with Tinkerbell configurations that have already been decisively set up and provisioned for the type of infrastructure that you're trying to plan for, an event, say, and scale? So, say I want more resources, more dedicated servers; I could use Crossplane and Kubernetes objects to define those pieces. Is that something that can be done now, or could be done in the future?
B
It's
definitely
something
in
the
future.
I
don't
think
we've
actually
started
on
the
crossplane
provider,
yet
I've
started
on
a
basic
of
a
subset
of
something
that
could
morph
into
either
a
crossplane
provider
or
a
basis
for
a
crossplane
provider
with
part
of
the
cluster
api
work.
B
But
that
gets
into
a
couple
of
different
aspects,
and
this
is
all
very
much
experimental
kind
of
bleeding
edge
work,
that's
happening
so
so
it's
not
anything.
That's
that's
fully
fleshed
out
and
and
ready
to
consume
yet,
but
the
idea
is
is
for
a
couple
of
things.
One
is
we
also
want
to
be
able
to
not
just
deploy
tinkerbell
using
binaries
or
with
docker
compose.
B
We want to be able to deploy it on a Kubernetes cluster for folks that want to use it that way as well, and we also want to have a good experience consuming Tinkerbell resources and interacting with Tinkerbell resources from Kubernetes. So part of the experiment with the Cluster API has been: how can we help define what both of these things are? What is a good method for deploying this to Kubernetes?
B
What are the challenges that we have to solve there? Whether it's simple things like actually exposing services properly, so that you can actually PXE boot things and not have it limited to just one specific node, and things like that.
B
How do we properly do those things? But the other aspect is: how do we model these Tinkerbell resources that we have today as Kubernetes objects, and how do we make them so that we can more easily interact with them? And that's where it plays into what will eventually become, like, the Crossplane provider and things like that.
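To make that concrete, here is a rough sketch of what one Tinkerbell hardware record could look like when modeled as a Kubernetes object. The API group, version, and field names below are assumptions for illustration only, not the project's actual schema:

```python
# Hypothetical sketch: a Tinkerbell "Hardware" record expressed as a
# Kubernetes custom resource. The apiVersion and field names are assumed
# for illustration; the real CRD schema is still being worked out.

def hardware_manifest(name, hardware_id, mac, ip):
    """Build a dict shaped like a Kubernetes custom resource that
    references hardware already registered in Tinkerbell."""
    return {
        "apiVersion": "tinkerbell.org/v1alpha1",  # assumed group/version
        "kind": "Hardware",
        "metadata": {"name": name},
        "spec": {
            "id": hardware_id,  # the ID Tinkerbell already knows
            "interfaces": [
                {"dhcp": {"mac": mac, "ip": {"address": ip}}},
            ],
        },
    }

manifest = hardware_manifest(
    "worker-0", "abc-123", "aa:bb:cc:dd:ee:ff", "192.168.1.10"
)
print(manifest["metadata"]["name"])
```

A controller could then watch objects of this shape and reconcile them against the Tinkerbell API, which is roughly the direction a Crossplane-style provider would take.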
A
That is super cool, that's really, really great to hear. So, there's a question that was asked a couple of times by, let's see, Antonio, or Antoine, and I'm gonna push through that last name, I'm gonna try: Antoine B. And it's a really interesting question, because I don't know if it exactly has to do with Tinkerbell.
A
How do you manage persistent volumes? And I want to just ask it in that context, so I want to see where your brains kind of jump to, and whether that's something that's completely out of scope, or whether it actually is in scope, because how the storage on a particular node is configured happens at this level, right?
B
You know, one of the things that we're not going to do is get into actually managing storage infrastructure with Tinkerbell. That's just not a place that we're going to take the project. So if you really want to manage persistent volumes for use with Kubernetes on top of Tinkerbell, you'd have to run some type of storage software, whether that's having an appliance that can be driven through an API to request storage.
C
Yeah, I don't know if there's much to go into, really. Well, if you've got, you know, machines with lots of hard disks and you want to bring them into a cluster, a storage cluster of some sort, Tinkerbell is certainly an option for booting them and getting your OS on them. But as far as managing the volumes themselves, that's not, you know, in our purview.
A
Got it, that's really good to know, thank you. And there's another really, really good question here from Ashmita. Jeez, these names, you guys are killing me; thank you so much, though. The diversity in the audience is amazing and we really appreciate everybody tuning in. Again, you are what makes this happen and you are our audience, so thank you so much. Ashmita asks: how can one begin contributing to the Tinkerbell project development on GitHub? I see the repo wiki has just RFCs.
B
Yeah, so I think the biggest thing is we have the links here at the top; we could probably call them out a little bit more clearly. It depends on where you want to get involved. We have a Slack channel in our Equinix community Slack, where we answer questions.
B
We also have GitHub, which is probably the best place. We have a, I think it's now bi-weekly, community meeting, and there's a mailing list as well that you can join to get that, and I will dig up that link as soon as I can. But within GitHub we have various repos related to some of the sub-projects. The Tinkerbell sub-project has quite a few issues.
B
Some of them are related to discussion; some of them are actual, like, you know, documentation issues, things like that. And reaching out in Slack, there are plenty of folks active there that can help get you routed towards areas that you can help out with, and answer questions if you're trying to get up and running, and all of that.
A
Yeah, awesome, that's super great. Another question from Hamad, and forgive me if you were going to get into this at some point: what is the future of Tinkerbell? And this is a really wide question, so maybe, if you guys could, give us kind of the short term and then the super long term roadmap, some of the key items there.
C
So the main features, I think, of Tinkerbell are kind of automated bring-up of machines, right? What you do with them, whether you provision them into a cluster, whether you manage firmware and BIOS config, that's all up to you, right. We just bring the machine up, and you can do whatever you want. I think key to that is, you know...
C
We are strictly just running containers, right. I think that's one of the differentiators between us and other systems out there, in that you can manage the whole process and all the files statically, I think, which is kind of nice. It will help you bring down any issues with reproducibility, or network troubleshooting, and things like that, right. And then, moving forward...
C
I think we'll just break out those workflows more, make them more reproducible, and enable all those kinds of things we can do with those workflows, right. There's a lot of things that we have in Boots that are kind of just from how we used to do installs, the environments we run, and we're getting those out so that we can run...
C
...different kinds of workflows, and do different things with them that are not just installing operating systems. Being able to do them declaratively and reproducibly is where we're going forward, and then integrating things in. That's at least my main objective in the immediate and foreseeable future, and then there's a lot of work going on for integrating Kubernetes and being able to be a big piece of that.
B
Yeah, and I think one of the interesting things about trying to do, like, the Cluster API integration is that it's really a non-trivial workflow for Tinkerbell. You know, a lot of the examples that we have out there for workflows are relatively simplistic: run the hello world example, or install an Ubuntu image. But that doesn't handle some of the after-effects, like rebooting into that image, and things like that. So we're definitely running into fun sorts of challenges.
B
I will say, you know, with Cluster API we generate a cloud-init config, and with the way that we're working around things today, we're generally generating a relatively large template. And one of the recent issues we hit was: we implemented an eventing system in Tinkerbell that was based on Postgres triggers. Well, there's an actual size limit to records...
B
...if you have those triggers attached, and we exceeded that size. So, you know, we're hitting these edge cases, and we're having to work through both how we work around these challenges in the short term, and how we incorporate those things into longer-term fixes for the project, so that it benefits not just us, but everybody. There are also fun things related to, you know, how do we automate things?
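As an aside, the kind of size guard involved can be sketched like this. The 8000-byte figure below matches PostgreSQL's NOTIFY payload limit; the exact limit Tinkerbell hit, and its actual workaround, may differ, so treat this purely as an illustration:

```python
# Sketch: guard against a backend record-size limit when storing a large
# generated template (e.g. one embedding a whole cloud-init config).
# MAX_RECORD_BYTES is an assumed limit, chosen to match PostgreSQL's
# NOTIFY payload cap; the real Tinkerbell limit may differ.
import gzip

MAX_RECORD_BYTES = 8000  # assumed backend limit

def prepare_template(template_text: str) -> bytes:
    """Return the template as bytes, compressing if it would exceed
    the assumed record-size limit."""
    data = template_text.encode("utf-8")
    if len(data) > MAX_RECORD_BYTES:
        # One common workaround: compress the payload before storing it.
        data = gzip.compress(data)
        if len(data) > MAX_RECORD_BYTES:
            raise ValueError(
                f"template is {len(data)} bytes even compressed; "
                f"limit is {MAX_RECORD_BYTES}"
            )
    return data

small = prepare_template("#cloud-config\nusers: []\n")
print(len(small))
```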
B
Is there a way to trigger a reboot? Do we have to rely on something like PBnJ to force the reboot, or can we do rebooting as an action? And there are challenges there, because you can't just, you know, issue a reboot command in a container and have the host reboot, and things like that. Another aspect is: how do we handle the initial experience for bringing up hardware, when workflows haven't been defined yet for that hardware? What's the experience there?
B
Do we just fail the PXE boot? Do we go ahead and PXE boot into kind of, like, a waiting status? These are, you know, the things that we need to define, and that define how we build this higher-order level of complexity into the project and the surrounding projects around it.
A
For sure, yeah, the challenges are the fun part. And as we talked about a little bit earlier, if anyone listening is really interested in what those challenges are, and in understanding some of the bits and pieces that go into this, the tinkerbell.org community, and viewing the GitHub and some of the projects there, as Jason was showing, can really give you good insight. And you can, I mean, you can help contribute, right? It's open source, and that's how we like it.
A
So, another question here, from someone who joined late; let's see, Froze Shone, he joined late, and he asked: what do we need to know in advance about the physical server that we actually want to bring up with Tinkerbell, and/or bring into a Kubernetes cluster? You know, he highlights BMC credentials, MAC address, etc. And from what we're doing in your example, it seems like the MAC address is probably the key item. You guys get into it a little bit more?
C
Yeah, certainly for booting, MAC address and IP addresses are most likely the minimum you want, or need, to know. And then, as Jason mentioned earlier, you know, he's not using PBnJ; he doesn't have BMCs set up, so it's not strictly necessary, right. If you are running with servers that have them, you know, it's generally gonna be a much nicer experience.
C
If you can reach out to your BMC and have it change PXE booting or boot order for you, then having credentials in there is something you would want. It's actually not supported at the moment, right; it's actually something I've been doodling on and thinking about, just kind of how to deal with secrets, both ones that are tied to hardware...
C
...BMC is one, and then maybe some secrets that you'd want for the workflow itself, to hook back into, you know, the rest of your systems, to either log or to, you know, post back any hooks or things like that. So those are all things that are active and kind of, you know, in initial consideration and development.
A
Gotcha, that's awesome to know. One more question, and if anyone has one they want to sneak in, now is the time, for sure. This is from Nerlin: can workers start using Wake-on-LAN? I actually was gonna ask about this, like: what if nodes are in a state where they are off, but the network allows for Wake-on-LAN and their cards are configured for Wake-on-LAN? Is that something that Tinkerbell would kind of plug and play with?
C
Yeah, that would be fine. It doesn't come across in the hello world example, but we have tasks, right, and a task is defined to run on one specific worker. But a workflow and a template can have multiple tasks, right? And so, if you have, you know, you can run a worker on the same machine as the tink server, or on others, right, and that worker can run a workflow action.
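Loosely sketched, a template carrying multiple tasks pinned to different workers might be structured as below. The field names are assumptions for illustration; the real Tinkerbell template format is YAML with its own schema:

```python
# Sketch of a workflow template with multiple tasks, each targeting a
# different worker. Field names are assumed for illustration only.

def make_template(name, tasks):
    """Build a dict shaped like a workflow template holding several tasks."""
    return {"version": "0.1", "name": name, "tasks": tasks}

template = make_template(
    "multi-worker-example",
    [
        {
            "name": "install-os",
            "worker": "{{.device_1}}",  # first machine
            "actions": [{"name": "stream-image", "image": "image2disk"}],
        },
        {
            "name": "notify-done",
            "worker": "{{.device_2}}",  # a different machine
            "actions": [{"name": "post-status", "image": "webhook-notify"}],
        },
    ],
)
print(len(template["tasks"]))
```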
A
Awesome, really good to know. Jason, I saw you messing with kind there. For those that don't know, kind is Kubernetes in Docker; it's kind of an alternative for running Kubernetes locally, in the same vein as minikube and others. Jason, you have something you want to show us there?
B
Yeah, so what I'm doing is I'm getting ready to actually bring up the Cluster API bits to run against this Tinkerbell cluster, and we can see how far we get until it falls apart.
A
Yeah, why not try, right? Thank you so much to everyone who's tuned in thus far. Like I said, if you want to squeeze in one more question, we'd totally be down to try to tackle it. Thank you to everybody who's shared some things. I think Mark Coleman even came in with a good way to think about Tinkerbell, saying the best way to think about...
A
...Tinkerbell is as a system to get from the hardware to being able to deploy Outposts, EKS-A, Anthos, etc. And for those that don't know, Outposts, as we mentioned earlier, is kind of the AWS hybrid solution for tapping into on-premises infrastructure.
A
EKS-A is actually the recent release of EKS as open source, for people to run that same framework for deploying EKS-like environments on premises, and then tapping that into, probably, Outposts and other clusters they might have running in EKS proper in the cloud. Really cool stuff there. So thanks, thanks for the share, Mark, and for helping everyone else in the community; that's really awesome. I think Bill, one of our CNCF show maintainers, posted the mailing list as well.
A
It's just called tinkerbell-contributors, and it should be pretty easy to tap into that and meet other people in the community that are working on this very cool project. Jason, I'll let you kind of try a little bit more as you play around here.
B
Yeah, so what I did is I used Tilt to bring up the kind of development environment that I've been using. Tilt's basically an iterative development environment for developing against Kubernetes, and that brought up all the controllers, both the core Cluster API controllers and the Tinkerbell controller.
B
Then what I did was I basically defined Kubernetes CRDs to correspond to the Tinkerbell hardware, right now with what we're calling the shim layer. When I was mentioning how we were building the core of what would eventually be the Crossplane provider, that's kind of what the shim is. The shim doesn't allow you to create the hardware within Kubernetes, but it allows you to reference existing hardware.
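The shim behavior described here can be sketched roughly as follows: a Kubernetes object holds only a Tinkerbell hardware ID, and a controller fills in the rest by querying Tinkerbell. The `fake_tink_api` dict stands in for the real API, and all field names are assumptions for illustration:

```python
# Sketch of a "shim"-style reconcile: resolve an object that only
# references hardware by ID into a fully populated status. The lookup
# table below fakes the Tinkerbell API; field names are assumed.

fake_tink_api = {
    "abc-123": {
        "metadata": {"facility": "home-lab"},
        "network": {"interfaces": [{"dhcp": {"mac": "aa:bb:cc:dd:ee:ff"}}]},
    },
}

def reconcile(k8s_object):
    """Fill in the object's status from the (faked) Tinkerbell API,
    keyed by the hardware ID in its spec."""
    hw_id = k8s_object["spec"]["id"]
    details = fake_tink_api.get(hw_id)
    if details is None:
        k8s_object["status"] = {"state": "NotFound"}
    else:
        k8s_object["status"] = {"state": "Resolved", "hardware": details}
    return k8s_object

obj = reconcile({"spec": {"id": "abc-123"}})
print(obj["status"]["state"])
```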
B
So I basically just did the definitions with just the IDs for the hardware, and if I go ahead and do a describe on here, what I'll see is that I actually get all that information that was defined on that hardware in Tinkerbell itself. There are also templates and workflows defined here as well.
B
So if I wanted to, I could create Kubernetes resources related to a template and a workflow, and I could run the exact same hello world example there. Or I can actually do something bigger, like actually try to provision the Kubernetes cluster with Cluster API, and that has a few tweaks to kind of the generic templates that we use upstream, mainly because we're booting from a cloud image for Ubuntu, and no users are defined there, so we have to specify some users in the cloud-init config.
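For readers who haven't hit this: a stock Ubuntu cloud image ships with no usable login, so the cloud-init user-data has to define one. A minimal sketch of such a fragment is below; the username and SSH key are placeholders, not anything from the stream:

```python
# Minimal cloud-init user-data fragment defining a user, needed because
# stock cloud images define none. Username and key are placeholders.
USER_DATA = """\
#cloud-config
users:
  - name: demo-user                  # placeholder username
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - ssh-ed25519 AAAA... user@example  # placeholder key
"""
print(USER_DATA.splitlines()[0])
```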
A
Gotcha. And Tilt is really cool; for those who are interested, it's at tilt.dev. I have not tried it; it's in a browser tab that's been open for, I swear, at least three weeks, taking up memory that I could be using for Slack. Thank you so much to everyone who's joined. I wanted to mention, Rama asked:
A
What's the use case of Tinkerbell? And, you know, I could give it to Jason and Manny to describe a little bit more, but I'd say to Rama, and others who maybe missed the earlier part of the stream: please tune in afterwards. This will be available, of course, on your respective platform.
A
Mine is YouTube, and you can check out kind of the first few minutes of the video; I think Manny and Jason did a great job kind of describing how Tinkerbell could be used. With that, I'm going to give Jason and Manny one last chance to say anything they want about the project, and then we'll close out from there.
C
I think I'll just reiterate once again that we're very early. We've got lots of warts and lots of opportunities to come and, you know, clean things up and fix things up, and play around with hardware at home.
A
For sure, awesome. Jason, thank you so much for showing the actual physical boxes. Are those Intel NUCs? What was that hardware you were showing?
B
Oh, so this is some old hardware that I've had laying around for years, that I bought to do some bare metal demos with Kubernetes a few years ago. They're actually these little Liva mini PCs; they're not even as powerful as most NUCs. I think it's, like, a dual core with, like, four gigs of RAM, with a pretty low-end Celeron processor, but they're small and compact, and they seem to do the trick.
A
That's really cool, thank you for showing us this. See, this is dedication to a stream, I swear. That's awesome. I don't even have a green screen; I'm just, I'm doing it wrong! That's awesome. Manny, I'm sure I've seen posts from others doing this same thing with Raspberry Pis; very, very cool stuff.
A
I want to thank Manny and Jason for taking the time this hour, and Equinix Metal for lending them out to us to talk about Tinkerbell, Cluster API, and everything kind of bare metal. This has been really fun. I want to thank all of our audience out there for tuning in for the first-ever This Week in Cloud Native, on cloudnative.tv and, you know, the YouTube and LinkedIn channels as well. Please leave comments.
A
Let us know how we're doing; tell me that I'm doing something wrong and that I should probably position my camera in a more professional manner. I'm still learning. And I think next week we will have Paulo; he will be kind of the host and running things, so, like I said, we're gonna flip-flop here and there. It has now been a total of one hour, and I don't wanna take any more of anybody's time.