From YouTube: MetalĀ³: Kubernetes-Native Bare Metal Host Management
A: I think it's time for us to get started. Thanks to everyone who is joining us, and welcome to today's CNCF webinar, "Metal³: Kubernetes-Native Bare Metal Host Management." My name is [inaudible], and I'll be moderating today's webinar.
A: We would like to welcome our presenters today: Maël Kimmerlin, senior software engineer at Ericsson Software Technology; Feruzjon Muyassarov, a cloud developer at Ericsson Software Technology; and Pep Turró Mauri, senior software engineer at Red Hat. Before we start, a few housekeeping items: during the webinar, you are not able to talk as an attendee.
C: I hope you can see it. Hello, everyone, and thanks for joining this webinar. We are really happy to present Metal³, and thanks for giving us the chance to introduce the project. But before we jump into the actual topic and start talking about Metal³, just a small introduction about ourselves: my name is Feruzjon Muyassarov, and I'm working as a developer at Ericsson.
C: Great, thank you. So what is Metal³? Why do we need it? What problems does it solve? First of all, it's a bare metal host provisioning tool that allows you to manage your bare metal nodes through the Kubernetes APIs. You might be wondering why we need it, because there are already a bunch of existing tools to manage bare metal hosts, but the main difference, and the goal of Metal³, is to manage your bare metal nodes through Kubernetes-native APIs.
C: Metal³ is also self-hosted, meaning that all the custom controllers and all the building blocks run within your Kubernetes cluster, which avoids the need for extra tooling to manage Metal³ itself. You do, of course, need a Kubernetes cluster for Metal³, and that eliminates many problems that you might otherwise encounter.
C: In fact, it's a very young project, but currently we're seeing more and more interest from different communities and a lot of contributions, which is really nice. The last point is that it's a CNCF Sandbox project; it's been, I guess, a couple of months since we entered that stage.
C: During the talk you will hear the term Cluster API, or CAPI for short, quite a few times, so I think it makes sense to give you a brief introduction to what Cluster API is, so that you have a good idea for the next slides. Cluster API is a Kubernetes subproject focused on the cluster lifecycle. It allows you to manage your clusters in many different cloud environments, and not only cloud: it can even be bare metal.
C: All the components of Cluster API run within a Kubernetes cluster, and it manages your target clusters, which are running somewhere in the cloud. To start with Cluster API, you basically need some kind of Kubernetes cluster, and that cluster goes by different names that all mean the same thing: in some contexts you might hear "management" cluster, and in some contexts you might hear, even from us, "ephemeral" or "source" cluster.
C: Cluster API comes with its own client, called clusterctl, which you can use to spin up clusters in your desired environment. You usually start with `clusterctl init`, and there are different flags, but we focus on the infrastructure provider flag, for example something like `clusterctl init --infrastructure metal3`.
C: Let's take a small use case and see how it ends up in the bare metal infrastructure. Imagine that you want to create a small cluster with three nodes: one master node and two worker nodes, and you want those three nodes to be backed by your physical nodes, because we're in the context of bare metal. So what happens between the Kubernetes node and the actual physical server?
C: There are a couple of layers, or processes, involved. The first thing is that the Cluster API project comes with its own custom resources, and custom controllers of course, and one of those resources is called Machine, which represents your Kubernetes node. Machine is generic across all the providers, so it doesn't know about any particular provider, but it does hold a reference to the desired infrastructure.
C: Let's focus on Metal³ right now. After the Machine object has been created, Metal³ will take care of, let's say, creating the Metal3Machine object, which will be referenced by the Machine object. After that we have another controller, or operator to be exact, called the bare metal operator.
C: The bare metal operator actually knows how to talk to your underlying infrastructure, to your real physical machines. There is another object controlled by the bare metal operator, called BareMetalHost, and that BareMetalHost has an almost one-to-one mapping to your server. It holds a lot of data about your actual servers.
C: It stores a lot of information, for example the CPU, the RAM and that kind of thing, and you can manage everything through the BareMetalHost. That's basically the chain of objects that helps you create a node on bare metal infrastructure. Now, coming to Metal³ itself: let's focus on the Metal³ stack and see what Metal³ actually brings and how it really manages the bare metal infrastructure, the servers.
C: The first thing you need is, of course, a Kubernetes cluster, because Metal³ runs inside Kubernetes. You can start with a very minimalistic, small Kubernetes cluster, and on top of that you can install the bare metal operator, which, as I already mentioned, is the component of Metal³ that knows how to talk to your underlying infrastructure. Just by running the bare metal operator, you are already able to manage your servers.
C: But if you want to extend Metal³'s capabilities and have the features that are provided by Cluster API, then you will have to use another component of Metal³, called cluster-api-provider-metal3. This is the plugin I mentioned earlier: we basically plug it into Cluster API, and now Cluster API knows how to create nodes, for example bare metal nodes, or a bare metal cluster, through Metal³.
C: In the next slides we will briefly talk about the custom controllers that we've built in Metal³ and some of the objects, but before we jump in, a word about navigation: if you go to the Metal³ GitHub organization, you will see four pinned GitHub repos, which you might already recognize from the slides I showed earlier. The first one is metal3-docs, where you'll find a lot of design documents.
C: It's always growing, because we're getting more and more contributions and more features are being added. This is the place where we store documentation, like design docs, and we've also recently started writing user documentation for the whole project and extending the existing documents.
C: The second component I mentioned, the one that knows how to interact with the underlying infrastructure, is the bare metal operator, which lives in a separate GitHub repository, as you see in the middle here. Then you have cluster-api-provider-metal3, which is the plugin for the Cluster API project, and then we have another repository that we use for testing and development purposes, called metal3-dev-env, which we will talk about shortly in the next slides.
B: Sure, let's start then with the bare metal operator. As Feruzjon already mentioned, this is really the base building block of Metal³.
B: You can run the bare metal operator standalone, and it is really the base for Metal³. So how does it work with the hardware that we have under the hood? The bare metal operator has the representation that was already mentioned, the BareMetalHost, and the BareMetalHost represents the physical hardware. There are only two requirements to start managing that hardware. The first is to know all the details about your BMC, the baseboard management controller.
B: You need the credentials, you need the address, maybe the certificate, the CA, if you're using a specific certificate: anything that allows you to manage that node directly. That also implies that you need connectivity between the cluster where you're running the bare metal operator and the BMCs of your hardware. The second thing you need is the host MAC address.
B: The host MAC address is used to identify the node when it boots using Ironic, to know which BareMetalHost we're talking about. Once you have those two things and you put them in the BareMetalHost, you're ready to go, and you can kick in the deployment process.
B: So let's talk a bit about how the bare metal operator interacts with those different components. Feruzjon, you can put everything up at once.
B: There are actually two things behind this BareMetalHost: there's the BareMetalHost itself, the object that represents your hardware, but there's also a Secret attached to that BareMetalHost, and that Secret contains the username and the password for the BMC. Then, in the BareMetalHost CR, you have a field that references the credentials you are using, called credentialsName, and also the address of the BMC.
B: Then you put the MAC address that you want, and you can specify whether you want the node to be on or off. We're going to dive deeper into the fields right after this, so let's go to the next slide: you have the BareMetalHost, and then the bare metal operator keeps reconciling that object.
B: We are now going to look in detail at the fields of this BareMetalHost and how you can use it to manage your server. As with any other Kubernetes object, there is an apiVersion and a kind; the API group is metal3.io.
B: It's v1alpha1, because it's still under development, and the kind is BareMetalHost. Then you have the spec part of the object. The spec contains a lot of information: basically you declare the state you want your BareMetalHost to be in. The first thing you have to give, as we just discussed, is the BMC: the address, and the name of the Secret in which you store the credentials.
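Putting those pieces together, a minimal BareMetalHost and its BMC credentials Secret could be sketched like this (the names, MAC address, and BMC address are illustrative, not taken from the webinar):

```yaml
# Illustrative example only: names, addresses, and credentials are made up.
apiVersion: v1
kind: Secret
metadata:
  name: node-0-bmc-secret
type: Opaque
data:
  username: YWRtaW4=      # base64 of "admin"
  password: cGFzc3dvcmQ=  # base64 of "password"
---
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: node-0
spec:
  online: true
  bootMACAddress: "00:5a:91:3f:9a:bd"
  bmc:
    address: ipmi://192.168.111.1:6230
    credentialsName: node-0-bmc-secret
```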
B: Then you specify the boot MAC address of the node. That's the one MAC address the node will use to PXE boot, and it's going to be used to match which node is being booted to which BareMetalHost we're talking about.
B: Then you can specify the boot mode, whether you want UEFI or legacy. After that you have the consumerRef field. The consumerRef is the object that is currently consuming the BareMetalHost, if any. It doesn't have to be set, but if you're using the Cluster API Metal³ provider, it will be set to the actual Metal3Machine that is currently consuming the BareMetalHost.
B: The next field we have in the BareMetalHost is the image field. That's where you specify the image you want written to the disk of your hardware. It should be available over an HTTP request, and you need to provide the checksum, the type of the checksum, and the format that the image is using. When you specify that image, what's going to happen is that Ironic will boot a temporary image, an ISO that is called the IPA, the Ironic Python Agent.
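As a sketch, the image section of a BareMetalHost spec might look like the following (the URL and checksum locations are placeholders):

```yaml
# Illustrative fragment of a BareMetalHost spec; URLs are placeholders.
spec:
  image:
    url: http://172.22.0.1/images/CENTOS_8_NODE_IMAGE_K8S.qcow2
    checksum: http://172.22.0.1/images/CENTOS_8_NODE_IMAGE_K8S.qcow2.md5sum
    checksumType: md5
    format: qcow2
```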
B: The IPA will then download the image that you gave here, write it to the disk, and then reboot from disk, allowing the node to start with the OS provisioned directly from disk. It will also write the cloud-init data, and that's what we're coming to now: there are a couple of fields, the next one being the metaData.
B: That is basically a set of fields that you can give, which will be used to render the cloud-init user data and network data. For example, you can give the hostname of the node in the metaData, and any other field: it's a map, so you can put whatever you want in there.
B: Then there is the networkData. The networkData is a reference to a Secret, and that Secret contains the network configuration that will be applied by cloud-init on the node. You can do all the networking configuration from there if you don't want to do it through the user data in cloud-init.
B: The next field is the online field; it's basically a switch: is the node on or off? The following field is the userData. This contains all the cloud-init, let's say, core data given by the user, and it allows you to do a lot of configuration. It's really, really powerful: you can create users, you can run commands, you can install packages. It really allows you to do a lot of things.
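For instance, a userData Secret holding a small cloud-config, referenced from the BareMetalHost, might be sketched like this (the Secret name, user, and commands are all illustrative assumptions):

```yaml
# Illustrative only: a cloud-config user-data Secret for a node.
apiVersion: v1
kind: Secret
metadata:
  name: node-0-user-data
type: Opaque
stringData:
  userData: |
    #cloud-config
    users:
      - name: metal3
        ssh_authorized_keys:
          - ssh-rsa AAAA...   # your public key here
    packages:
      - vim
    runcmd:
      - [systemctl, enable, --now, kubelet]
```

The BareMetalHost spec would then point at this Secret through its userData reference.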
B: So then, once you have the node booted, cloud-init will kick in, which means the image needs to have cloud-init installed. Of course, we're talking about cloud-init now, but it could be exactly the same thing with Ignition. Cloud-init will kick in, and once it's started it will read the data that was written by Ironic on a specific part of the disk and use that to perform the setup of the node.
B: The last field here in the spec is called rootDeviceHints. When you're on physical hardware, you probably have multiple disks, and you probably want to pick one specifically to be the root, let's say the OS disk; it could even be RAID. So you give what are called root device hints: basically hints telling Ironic how to choose the disk on which it's going to write the image. You can give the device name.
B: You can give things like HCTL, for example, or some other identifier like this. There's quite a broad range of fields available there that allow you to specify exactly which disk you want, and Ironic has its own default way of selecting a disk: one that is writable and larger than four gigabytes.
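A rootDeviceHints fragment might look like the sketch below (the values are examples; the exact set of supported hints is defined by the BareMetalHost API of the release you run):

```yaml
# Illustrative fragment: pick the OS disk explicitly.
spec:
  rootDeviceHints:
    deviceName: /dev/sda
    # Other hints exist, for example:
    # hctl: "0:0:0:0"
    # minSizeGigabytes: 100
```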
B: But of course, if that default doesn't fit you and you want to be much more specific, you can give anything in those root device hints, and it's going to be matched against what's on the node; the disk that matches will be selected for writing the image. Then there is the status part of the object.
B: It contains a lot of information about your node. When the node is created, it will go through a process called introspection, and the introspection gathers all the data of the node; this data is put in a field called hardware. You will have, for example, the CPU, with details about what you have on your node; you will have something about the firmware; and you will have the hostname as it was at the time the data was gathered.
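As an illustration, the introspected data might surface in the status roughly like this (all values invented; the real field set comes from the BareMetalHost API):

```yaml
# Illustrative excerpt of a BareMetalHost status after introspection.
status:
  provisioning:
    state: ready
  poweredOn: true
  hardware:
    hostname: node-0
    ramMebibytes: 8192
    cpu:
      arch: x86_64
      count: 4
```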
B: Then there is another field, poweredOn, that indicates whether your server is actually turned on or off, and the rest will just reflect what you have in the spec, except that you can also find the state of your node: it will be either "ready", in the case where it's waiting to be used, or "provisioned".
B: That is, once it's being used and running the workload you want on top. So that's it for the BareMetalHost. Now we're going to move to cluster-api-provider-metal3 and the integration with Cluster API. This is basically the same slide that Feruzjon already presented, but with a bit more information about Cluster API, the controllers, and how things work together.
B: You can see here that you have different objects representing different things in Kubernetes. You have the Cluster, for example, which is the representation of a Kubernetes cluster, and then the Metal3Cluster, which is the infrastructure part of that Kubernetes cluster. Then you have the Machine, which represents the Kubernetes node; the Metal3Machine, which represents the actual infrastructure part of that machine; and then the BareMetalHost, which represents the hardware.
B: There is also the KubeadmConfig object, which contains the kubeadm configuration that will be used to provision this specific machine. Each of them is reconciled by a different controller.
B: We're now going to have a short look at what those objects are. The Cluster is the description of the Kubernetes cluster, and I would recommend you go to the Cluster API book to see more detail about that part, but we're going to focus on the Metal³ side. The Metal3Cluster is basically just a representation of the endpoint that you will have set up for your Kubernetes cluster.
B
Is
this
control
plane
endpoint
with
a
host
on
the
port?
So
that's
where
your
api
server
will
be
listening
once
your
cluster
is
up.
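A Metal3Cluster is correspondingly tiny; a sketch might look like this (the apiVersion and values are assumptions based on cluster-api-provider-metal3 conventions; check the release you run):

```yaml
# Illustrative Metal3Cluster: essentially just the control plane endpoint.
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
kind: Metal3Cluster
metadata:
  name: test1
spec:
  controlPlaneEndpoint:
    host: 192.168.111.249
    port: 6443
```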
Then we have the Metal3Machine, and this is the infrastructure part of the Machine. The Machine contains everything related to the Kubernetes part, and the Metal3Machine contains everything related to Metal³, so it contains, for example, the image.
B: That's exactly the same thing as we've seen in the BareMetalHost. Then it will contain the providerID, which is the same as on the Kubernetes node: you will see the exact same providerID here as you have on the Kubernetes node when it pops up. And in the status, you will find all the addresses of your node, so you know at which address you can reach it, and whether it's ready to go.
B: So this was a very short overview of the different objects that we have, and I think after that we can probably switch to the demo and show how everything works, and then I think Pep can take over.
D: Yeah, okay, so we will show this in action using one of the repos that Feruzjon mentioned. In the GitHub organization for Metal³ there is metal3-dev-env, which is a development environment for Metal³ that actually simulates bare metal hosts using virtual machines. This can get confusing, so I want to clarify that the target of this is bare metal: in the demo we will see virtual machines running, but they simulate bare metal hosts.
A quick overview of what the environment looks like: we're going to deploy a brand new Kubernetes cluster, a small one, with just one control plane member and two worker nodes, and all three of those will be on, well, bare metal, which will actually be simulated.
D: As is highlighted later, the deployment of this new cluster will be handled through a management cluster. In the developer environment, this management cluster is actually a small all-in-one cluster using minikube. In this management cluster with minikube we will have all the Metal³ components: the bare metal operator, CAPM3, and also Cluster API deployed. By the way, again, lots of acronyms here.
D: If you skip forward, the starting point of the demo will be the management cluster already deployed, with all the components in place and a few of the resources already in place; we will be deploying a few more during the demo. So, just to summarize, we have a management cluster. Actually, I think I can take over sharing the screen here.
D: Okay, I hope you're seeing this. By the way, this is a screenshot of the website, metal3.io; there is a section there called "try" that actually goes through this developer environment and explains how to run it. Also, this is a recorded video: I recorded it because, well, bare metal provisioning does take some time, time that we don't have here. This is a small diagram of the slides that you were seeing; the environment is actually the implementation of it.
D: You see a management cluster on minikube and a target cluster. There are four networks here: one used for provisioning, and the bare metal network, which is actually the access network, the public network, let's say, of the cluster. Let me pause quickly to see what we have here. We see a virtual machine manager showing the virtual machines that we have; minikube is not open at the moment, but you see it running: that's where the management cluster is.
D: kubectl on this host is configured to talk to the management cluster running on minikube, and we have two nodes, node-0 and node-1; the consoles of those nodes are open here on the left. Okay, just to confirm: as mentioned, we already have Metal³ deployed. This is just the capm3 namespace.
D: Okay, this is the starting point, and we have these two kind of empty bare metal nodes where we want to deploy a new Kubernetes cluster, a new bare metal Kubernetes cluster, using Metal³, from the management cluster. So first we start by declaring the Cluster: this is the Cluster API object, the Cluster that Maël described, and we will not get into its details.
D: We just declare that Cluster here; nothing much will happen other than the resources being created. Now we move on to actually start deploying the cluster. First, we start with the control plane, and again.
D: Some of those objects are Cluster API objects and we will not get into their details, but the control plane actually references a Metal³ object, a Metal3MachineTemplate. We didn't talk about templates, and we will not talk about them now, but imagine this is a kind of generator of Metal3Machines.
D: Right, and here we have the fields of the spec that make a Metal3Machine, like the image: we will be deploying CentOS on those systems. By the way, I didn't mention it, but what you see on the consoles at the moment is the Ironic Python Agent image, just waiting for instructions.
D: What we will immediately see start to happen is that one of the hosts will be picked as the target for the control plane. It's a single-node control plane, and we actually see here that node-0, the BareMetalHost that was declared, has been picked as the target for the control plane, and Ironic is now provisioning it; it's already rebooting.
This is, by the way, the part that has been highly accelerated; this does take time, time that we don't have. But if you look at the clock there on the right, you will see that it moves faster than reality. Anyway, we are already done: node-0 has been provisioned. Okay, let me just close this quickly here.
D: Node-0 has now been provisioned with the image that we specified, and this is what we see here as "provisioned". We see a Machine, which down here is a Cluster API object, marked as being provisioned, and in the middle between the two we have the Metal3Machine object. So the physical part, let's say the bare metal part, is already done, provisioned, but the Machine itself is not done yet; in other words, we still don't have a Kubernetes node there.
D: Taking a look after booting: cloud-init is still running, and you can see that, following instructions from the bare metal operator, it's actually running kubeadm to install the Kubernetes cluster here. Again, this is another part that has been highly accelerated; it does take a while to download the images. You can see Docker images here; well, it started the API server, controller manager, etcd, CoreDNS, et cetera. After a while, we will see containers actually starting to run the new cluster's control plane.
D: Let me explain this a bit: we have the API server already starting, okay, well, you get the idea. So, at this point, this system, BareMetalHost node-0, is already a single-node new Kubernetes cluster. It's not ready yet, since we don't have a CNI plugin deployed, but we can move on and actually grow the cluster by adding workers.
D: For this we will use this manifest here. Again, this is a Cluster API object, a MachineDeployment, that we will use. By the way, I didn't mention it: the name "test1" is the name of the cluster that we created; it was mentioned in the declaration of the cluster, and that's referenced here. Again, we will not get into the details of the fields here.
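For orientation, a worker MachineDeployment could be sketched roughly like this (the apiVersions, labels, and template names are assumptions based on Cluster API conventions, not taken verbatim from the demo):

```yaml
# Illustrative sketch of a worker MachineDeployment for cluster "test1".
apiVersion: cluster.x-k8s.io/v1alpha4
kind: MachineDeployment
metadata:
  name: test1
spec:
  clusterName: test1
  replicas: 1
  selector:
    matchLabels:
      cluster.x-k8s.io/cluster-name: test1
  template:
    metadata:
      labels:
        cluster.x-k8s.io/cluster-name: test1
    spec:
      clusterName: test1
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1alpha4
          kind: KubeadmConfigTemplate
          name: test1-workers
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
        kind: Metal3MachineTemplate
        name: test1-workers
```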
D: Just get the idea that, equivalently to what we did before, we also have a Metal3MachineTemplate that specifies what the machines that will actually become nodes will look like, and again we are using CentOS here. Applying this manifest, well, the consequences are relatively similar to what we saw with the control plane.
D: Actually, we only have one host left, one bare metal host left, node-1, which is what has been picked up here, and we're following basically the same process as before: the node is being provisioned, it will be imaged. Let me skip ahead a bit faster. Just to recap: at this point we have the two nodes provisioned from a physical, bare metal point of view. By the way, you can see here the providerID of the control plane machine, while the new worker node is, well, sorry, it is provisioned, but it's still being configured as a worker node.
D: Now I logged back into the control plane, so this is our new cluster, and we can see that even if the machine was already installed, that is, the physical machine, the BareMetalHost, was already provisioned, it was not yet a node. Now it is: it has been configured as a worker node of this new cluster. So now the cluster has two nodes. They are not ready because, as I mentioned, they don't have a CNI plugin; that's what I'm doing here.
D: Let me speed that up, but basically, after Cilium gets deployed, we will see that we have a fully functional, brand new bare metal Kubernetes cluster with those two nodes that we have here. Okay, both are ready: we have a cluster. And just to recap, this is a picture of the current situation: we have two bare metal hosts that have been provisioned.
D: We are not done yet, because we want to grow the cluster. You will have noticed that we have another bare metal, or fake bare metal, host here: node-2. It's switched off and has no relationship with the cluster yet. This is, let's say, a new server that we want to add as a new worker for our new cluster.
D: So the first thing we will do is declare it as a BareMetalHost. You saw in the presentation that the BareMetalHost object can contain a lot of information, but this is just the bare essentials.
D: We will see something that we didn't see with the previous ones, because when we started the demo, node-0 and node-1 were already registered and inspected. What we will see now is that, after creating this brand new BareMetalHost, it appears here in the list of bare metal hosts, and it's registering and inspecting.
D: This is Ironic at work: the bare metal operator noticed that we have a new BareMetalHost and is using Ironic to find out what the node looks like, and it's booting now. You can see that the node rebooted and Ironic is inspecting the hardware. After a bit, and again, this is another thing that has been accelerated for demo purposes, but let me accelerate it even more, oops, maybe a bit too much, okay: it became ready.
D: We reached that point with node-2 now. Okay, now we will use that new node to add an additional, second worker to the new cluster, and we will use a different trick here. A MachineDeployment is the Cluster API object that controls machines; it's the equivalent of a Deployment for Pods, but for Machines. The current cluster has a MachineDeployment with one replica, and as a reminder, physically we have three bare metal hosts, the last of them, node-2, just made ready. What we will do is scale that MachineDeployment and ask it to have two replicas, for example with something like `kubectl scale machinedeployment test1 --replicas=2`.
D: By declaring this new state of two replicas, Cluster API sees that we need a new machine, so this will cause a new Machine to be created, which will ask for a new Metal3Machine, which will consume a BareMetalHost, most likely the BareMetalHost that we just added, node-2. And this is what's happening here: you see the same process that we followed before; we see node-2 now being provisioned.
D: And that's basically the same process as the previous node deployment, and that's basically the gist of the demo. So this is a kind of summary of the final situation. Again, we have a brand new Kubernetes cluster with three nodes, three bare metal nodes.
I
think
yes
here
here,
I
think
we're
checking
how,
from
the
new
cluster
itself,
how
it
how
it
looks
like
well,
the
the
new
node
has
only
been
provisioned,
but
the
the
configuration
has
not
finished.
It's
just
a
matter
of
waiting
a
bit
more
as
as
it
happened
with
node
1
node
2
will
eventually
become
a
proper
note.
We
see
it
pop
up
here
as
an
already,
because
we
already
configured
the
cni
plug-in
it
will
soon.
B
D
C: Yeah, maybe I'll take it over. Thanks a lot, Pep.
C: Yep, can you see the slides? Yes? Yep. Just one thing to mention, which I also saw in the questions here in the chat and forgot to mention in the beginning: we're not shipping any other OpenStack services with Metal³. It's just standalone Ironic, and even when Metal³ is using Ironic, you don't have to manage Ironic itself.
C
You
don't
have
to
use
any
other
openstack
services
like
nova
or
whatever,
so
because
it's
just
like
decoupled
ironic
itself
that
we're
using
under
the
hood
to
manage
the
actual
pivotal
service
great.
C
But
contribution
can
be
different,
so
you
might
want
like
to
do
some
documentation
changes
that
we're
doing
right
now,
quite
a
lot.
So
if
you
have
some
skills
in
the
documentation
and
if
you
want
to
share
some
some
of
your
skills
and
do
some
contribution
to
the
metal
cube,
you
are
very
much
welcome.
C
You might also have feature requests that you think would be valuable to have in Metal³; those are really nice to hear as well. Or you might have found bugs to report, which the maintainers and contributors of Metal³ will of course try to fix first, and if you have your own fix for a bug, that's even better. You can also take part in the review
C
processes of the pull requests: share your comments and give reviews. Or you can help with talks, webinars, or presentations like the one we are doing right now, or write blog posts; we have the metal3.io website, where people are writing blog posts about different features of Metal³. And you might also have questions, or feedback that you want to give on Metal³.
C
That is also very much welcomed by the community, and if you want to know how to get started with contributing, you might be interested in checking the link below. On the last slide, I would say that we have a very diverse and really interesting community right now: we have contributors from different organizations across the world, to name a few, Red Hat, Ericsson, Mirantis, Dell, Fujitsu, and AT&T.
C
That happens every Wednesday; we use Zoom, so you will find the link below, and all the community meetings are recorded. Apart from the community meetings, we also have nice demos of Metal³ stored on the Metal³ YouTube channel. To visit the actual code, it's hosted on GitHub under the metal3-io organization.
C
We also have a nice website where, as I said, we are posting blog posts and updates on what's happening in Metal³, and you can also follow updates on Twitter. You will be able to find the slides at the link below, along with the Zoom link, the community meeting recordings on YouTube, and the Kubernetes Slack, where you can find the channel to join for Metal³. I think that's all from us for today, and I hope it was informative and interesting for you to listen to. Yep.
A
Some questions? Yes, please: if you have any questions, please fill them in. We have about nine minutes left for questions, so please feel free to ask anything. I think we already have a few of them asked, and if anyone wants to.
B
There were a couple of questions regarding what you deploy on top of the hosts that you are provisioning. Metal³ gives you two options. If you go with the Cluster API, you deploy a cluster directly, and one physical machine is one Kubernetes node; that is done directly this way. But if you don't use the Cluster API provider for Metal³, that is, the integration with Cluster API, then you can of course deploy anything you want on top of your node.
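As a rough sketch of that first option, the Cluster API path pairs each generic Cluster API object with a Metal³-specific one. Names, addresses, and API versions here are illustrative, not from the demo:

```yaml
apiVersion: cluster.x-k8s.io/v1alpha3
kind: Cluster
metadata:
  name: test1
  namespace: metal3
spec:
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    kind: Metal3Cluster        # the Metal³ provider's view of the cluster
    name: test1
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: Metal3Cluster
metadata:
  name: test1
  namespace: metal3
spec:
  controlPlaneEndpoint:
    host: 192.168.111.249      # hypothetical API server address
    port: 6443
  noCloudProvider: true
```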
B
So you could very well deploy a hypervisor, or whatever you want, on top, and then deploy your Kubernetes on top of that. You could even have a nested Metal³ level: you would use Metal³ first to provision those hardware nodes, then expose the virtual machines as fake hardware nodes, let's say, and then still use Metal³ to provision them, this time with the Cluster API. So it's quite flexible.
B
You can pretty much do anything on top of it. Then there were also questions about the operating systems that we can deploy. Ironic is pretty much only writing the image that is provided to the disk.
B
So in that regard, as long as you have an image, a disk image of the OS you are trying to install, it can cover anything. You just need that image available, and it can be downloaded via HTTP. The one limitation would be: if your image doesn't have cloud-init, or Ignition, or a similar mechanism, then you will have to build all the configuration into the image, so you will have to have a specific image per node.
B
Otherwise it will probably not work out of the box.
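To make the image discussion concrete, here is a hedged sketch of how a BareMetalHost names its disk image and first-boot configuration; the URLs and names are made up:

```yaml
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: node-2
  namespace: metal3
spec:
  online: true
  image:
    # Any disk image reachable over HTTP(S); the checksum is verified
    # before the image is written to disk.
    url: http://192.168.111.1/images/my-os.qcow2
    checksum: http://192.168.111.1/images/my-os.qcow2.md5sum
  userData:
    # Secret containing cloud-init (or Ignition) data for first boot;
    # without such a mechanism in the image, per-node images are needed.
    name: node-2-user-data
    namespace: metal3
```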
Then there were some questions about the communication between the Baremetal Operator and the BMC. The Baremetal Operator embeds something called Ironic, and Ironic is an OpenStack project for the management of hardware. Out of the box, Ironic already supports lots of different protocols and lots of different hardware.
B
So there's IPMI, Redfish, iLO, lots of different, even proprietary, protocols. You just specify which protocol you want to use when you specify the BMC of your host: you say whether it's Redfish, or IPMI, or whatever, and then the Baremetal Operator will configure Ironic to properly talk to your BMC.
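The protocol choice lives in the BMC address of the BareMetalHost: the URL scheme tells the Baremetal Operator which Ironic driver to use. A sketch with made-up addresses:

```yaml
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: node-2
  namespace: metal3
spec:
  online: true
  bootMACAddress: 00:11:22:33:44:55     # hypothetical
  bmc:
    # The scheme selects the protocol: ipmi://, redfish://, ilo4://,
    # idrac://, and so on.
    address: redfish://192.168.111.1:8000/redfish/v1/Systems/node-2
    credentialsName: node-2-bmc-secret  # Secret holding username/password
```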
B
And then there are some open questions that we should probably address, because they don't have an answer yet. So, yeah, go ahead, Maël. Sure, the first one was: what virtualization product do you have good experience with, and could you recommend looking at? Surely you know a lot of options, and it would be incredible to hear about them.
B
Actually, I don't think we know that many options. We are usually really just working with libvirt, pretty much, and then building the nodes directly on top of that. So sorry, no, unless maybe Pep or Feruzjon have more insight there.
B
I'm not exactly sure what you mean, in this case, by storage mechanism. If it's just about the disks on the node, then, well, it depends on the image you are using: it's the IPA, the Ironic Python Agent, that writes it to the disk, and that is a CentOS-based image.
B
So quite a few different types of storage mechanisms are supported. But if you are asking from the point of view of the Kubernetes cluster that is deployed on top of it, for example about persistent volumes, then that is outside of the scope of Metal³; that's rather the configuration of your target cluster.
D
Yeah, I think that's a question for me. The question is: does OpenShift plan to use Metal³ to deploy OpenShift over bare metal? The answer is:
D
What you would see, and by the way, you can always deploy OpenShift on bare metal provided you deploy your nodes yourselves, but I understand that the question is about automated deployment and management of bare metal, is that OpenShift uses it for, let's say, the worker part. For the control plane, OpenShift has its own installer that will take care of deploying the control plane, and in Metal³ terms,
D
you will see the hosts that represent the control plane flagged as externally provisioned. So, as in the demo, the control plane nodes for OpenShift would be externally provisioned, and then the deployment of the rest, the worker nodes, is handled by Metal³.
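That flag is an actual field on the BareMetalHost. A sketch of a control-plane host handed to Metal³ for management but not for provisioning, with illustrative names and address:

```yaml
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: control-plane-0
  namespace: openshift-machine-api
spec:
  # Something else (here, the OpenShift installer) already deployed this
  # host; Metal³ tracks and manages it but will not try to provision it.
  externallyProvisioned: true
  online: true
  bmc:
    address: ipmi://192.168.111.1:6230
    credentialsName: control-plane-0-bmc-secret
```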
B
All right, yeah, thank you. There was another question about the storage mechanism; the question was whether we can support things like Ceph or Rook. The answer is that it's outside of the scope. However, we are working on features now to make having this kind of storage deployed on your cluster feasible. For example, until now, we were always cleaning the disks so that we can upgrade.
B
Now we are working on a feature that would allow you to disable this cleaning, so that you can preserve those disks, Ceph disks for example, and you don't have too much data flowing through your cluster while the drives rebalance during the upgrade. So no, it's outside of the scope, but we are trying to be kind to it, to make sure that it works smoothly on top of a Metal³ deployment.
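For reference, in later Metal³ API versions this surfaced as an `automatedCleaningMode` field on the BareMetalHost; a sketch, assuming that API:

```yaml
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: storage-node-0
  namespace: metal3
spec:
  online: true
  # "metadata" (the default) wipes partition tables on deprovisioning;
  # "disabled" skips cleaning entirely, so e.g. Ceph OSD disks survive an
  # upgrade and data does not need to rebalance across the cluster.
  automatedCleaningMode: disabled
```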
B
A
All right, I think that is all the time we have for today to answer Q&A questions. So thanks a lot once again, Maël, Feruzjon, and Pep, for a great presentation and Q&A.
A
Again, thank you everyone for joining us today. A recording of the webinar and the slides will be online later today. We are looking forward to seeing you at a future CNCF webinar, and hopefully you have a great day. Thank you, everyone.