From YouTube: Automate a K8s cluster on bare metal
Today we're going to talk about how you can manage Kubernetes on bare metal, some of the solutions that we have cooked up, and some of the common issues that you'll run into. First of all, server power management is something that we commonly see users being apprehensive about.
When we talk to users about moving onto bare metal, say from VMs or the cloud, or because they are seriously considering bare metal, power management is a very human process: a human has to figure out when a machine needs to be turned on or off, and whether it was maybe turned off accidentally.
A
A
A
A
This
is
a
a
common
need
when
you're
running
stateful
applications,
obviously-
and
so,
let's
just
start
off
with
talking
about
a
few
definitions
before
we
dive
into
things.
First of all, Sidero comes with the notion of a Server, aptly named: it is a custom resource definition in Kubernetes that represents your physical machines. Eventually you'll be able to run something like `kubectl get servers` and see all the servers registered with the system. On top of that we have the notion of a ServerClass, which is essentially a filter. You have all these disparate servers; how do you group them and treat them as a unit? For example, in AWS you have t4g smalls, t3 larges, m5 larges, and so on.
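As a rough sketch of what those resources look like, here is a Server as Sidero might register it, along with the command mentioned above. The UUID name and the hardware fields are illustrative assumptions; the exact schema depends on your Sidero version:

```yaml
# A Server CRD instance (sketch; fields are illustrative)
apiVersion: metal.sidero.dev/v1alpha1
kind: Server
metadata:
  # Servers are typically named by the machine's SMBIOS UUID
  name: 00000000-0000-0000-0000-d05099d33360
spec:
  accepted: false          # an operator accepts a server before it can be allocated
  cpu:
    manufacturer: Intel(R) Corporation
```

With servers registered, `kubectl get servers` lists them just like any other Kubernetes resource.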
Talos Linux is a 50-megabyte squashfs, so it's completely read-only, and it runs entirely out of RAM. You have a very simple, clean, and efficient way to run an operating system, and not only that, but it also manages Kubernetes for you. I should mention that Sidero Metal integrates very tightly with Talos Linux.
The process is going to look like this: first we need to create a Kubernetes cluster, because Sidero Metal requires one; it runs as an application on top of Kubernetes. Using our talosctl CLI, we can provision a Kubernetes cluster right here on our laptop using Docker, which is a very simple and quick way to get Kubernetes running on this network.
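That bootstrap step can be sketched with two commands. This assumes talosctl and Docker are installed locally, and the cluster name is an arbitrary choice:

```shell
# Provision a local Talos-based Kubernetes cluster inside Docker containers
talosctl cluster create --name management

# Verify the cluster is up before installing Sidero Metal on top of it
kubectl get nodes
```

This local Docker cluster becomes the management plane that Sidero Metal itself runs on.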
So at this point we have Sidero Metal installed. Sidero Metal comes with two Kubernetes Services: one UDP and one TCP. The UDP Service is for TFTP, and the TCP Service actually serves multiple purposes within the Sidero world. It's going to offer up PXE, specifically iPXE.
It's also going to offer up the Sidero Metal API, and it's going to expose a way for these machines to fetch their configuration files when they become part of a cluster. So at this point we need to expose these Services, and that is just the usual exercise of exposing a Kubernetes Service. Once we've done that, and we've told the machines via DHCP to boot off of this exposed service, we turn on a machine, it PXE boots, and Sidero Metal takes it from there.
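The DHCP hand-off described above can be sketched as a dnsmasq configuration. The IP addresses, filename, and port here are assumptions for illustration; they must match your network and the address where you exposed the Sidero services:

```
# dnsmasq sketch: point PXE clients at the exposed Sidero endpoint
dhcp-range=192.168.1.100,192.168.1.200,6h

# Legacy BIOS clients first chain-load iPXE over TFTP...
dhcp-match=set:ipxe,175
dhcp-boot=tag:!ipxe,undionly.kpxe,,192.168.1.5

# ...then iPXE-capable clients fetch the boot script over HTTP
dhcp-boot=tag:ipxe,http://192.168.1.5:8081/boot.ipxe
```

The two-stage match is needed because plain PXE firmware can only speak TFTP, while iPXE can fetch the boot script over HTTP.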
As I mentioned earlier, a ServerClass allows us to filter down all of these disparate systems, using the hardware data that the agent pulled off the machine and associated with the Server, and group them into classes. In this case we're going to filter them into what we're calling a g1-small-x86 and a g1-medium-x86. I should mention that the naming, and what qualifies as what, is very flexible; it's completely up to you.
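A ServerClass might look roughly like this. The qualifier fields shown are illustrative assumptions; check the Sidero API reference for the exact matching options your version supports:

```yaml
# Group servers whose reported hardware matches these qualifiers (sketch)
apiVersion: metal.sidero.dev/v1alpha1
kind: ServerClass
metadata:
  name: g1-small-x86
spec:
  qualifiers:
    cpu:
      - manufacturer: Intel(R) Corporation
    systemInformation:
      - manufacturer: Supermicro
```

Any accepted Server whose inventory matches the qualifiers is automatically placed into this class.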
So now that we have our ServerClasses and our Kubernetes cluster ready to go, we can define our cluster. As I mentioned earlier, we're going to define a metal cluster, push it to our local laptop cluster, and submit it, saying: I want a cluster with one control plane node and one worker node.
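Since Sidero Metal builds on Cluster API, submitting that definition can be sketched with clusterctl. This assumes the Sidero infrastructure provider is already initialized in the management cluster, and the cluster name is a placeholder:

```shell
# Generate a workload cluster definition with one control plane and one worker
clusterctl generate cluster my-cluster \
  --control-plane-machine-count 1 \
  --worker-machine-count 1 > my-cluster.yaml

# Submit it to the management cluster
kubectl apply -f my-cluster.yaml
```

Note that older clusterctl releases used `clusterctl config cluster` for the same step.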
In practice, you typically want three nodes for the control plane and as many workers as you think you might need. In this example, at least in the diagrams, I'm going to say: give me a Kubernetes cluster made up of one g1-small-x86 and one g1-medium-x86. What Sidero does is choose a server at random from those ServerClasses.
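The link between a cluster's machines and a ServerClass is made by referencing the class from the machine template, roughly like this. The API version and resource names are assumptions for illustration:

```yaml
# Machines stamped from this template draw servers from the g1-small-x86 pool (sketch)
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: MetalMachineTemplate
metadata:
  name: my-cluster-cp
spec:
  template:
    spec:
      serverClassRef:
        apiVersion: metal.sidero.dev/v1alpha1
        kind: ServerClass
        name: g1-small-x86
```

When a machine is needed, Sidero picks an unallocated server out of the referenced class.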
So now that we have our management cluster, I'm just going to run through a quick demo of what it would look like to create a workload cluster.
Okay, so just to recap what we've done so far: we've walked into a data center; we have bootstrapped our entire management plane from a laptop; we've installed Sidero Metal; we've exposed the PXE service; we've registered servers; we've classified those servers; and in the demo I showed how you can use those ServerClasses and Servers to create a workload cluster. A workload cluster is the cluster that we intend to run our actual services on, not the management cluster, but it is a cluster that is managed by our management cluster.
So, as you can see from this whole process, the common issues that I mentioned at the very beginning, things like server power management and server lifecycle management, are entirely handled by Sidero Metal. Sidero Metal knows when a machine should be turned on and when it should be turned off, and in the case that someone accidentally turns it off, it knows that the desired state is for the machine to be on, so it will go ahead and turn it back on.
Lifecycle management is also entirely automated and handled by Sidero Metal. Sidero Metal knows when it needs to PXE boot a machine and install Talos; it knows when the system needs to be registered with itself; and it knows when the operating system needs to be removed. In that process, the agent is sent back to the server and actually wipes it, preparing the disks so that the server becomes available within the pool again, clean and unallocated.
A
The
fact
that
we're
using
talos
linux
handles
all
of
the
os
management
that
we're
going
to
need
again,
our
goal
with
talos
is
for
you
to
forget
about
the
operating
system
and
to
deliver
kubernetes.
So
we
have
a
secure
hardened
api
driven
operating
system
that
aims
to
basically
remove
the
whole
operating
system
management
out
from
underneath
you
and
handle
it
itself.
Not only that, but Talos Linux also handles the management of Kubernetes. It's going to install Kubernetes according to best practices, and it's going to roll out the control plane changes that you submit to Talos Linux. When you want to perform an upgrade, it knows whether or not etcd will survive that upgrade, so we have safeguards around protecting the Kubernetes data. For that whole Kubernetes management story, what we like to say is that we give you a cloud-like experience while also giving you that flexibility.
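Those managed upgrades are driven through talosctl. The node IP, installer image, and version numbers below are placeholders; the image path and current versions vary by release:

```shell
# Upgrade the OS on one node; Talos stages the new image and reboots safely
talosctl --nodes 10.0.0.2 upgrade \
  --image ghcr.io/siderolabs/installer:v1.0.0

# Roll the Kubernetes control plane components to a new version,
# with the etcd health safeguards mentioned above
talosctl --nodes 10.0.0.2 upgrade-k8s --to 1.23.5
```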
The last two things that I listed in the common issues were networking and storage, and this is where the Sidero Labs team comes into play. We have a wealth of experience in getting Kubernetes running on bare metal: we know all the patterns, we know which tools to use, and we have reference architectures for different scenarios.
A
We
also
have
a
feature
with
entalos
which
allows
you
to
basically
do
load
balancing
of
the
kubernetes
control
plane
using
a
vip
which
is
handled
and
managed
by
talos
itself,
and
so
you
don't
need
any
extra
infrastructure
outside
of
that,
you
don't
need
load
balancers.
For
example,
now
storage,
rook
and
maya
store
work
great
with
tallows.
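The shared VIP is enabled in the Talos machine configuration of the control plane nodes, roughly as below. The interface name and IP are placeholders for your environment:

```yaml
# Talos machine config fragment: control plane nodes share this virtual IP,
# and Talos moves it between healthy nodes itself
machine:
  network:
    interfaces:
      - interface: eth0
        dhcp: true
        vip:
          ip: 192.168.1.150
```

Clients then reach the Kubernetes API at the VIP rather than at any single node's address.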
We commonly recommend these to our customers, Rook being the more battle-hardened one and MayaStor being the one that aligns better with our philosophy of making the operating system less of a thing. And then finally, load balancing: a common thing that we suggest to our users and customers is to use MetalLB, which will allow you to expose Kubernetes Services using BGP or Layer 2 (ARP).
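A minimal MetalLB setup in Layer 2 mode can be sketched as below. This uses the CRD-based configuration of recent MetalLB releases (older versions used a ConfigMap instead), and the address range is an assumption for your network:

```yaml
# Pool of addresses MetalLB may hand out to LoadBalancer Services
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.250
---
# Announce those addresses on the local network via ARP (Layer 2 mode)
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default
  namespace: metallb-system
spec:
  ipAddressPools:
    - default-pool
```

With this in place, creating a Service of type LoadBalancer gets it an IP from the pool automatically.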
We wanted to make managing bare metal clusters easy and reproducible, so that you can essentially print these clusters out within your data center. I should also add that, since Sidero Metal is built on top of Cluster API, not only can you create multiple clusters this way, but you can use this same management cluster to create Kubernetes clusters within AWS, GCP, Azure, DigitalOcean, Equinix Metal, and anywhere else there is a supported Cluster API infrastructure provider.
Okay, next question: what happens when you remove a machine from a cluster? Good question; I don't think I went over that. What happens is Sidero Metal knows that this machine has been deleted. If you remember from the demo, the machine is associated with a MetalMachine, which is in turn associated with a Server; you saw that in the server binding. So we know the machine has been deleted, which means the server has to be cleaned up, and Sidero Metal does the power management of that physical machine.
We basically tell the machine to PXE boot on the next boot and power cycle it. Sidero Metal sends our agent back to the server, the server boots it, and the agent knows it needs to clean up all the disks and get the node prepped so that it can become available again for the next cluster. It's as simple as that, really.