Description
The presentation will showcase how k0s can be used for running Kubernetes on edge/IoT devices, with its control plane separation (powered by Konnectivity) and easy air-gapped deployment.
Website: https://www.mirantis.com/
Organized by @Microsoft @kubermatic7173 @SysEleven
Thanks to our sponsors @CapgeminiGlobal, @gardenio, @sysdig, @SUSE, @anynines, @redhat, nginx, serve-u

So, a little bit about me: my name is Karen, I work for Mirantis and I write code. Originally I come from DevOps, and I turned into a developer. I'm an open source enthusiast; I started contributing to open source in 2016, and anything that can be automated, I say: let's go for it. My username on GitHub is trawler, and that's because I kind of love anything to do with boats and the sea, and it was available.

So what do we do? Well, I mean, what are the considerations regarding the Internet of Things, or what are called edge devices?

So obviously, security is a core concern. With Internet of Things devices, edge devices, think about devices that sit at the production level in a factory. These devices are usually isolated from external access. Maybe they have access to the outside, but probably not to the internet, probably through a VPN. So security is a huge concern for these devices.

Also, for this kind of network, we don't know what device is actually going to run our Kubernetes worker. That can be an AMD64 machine, but it can also be an ARMv7 machine or an ARMv6 machine.

We've had a network of Raspberry Pis, for example, so anything that we deploy on them has to be supported on multiple architectures and operating systems. And of course, since I gave the example of Raspberry Pis: computation, memory, all of these things are a consideration; they will be limited.

For k0s, we made the decision to completely isolate the control plane. What does that mean? It means that the control plane nodes are controllers.

We don't call them masters, because they're not Kubernetes masters: they don't run any workloads whatsoever. Their network is completely separate from the worker network, and that also means that the CNI network is not stretched; we don't have to worry about the network being the same for the controllers and the workers.
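
As a rough illustration of the two modes (standard k0s CLI commands; exact flags may vary between versions):

    # Start an isolated controller: control plane only, no kubelet,
    # no container runtime, so it can never schedule workloads.
    k0s controller

    # For comparison, a single-node setup has to opt back in explicitly:
    k0s controller --enable-worker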

So why not the usual master nodes, for example? Well, how do you control who has permission to deploy onto the master nodes? Yes, you can use network policies to block that, but none of it is readily available in Kubernetes; you have to use third-party tools for it. And in any case, we've also found that with some tools you cannot even override these settings. So that's why we made the decision to disallow deployment on the master nodes entirely.
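
For contrast, the usual upstream mechanism is a taint, which any pod can opt out of with a toleration; an illustration (not shown in the talk):

    # Keep workloads off the node with a taint...
    kubectl taint nodes master-1 node-role.kubernetes.io/master=:NoSchedule

    # ...but any pod that declares a matching toleration can still land
    # there. k0s controllers close that loophole by not running a
    # kubelet at all.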

Again, I already mentioned it for the CNI: if you want to have workloads on the masters, then of course you have to make sure that the CNI works across the cluster network and the worker network, and you need to have a direct API-server-to-kubelet connection when you have that kind of architecture.

The same goes for what I was saying about the architecture: a flat network is rather limiting. It blocks you from, let's say, deploying your workers in a completely different data center than your control plane. We also know that a lot of cloud providers don't like that; they want to have their control planes in a completely different data center than their worker nodes.

For that, we decided to utilize Konnectivity. Konnectivity is a Kubernetes API network proxy, and basically, as I said, it allows us to run workers on networks that are isolated from the control plane. The networks can even overlap: we can have overlapping IP address space between the controller network and the worker network. On top of that, the API itself has a secure tunnel from the controllers to the workers.

We decided to use gRPC tunnels, and that means that when we start the cluster, each of the controllers has a connection to the workers. The workers basically initialize the connection, from the agent to the controller, and each worker has a connection to each controller.
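
k0s wires this up automatically, but under the hood it is the standard Konnectivity setup: kube-apiserver is pointed at an egress selector configuration roughly like the following (socket path illustrative):

    apiVersion: apiserver.k8s.io/v1beta1
    kind: EgressSelectorConfiguration
    egressSelections:
    - name: cluster
      connection:
        proxyProtocol: GRPC
        transport:
          uds:
            # The API server dials konnectivity-server over this local
            # socket; the gRPC tunnels to the worker-side agents start there.
            udsName: /run/konnectivity-server/konnectivity-server.sock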

For anyone who's interested in reading up, here are just some of the links that you can look at after this talk and educate yourself with: we have the KEP, we have the repository, and a presentation from KubeCon 2019 that explains exactly what the Kubernetes API server proxy is for.

When tackling the differing architectures of IoT devices, we decided to package our Kubernetes distribution in one single binary.

So basically, that means that you can just upload the binary to your worker and start it with the right flags or the right configuration; it will already know how to connect to the controllers and deploy everything that is required in order to start the cluster. And of course, we make sure that we support a wide range of platforms; we even support Windows workers.
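
A minimal sketch of that flow (standard k0s commands; the file paths are placeholders):

    # On a controller: issue a join token for a new worker.
    k0s token create --role=worker > worker-token

    # On the worker: one binary, one service, nothing else to install.
    k0s install worker --token-file /path/to/worker-token
    k0s start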

As for the packaging itself: basically, what we do is take the vanilla Kubernetes binaries and package them into k0s. That of course gives us the flexibility to decide which components we're going to use, and whatever CVEs are currently out there, we can easily fix them in our own repository. And of course, it makes the deployment a lot easier.

Out of the box, k0s runs containerd as the container runtime. For networking, out of the box we support kube-router, because kube-router runs easily on ARMv7 machines, but we also support Calico. As a storage backend, we have support for etcd by default, but also SQLite and Kine.
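
Both choices are plain settings in the k0s cluster configuration; a trimmed sketch (field names from the k0s.k0sproject.io/v1beta1 ClusterConfig, values shown are the defaults):

    apiVersion: k0s.k0sproject.io/v1beta1
    kind: ClusterConfig
    spec:
      network:
        provider: kuberouter   # or: calico, custom
      storage:
        type: etcd             # or: kine, backed by e.g. SQLite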

As part of the packaging, and as part of the installation process for IoT devices, we also support air-gapped setups. We assume that a lot of these devices don't have internet access; therefore, we package all the images that are needed to start the cluster. With every release, we publish an air-gap package that you can then just upload to the cluster along with the k0s binary, and start from there.
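
The mechanics are roughly as follows (the bundle file name stands in for the per-release artifact):

    # List the images a given k0s version needs, e.g. to verify a bundle.
    k0s airgap list-images

    # Place the release's image bundle where k0s looks for it; the images
    # are imported into containerd on startup, so no registry is needed.
    mkdir -p /var/lib/k0s/images
    cp k0s-airgap-bundle-amd64 /var/lib/k0s/images/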

As for the minimum system requirements: from our tests, as you can see in the table, we don't need that many resources to get started. We need one CPU for the controller node, one CPU for the worker node, and about one gig of RAM for each of them.

Of course, the more you have the better, but these are, I think, relatively modest requirements. And now it's time for the demo. I've recorded the demo in advance, but I think we have enough time to do a live demo; let's hope the screen is going to be accommodating. Before the talk I already installed a controller on AWS — it's not even a controller right now, it's just a machine — and that's its IP address. And I have a Multipass VM on my laptop.

So obviously the controller does not have a direct connection to my VM, which is on a private network. I'll show you how I can deploy k0s on it, and for that I'm using a tool called k0sctl. And — sorry, can we make that bigger? Yeah, yeah.

That's how the configuration file looks. We basically configure a cluster: we configure a host, which is the controller, on this IP, and a worker with this IP. We're uploading an image bundle, and we tell the cluster that the controller is on an external address, because otherwise it will expect to use the cluster IP address, which is wrong in this scenario, because they don't live on the same network. So it has to know that it's an external address.
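
For reference, a k0sctl configuration along the lines of the one on screen (IP addresses and SSH details are placeholders):

    apiVersion: k0sctl.k0sproject.io/v1beta1
    kind: Cluster
    metadata:
      name: k0s-demo
    spec:
      hosts:
      - role: controller
        ssh:
          address: 203.0.113.10        # public AWS address
          user: ubuntu
      - role: worker
        ssh:
          address: 192.168.64.5        # private Multipass VM
          user: ubuntu
        files:
        - src: ./k0s-airgap-bundle-amd64   # image bundle upload
          dstDir: /var/lib/k0s/images/
      k0s:
        config:
          spec:
            api:
              # Workers are on a different network, so they must reach
              # the API via the controller's external address.
              externalAddress: 203.0.113.10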

So everything here is now being done: basically, it has downloaded the k0s binary to my laptop, and now it's uploading it to the hosts. The host in this case is only the worker — sorry, the controller also gets the binary, but the controller does not need the image bundle; only the worker needs the image bundle.

Now it's running the worker; that's going to take a few seconds. It already has a working kubeconfig, so I'm going to use it.
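
Fetching and using it is one command away (same k0sctl configuration file assumed):

    k0sctl kubeconfig > kubeconfig
    kubectl --kubeconfig kubeconfig get nodes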

Okay, and in the same way, I already have exec privileges into the shell-demo pod running on the worker.
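
That is, something along the lines of (the pod name is the one from the demo; the shell path depends on the image):

    kubectl exec -it shell-demo -- /bin/bash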