Description
DevDays Europe
Onsite and online on 26-28 April 2022
Learn more about the conference: https://bit.ly/2ZmHx96
Join our next DevDays Europe conference on 26-28 April 2022, where you will learn about the latest tech advances from international experts flown in specifically for the event, and about recent changes in your local development community from your peers. This time, the conference will be held in a hybrid setting, allowing you to attend workshops and listen to expert talks on-site or online.
Hi everyone. I have been told that I should share first now that this section has started, so I'm doing that now. Let's also full-screen it; there we go. I hope everybody can see this. So hi, and welcome to my session, called How to Build Your Own Cloud Native Platform on an Infrastructure-as-a-Service Cloud in 2021.
So, a single-slide summary, so you know what you're in for. If you have been tasked with deploying a service on just a bare infrastructure-as-a-service cloud, which are very common here in Europe, or maybe even on premise, you might find that that environment is missing tons of the features and services you are used to if you normally deploy on, say, one of the three major clouds: logging, authentication, monitoring services, all that good stuff that you really want to depend on in a platform (this is DevDays, after all).
But on top of Kubernetes we can deploy cloud native services that help you bridge the gap, which gives you a modern cloud experience even in an infrastructure-as-a-service cloud or on premise. That is the idea for this talk and for the demo repo that I'm posting about in the public chat right now, so you can click on it.
Okay. So, quickly, about me and the company that I work for: my name is Lars Larsson, and you can click through to my LinkedIn.
Authentication: how do I even log into this? Does everybody in the organization just get, like, an admin account? Are there useful audit logs, that type of thing? Storage services: there are even clouds that don't have those. Database as a service?
Not everyone offers that either. I mean, the major ones in the EU, say OVH for instance, have it, but if you go to a random provider in your home country, whichever that might be, odds are it's not going to have, say, a managed Postgres. Log handling and analysis: since you probably have personally identifiable information in your logs, shipping them off to, say, Logz.io, which is a US-based company, is probably a violation of the GDPR.
But that's not really that interesting, because the metrics that you actually care about, especially as developers who want to understand the performance characteristics of the application you're developing, are of course things like: how many logged-in users do I currently have? How many requests per second am I serving? What type of requests am I serving? How far are they getting through the microservice-based architecture that I've deployed?
I've basically never seen a container image registry service either, even though Docker Hub has now started charging for Docker image hosting, essentially, with pull limits. Still nothing; I just don't see that in EU-based clouds. And then, of course, continuous delivery services, because all the major clouds have this sort of DevOps-pipeline type of thing, like Azure DevOps, or CodeCommit and all the Code-named services in AWS, right?
A
I
don't
see
a
whole
lot
of
that
going
on
in
the
eu,
so
will
kubernetes
come
and
save
everything,
no
and
as
developers,
I'm
sure
you've
heard
of
kubernetes,
and
if
your
managers
think
that
developers
is
now
called
devops
and
hey,
you
should
know
this
kubernetes
stuff,
I'm
not
sure
if
everybody
is
super
familiar
with
kubernetes,
but
it's
something
it's
a
great
base
layer
to
build
a
platform
on
top
of
so
by
itself.
What
kubernetes
addresses
is
that
it
orchestrates
containerized
applications.
So
this
is,
of
course,
great
for
developers
to
know
that.
Okay: if I just specify precisely what needs to go into my application, all the dependencies, it can be packaged as a single container, and I can ship that container through testing and quality assurance, all the way through staging and production. It's that packaging that is unchanging, and that's what gets deployed to production. Kubernetes has a great way of running such containerized applications in a clustered environment, and it does so in a way that is infrastructure agnostic: it runs anywhere.
It doesn't care too much about where you are deploying, because it has all these great drivers, essentially, that understand what features are offered by the cloud provider, if there is one, or by the virtual machine hypervisor, like vSphere in the VMware world. All you need to say is: here's an application, run it; or: this application needs a load balancer in front of it, so that it can be exposed correctly to the internet. It's then Kubernetes' problem to translate that into cloud-specific API calls, so you're managing less infrastructure.
So the goal for today is this fully featured platform: Kubernetes as the base layer, and on top we will add network security, so firewalls and certificate management. We'll add authentication and single sign-on features, so that for all the platform services we're adding, if they have some kind of web UI, this thing will log us in, and we can use it to log in via our Google accounts. The demo repo shows how to set that up.
We will also have a system for automated backups, a container image registry called Harbor, and then something for continuous delivery, which is called Argo CD. This all takes roughly half an hour to install if you use the scripts that I provide you with, and it could basically have been shaved down to 15 minutes; I just wanted to make it pedagogical, to show that first we install, and then we add authentication on top.
We could have shaved it down to 15 minutes by just doing the setup step once. So it's very easy to get started with. What I've done in that repository is target a provider called Exoscale, which is a European cloud provider. They do have their own managed Kubernetes service, so for that particular cloud it's realistically quite pointless to install your own Kubernetes, but we can use it to simulate just having a very basic infrastructure-as-a-service cloud provider.
Seriously? Yes, seriously, we can set this up in that short amount of time. That's the GitHub address, if you want to follow along. I'm sure these slides will be available in PDF form through the Pine platform that we're all on, and then that link will be clickable for you, if you have not entered it already via the chat. And by the way, I of course appreciate it if people star the repo and like it or whatever, but even more than that, I would appreciate issues on this repo.
So if you wind up using this, and you feel like, hey, you know, this thing could have been a bit better, this aspect needs improving, then write issues, because it would be so much fun to actually help people use this better. You're helping me help others by engaging with it, so thank you. If you do write issues: for instance, I know the README looks kind of ugly; that could be an issue. So, the base layer of this platform, as I've said a couple of times, will be Kubernetes.
Now, how do you get Kubernetes onto machines, whether they are virtual or physical, whether they're running on this cloud provider or that other cloud provider? We're relying entirely on something called Kubespray to take care of that. Kubespray is great because it's essentially a combination of Terraform, to provision your infrastructure, and a huge Ansible playbook to install Kubernetes.
I think it has something like 500 steps that it goes through, and it sets up a production-ready Kubernetes cluster for you. It's just marvelous at what it does, actually. Essentially, if you can SSH into it and it's a Linux machine (I should have put that in there as well), it's probable that Kubespray can install Kubernetes onto it.
So on premise, VMware virtual machines, OpenStack, small EU cloud providers: it's all going to work, I'm quite sure, as long as it's Ubuntu- or Red Hat-based, or Debian.
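To make the "SSH plus Linux" point concrete, this is roughly what a Kubespray inventory looks like. The host name, address, and group names below are illustrative (group names have changed between Kubespray releases), so check the sample inventory that ships with the version you use.

```yaml
# inventory/mycluster/hosts.yaml (illustrative single-node example)
all:
  hosts:
    node1:
      ansible_host: 203.0.113.10   # any Linux box you can SSH into
      ip: 203.0.113.10
  children:
    kube_control_plane:
      hosts:
        node1:
    kube_node:
      hosts:
        node1:
    etcd:
      hosts:
        node1:
    k8s_cluster:
      children:
        kube_control_plane:
        kube_node:
```

With an inventory like this in place, the whole installation is one playbook run, something like `ansible-playbook -i inventory/mycluster/hosts.yaml --become cluster.yml`.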
So when we set up Kubernetes using Kubespray, for the purposes of this demo and for this talk, we should choose a network provider, a Container Networking Interface provider, that has full support for network policies. Network policies are Kubernetes speak for firewall rules, essentially. Also, we're going to install the NGINX ingress controller, so that we can expose services to the outside internet via a standardized Kubernetes API construct called an Ingress; the ingress controller is the thing that will manage the various Ingresses that we will deploy.
Now, when it comes to network security and automation, what we can do is either do as in the demo repo, where we just set up a wildcard DNS entry and point it to our worker node. That will mean that if we set up a wildcard entry for, say, example.com, then anything under example.com, say demo or www, resolves to that same node.
Or we can set up something called ExternalDNS which, if we have a DNS provider that is supported by ExternalDNS, will automatically find any domain name that we try to expose services through and register those with our DNS provider. We can combine this with something called cert-manager, which will automatically provision certificates using the Let's Encrypt service and then also make sure to automatically renew these certificates when they expire, so they can be short-lived, which is of course great, because that's a security best practice.
Then, of course, we also want to use these network policies everywhere, so that components running within our Kubernetes cluster are protected. We can say, for instance: this is the order service's database.
Then only the order service should be allowed to connect to it, not, for instance, the frontend. That is good for security, and it's also something that ISO 27001 recommends, for instance, because they are very keen on network segregation.
So it's just a best practice, basically, to make sure that you allow traffic where it's supposed to be allowed, and you disallow it where it's not supposed to be allowed.
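As a sketch of what the order-service example could look like as a Kubernetes NetworkPolicy; the namespace, labels, and port below are made-up names for illustration:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: orders-db-ingress
  namespace: shop                  # hypothetical namespace
spec:
  podSelector:
    matchLabels:
      app: orders-db               # the database pods being protected
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: order-service   # only the order service may connect
      ports:
        - protocol: TCP
          port: 5432               # e.g. PostgreSQL
```

Any pod without the `app: order-service` label, such as the frontend, is then denied access to the database pods by this policy.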
And if you want to look at the ExternalDNS and cert-manager combo, that's what the example here is. As you see, the lines in bold basically say: here's a specifier. I don't know if you can see my mouse pointer, but the first bold line, the cert-manager.io cluster-issuer annotation, just says that cert-manager should use the production version of Let's Encrypt to get a more stable certificate. The second bolded line, the one that says host foo.example.com, says: please expose it using this domain name, and that is something that ExternalDNS will listen for and then go register that domain name with your registrar. And then there's the bolded block that starts with tls and has a bunch of stuff under it.
That block is just there to tell cert-manager where to put the certificate, and also to tell the Ingress where to look for it once it has been requested by cert-manager. That's all you need to do to register a domain name and also to make sure you have automatically renewed certificates. So that's pretty cool.
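The manifest on the slide is not reproduced in this transcript, but based on the description above, it would look roughly like this; the issuer name, service name, and port are illustrative:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: foo
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod  # use Let's Encrypt production
spec:
  rules:
    - host: foo.example.com        # ExternalDNS registers this name
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: foo
                port:
                  number: 80
  tls:
    - hosts:
        - foo.example.com
      secretName: foo-tls          # cert-manager stores the certificate here
```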
Authentication, now. Authentication is part of, just sort of, who gets to do what, right? So authentication is the "who", and then authorization is the "gets to do what".
Kubernetes itself has authorization features built in, via role-based access control, but in order for Kubernetes to be generic, it doesn't specify a specific authentication provider. However, it does work with the standard called OpenID Connect, which I'm sure many of you have implemented in your applications. For this cluster we're using Dex, which is a federated identity provider that does just that, so it will be compatible with Google accounts, Microsoft Active Directory, LDAP, SAML.
It says the SAML connector has been deprecated since a while ago, so maybe don't depend on Dex for SAML, sadly, but Google accounts, Microsoft Active Directory, and LDAP all work great, and we can configure and use Google accounts; that's what we do in the demo repo.
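Wiring an OpenID Connect provider like Dex into Kubernetes happens through the API server's OIDC flags; a sketch, where the issuer URL and client ID are placeholders for whatever your Dex deployment uses:

```
# kube-apiserver flags (values are placeholders)
--oidc-issuer-url=https://dex.example.com
--oidc-client-id=kubernetes
--oidc-username-claim=email
--oidc-groups-claim=groups
```

With those in place, role-based access control rules can be bound to the usernames and groups that Dex asserts.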
We might need a storage service. Now, it might seem a bit weird that we would install a storage service on our own, but that's, sadly, what you might wind up having to do if you have to deploy on premise, for instance, where you just get these physical machines and some server administrator says: you know, here, go nuts, you get to work with this; I keep these machines up and running, you do whatever you want with them.
So in a cloud we're used to having both block and object storage services: block services being virtual hard drives, where you can store files, and object services being like Amazon S3, where you can store individual files and you just don't care where they wind up on a hard drive. You don't really use an object storage service as a hard drive; you use it more as a big file server.
What we can do to give ourselves support for virtual hard drives is to install Rook, which we can think of as a smart, Kubernetes-compatible API on top of the network file system Ceph. Together, this gives us Kubernetes persistent volume support, persistent volumes being the name that Kubernetes uses for virtual hard drives, essentially.
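From an application's point of view, using this is just an ordinary PersistentVolumeClaim; the storage class name below depends on how Rook was configured, so treat it as a placeholder:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes:
    - ReadWriteOnce                  # a virtual hard drive for one pod at a time
  storageClassName: rook-ceph-block  # placeholder; check your Rook setup
  resources:
    requests:
      storage: 10Gi
```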
Now, Rook could also be our S3 object storage, because it can pretend to be an object storage service that behaves like Amazon S3, but I wouldn't depend on it for backups, because if you're storing your backups where your primary data also is, then it's not really a backup; it's more of a way to jump back in time, not really a way to actually save state somewhere safer than your original data.
It's not difficult to set up, though, if that's what you want to do. So Ceph is great: it's a performant, very well used and well known network file system, and what it will give you is the ability to just say: well, you know, this data is important.
A
One
thing
that
will
store
data
for
us
is,
of
course,
the
database.
Now
this
being
dev
days
again,
I'm
guessing
you
all
use
databases,
I
do
too,
but
I'm
also
guessing
that
you
are
perhaps
not
super
excited
about
managing
them
and
dealing
with
all
the
like
point-in-time
recovery
and
all
that
stuff
that
goes
into
a
healthy
database
service.
There's this great software called Vitess, which is the database that YouTube is running on, or at least was; now it's open source. So what we can do inside of Kubernetes is install something called an operator, which does the heavy lifting of system admin work for us. It will set up replication and clustering for us, do backups, point-in-time recovery, all that good stuff, and if the database becomes unavailable for some reason, it's rather obvious to it how to take care of that in an automated way.
Your application will write its logs to standard output; nothing demands that you yourself know how to forward them to any log handling service, because that is the job of a log forwarder. So the idea is that you install some software on each node in your Kubernetes cluster, and it's going to pick up all the logs that your application pods, instances, are emitting and push them to some logging destination. In this demo repo we use Filebeat and Elasticsearch to do this.
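A minimal sketch of that forwarder configuration, assuming Filebeat's Kubernetes autodiscovery is used as described; the Elasticsearch host is a placeholder:

```yaml
# filebeat.yml (sketch)
filebeat.autodiscover:
  providers:
    - type: kubernetes     # discover pods running on this node
      hints.enabled: true  # let pod annotations tune log collection
output.elasticsearch:
  hosts: ["https://elasticsearch.example.com:9200"]  # placeholder
```

Running this as a DaemonSet puts one forwarder on every node, which is exactly the "install some software on each node" idea above.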
Now, I did say, and this might of course be of interest to application developers, that there is application-aware, detailed monitoring. The monitoring stack of choice for Kubernetes in 2021 is Prometheus, which is a monitoring system that pulls data from various sources, and then Grafana to visualize that data. Prometheus has this concept of exporters, so something that will export data in a format that Prometheus will understand, and it's very easy to create that for your own application.
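In practice you would typically use a Prometheus client library, but to show how simple the scrape format is, here is a stdlib-only Python sketch; the metric names and values are made up for illustration:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical application counters; a real app updates these as it runs.
METRICS = {"myapp_logged_in_users": 17, "myapp_http_requests_total": 1234}

def render_metrics(metrics):
    """Render metrics in the Prometheus text exposition format."""
    lines = []
    for name, value in metrics.items():
        lines.append(f"# TYPE {name} gauge")
        lines.append(f"{name} {value}")
    return "\n".join(lines) + "\n"

class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != "/metrics":
            self.send_error(404)
            return
        body = render_metrics(METRICS).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; version=0.0.4")
        self.end_headers()
        self.wfile.write(body)

# To actually expose the endpoint for Prometheus to scrape:
# HTTPServer(("", 8000), MetricsHandler).serve_forever()
```

Prometheus is then configured to scrape that `/metrics` endpoint on its regular interval.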
A
And
then
all
you
need
to
do
is
you
need
to
update
those
values
and
prometheus
will
go
and
scrape
your
application
every
well,
it's
configurable
but
say
every
15
seconds
or
so
so
that
you
can
keep
track
of
these
values
and
we
can
install
both
and
we
can
integrate
them
with
dect
so
that
it's
easy
to
control.
Who
gets
to
see
what
in
grafana,
because
maybe
I
as
a
system,
admin
should
be
allowed
to
see
all
kinds
of
dashboards,
but
maybe
developers
should
only
be
allowed
to
see.
So all we need to do is install Velero and tell it where to store the backups. As I said previously, it's not done in the demo repo, because we don't depend on an external S3 service, but any S3-like object storage service will do. You can just tell Velero: hey, I want you to do backups of this every 24 hours, or whatever, and it's going to just do that for you.
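That "every 24 hours" instruction can be expressed declaratively as a Velero Schedule resource; the name, namespace selection, and retention below are illustrative:

```yaml
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: daily-backup
  namespace: velero
spec:
  schedule: "0 3 * * *"        # cron syntax: every day at 03:00
  template:
    includedNamespaces:
      - "*"                    # back up everything
    ttl: 720h                  # keep each backup for 30 days
```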
Once you've built a container image, it needs to be pushed somewhere, so it needs to reside in a registry, and what we use in this demo repo is something called Harbor. It offers not only a secure and really smart registry, but it also integrates with vulnerability scanning features. So think about what you say in your Dockerfile; I'm guessing many of you dockerize your applications.
Maybe you add some libraries, then maybe you pull in your requirements, whether that's Python requirements, Node.js packages, or Java JARs, or whatever. You probably have tons of dependencies in your application. And then, of course, you pull in your application code and you build it all, and then you deploy as little as possible of these dependencies, of course, plus your built code, and that's what you wind up with in your container image. Now, it's your code, of course, but that's on top; underneath you have all these dependencies.
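The "deploy as little as possible" idea is what multi-stage Docker builds are for; a sketch for a hypothetical Python app, where the file names and base images are illustrative:

```dockerfile
# Build stage: build tools and compilers are available here only.
FROM python:3.9 AS build
WORKDIR /app
COPY requirements.txt .
RUN pip install --prefix=/install -r requirements.txt

# Runtime stage: ships only the installed packages and your code.
FROM python:3.9-slim
COPY --from=build /install /usr/local
COPY . /app
WORKDIR /app
CMD ["python", "main.py"]
```

The final image then contains your code on top of the slim base plus runtime dependencies, which is also exactly the layer stack that a scanner inspects.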
You have that base operating system, and all those dependencies are probably mostly open source, but they probably have more users than you, and if so, there might be known vulnerabilities. All of that winds up in databases that can be queried by the tool called Trivy, which integrates with Harbor, and Harbor will then show you Trivy's analysis of your container image. So Trivy will scan your container image and say: oh, you're using this and this version, which has known vulnerabilities in it, so please, you need to update. Okay.
So what we can do is set up Harbor, and we can integrate it with Dex, and that means that we can control who gets to log in, who can view, who can push, who can do all that fun stuff. That also lets us look at these reports, and you can tell it to scan continuously, because maybe there's a long-lived microservice that you're not really updating that much; say that it's been just untouched for a month.
So again, we can use, say, our Google accounts to control who gets to actually do the final push to production, to approve it.
So, a summary of the talk you just listened to: this repo will give you a fully functional platform. It's fully open source, fully based on Kubernetes and cloud native tech, and it works no matter where the cloud that you're deploying to is located, because it doesn't even need to be a cloud; it could be on premise. You can go and get that demo repo code even if you're not deploying to Exoscale, which again, of course, is just an example.
If you think this stuff is interesting, you can also look at Compliant Kubernetes, which itself is also fully open source. It comes with more than this already included, set up in a much better way, and we offer it as a fully managed service as well. So if that stuff interests you, you can go to compliantkubernetes.io, where you will find, among other things, this image, which shows the stuff that we put into it.
How that project was born is that we realized, after countless customer engagements, that, okay, the GDPR, the Californian privacy act, and also various regulations that are industry specific all kind of point to the same set of controls, the kind of stuff that would also appear in ISO 27001 terms. So they specify processes that you should have: good processes around access control, logging, network security, and all that stuff. They never say exactly "use this piece of software", because that would make the standard outdated quickly.
We have, of course, included Elasticsearch. Since we offer this as a service, and Elasticsearch is now impossible to host as a service due to a license change, it's actually Open Distro for Elasticsearch, and it's going to be OpenSearch soon.
We also have Open Policy Agent running, protecting you against misconfiguration, and an intrusion detection system called Falco, from Sysdig. Otherwise it's very similar, but it's set up in a way so that, for instance, we have tamper-proof logging, which is something that ISO 27001 certification just loves, right? So, are there any questions? Because I see that we are in the final minutes of this talk.
Doesn't having a 500-step Ansible playbook imply that the underlying technology is a bit too overcomplicated in and of itself for average projects? And, on that note, thoughts on projects like k3s.io? So, k3s.io is something developed by Rancher, and it is a highly opinionated Kubernetes distribution that is a certified Kubernetes as well. There are conformance tests so that you can say whether your distribution is a Kubernetes or not, and k3s is a Kubernetes.
What they target is simpler setups, like edge location setups, so it makes certain technology choices that upstream Kubernetes has not made, so that it fits into smaller virtual machines, so less resources. But it also means that instead of having a distributed database that can survive the outage of one node quite easily, it instead uses SQLite to store stuff. And the 500-step Ansible playbook, well, I mean, it does sound sort of complicated.
I understand, but it's also about taking a bare virtual machine that just had Ubuntu installed on it, installing all kinds of components, securing them, and making them production ready. So I think the fact that it's divided into 500 steps is perhaps something that sounds complicated, but I'm also very happy that they do it.
So when would I recommend going with vanilla Kubernetes, and when with Compliant Kubernetes or managed Kubernetes on some of the EU-based providers? Well, what I think you have to think about is: is managing a Kubernetes cluster, a vanilla Kubernetes cluster that you manage yourself, really part of what I should be doing?
Is it in your organization's best interest to spend time on doing that? If you also need to have this cluster up and running 24/7, that probably means you have to pay people extra to stay up, or to have on-call duties during the night. And honestly, I think that if you consider: okay, we're a development company, we develop software, then managing Kubernetes is not really part of your core business.