From YouTube: Running Swarm on RedHat OpenShift
Description
In this presentation from Day 3 of the #SwarmOrangeSummit, Nikola Jokić from DA PowerPlay presented a run-through of the technicalities of deploying Ethereum networks on Red Hat OpenShift solutions. The aim of this talk is to show possible steps to achieve production-ready Web3 software deployment.
A: Hi — hello. I forgot to put my name on the slide, but my name is Nikola Jokić. I'm CTO of Digital Assets PowerPlay. We are based in Croatia, but we are also quite present in the Slovenian scene, at least cooperation-wise. We are focused on algorithmic trading, but today we're not going to talk about that. We are going to talk a little bit about running Swarm on Red Hat OpenShift.
A: Has anybody heard about OpenShift? Perfect — nobody? Almost nobody. Has anybody heard about Docker? Yes. So you have Docker at the bottom layer. Then, if you have multiple Docker containers and you want to orchestrate them, you have Kubernetes. And if you want to deploy Kubernetes in a more serious environment, you will probably use OpenShift, which is basically a wrapper around the Kubernetes stack. So the talk today is about how to run basically any blockchain software in an enterprise-ready, production-deployable environment.
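The layering described here — Docker images at the bottom, Kubernetes orchestrating several copies of them, OpenShift wrapping Kubernetes — can be sketched with a minimal Kubernetes manifest. This is an illustrative example, not from the talk; the names and image are hypothetical:

```yaml
# A minimal Kubernetes Deployment: Kubernetes orchestrates several
# copies of one Docker image. OpenShift adds templates and its own
# objects (DeploymentConfig, Route, ...) on top of objects like this.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-node          # hypothetical name
spec:
  replicas: 3                 # run three identical containers
  selector:
    matchLabels:
      app: example-node
  template:
    metadata:
      labels:
        app: example-node
    spec:
      containers:
        - name: example-node
          image: example/image:latest   # any Docker image
```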
A: So how do you do production-ready deployments of Web3 software — Ethereum, Swarm, or Whisper? I think these are the three most important pieces of software that you can find today. I'm not sure our software is production-ready, but I think Red Hat OpenShift is. Red Hat is an open-source company. They started in 1993 with their own Linux distribution, and they're fully open source.
A: Their market cap is some billions, I think; they have around 11,000 employees, and they're focused on open-source but enterprise-ready solutions. You will mostly find them in banks or bigger institutions which have more serious deployment procedures. We deploy our platform on the OpenShift platform; some parts are bare metal. So what's the idea? When you develop some piece of software, that's only one part of the problem. Your software now works nicely, it passes tests, but you must somehow put it into production.
A: What you can do is maybe SSH into a machine, do a git pull, build it somehow, then run it. But that's not the way you should do it. Maybe you have some rich Jenkins procedure that will deploy the thing. But then there is the question of how production is switched: on one side I do some development, and I somehow need to swap the old production out and put my new dev branch, or whatever it is, into production. These kinds of questions are really the problem of anybody who deploys software into production.
A: OpenShift really helps here. Since we started to use it, we sped up maybe five times in terms of our efficiency. So what would be nice to have in your own deployments? It would be nice if it were easy to scale, so you could maybe click a plus button and get the ten more Ethereum nodes that you need — maybe because the traffic on your back end is a bit higher.
A: It would be nice to easily test different topologies — maybe to have an environment in which you can deploy some kind of network topology in your private network, and then test what happens when the network splits, or maybe what happens when some hard fork happens. If you're doing it manually, it's a rather tedious process. We were previously on DigitalOcean and AWS, and it was like: you go into these ten machines one by one.
A: Ideally, your deployment process would be infrastructure-independent, meaning that whatever kind of hardware you have, you can deploy on it. We have on-premise hardware and built everything up from the hardware layer, so we installed OpenShift on our own hardware. It's a non-trivial installation, but it's the way to go, I think. So what's in the technology stack? Red Hat Enterprise Linux.
A: If you are after more of an open-source solution, you should go with CentOS, which is basically an open-source fork of Red Hat Enterprise Linux; then Docker, Kubernetes, and OpenShift. There is some text here that you can read about what this field is all about. What surprised us is what you can do on OpenShift: you have ready templates to deploy almost any kind of software. If you need a Postgres database, you just click yes, yes, yes, and you get your own persistent Postgres database.
A: If you want some kind of Kafka cluster — if anybody uses Apache Kafka, there is the Strimzi project that Red Hat recently started; we just began working with them — it's basically yes, yes, yes, and you get tens of Kafka nodes connected and working together. So it's a really easy way to get more complex deployments of your software up and running, and to template them so you can reuse them in other projects or other environments. So what we did was an experiment. First we started with: let's get one Parity node.
A: We use the Parity client for development, so let's have it deployed. That template is open source, so you can see it. Then we started to experiment with slightly more advanced OpenShift possibilities. What we did is deploy one validator node in a proof-of-authority chain, set up three load-balanced secondary nodes, which do not mine, and one block explorer, so that requests get load-balanced across those secondary nodes, while the validator node is the one actually mining the blocks — or signing them.
A: Whatever you like to call it. What's nice about this? We'll go check out what the description of this deployment looks like, but basically you describe: I have some service; that service has a route to the outside; behind the service you will have three pods; each pod has limitations of X megabytes of RAM or this much processing power. The validator node has a bit more: it also has persistent storage, so you reserve some space on the hard drive, and then you can easily redeploy it.
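The deployment description just sketched — a service with a route, three limited pods, and persistent storage for the validator — lives in an OpenShift template. A hedged sketch of what such objects might contain; names and values are illustrative, not the talk's actual template:

```yaml
# Illustrative OpenShift fragment: three load-balanced secondary
# pods with resource limits, plus a persistent volume claim for
# the validator's chain data.
kind: DeploymentConfig
apiVersion: apps.openshift.io/v1
metadata:
  name: parity-secondary          # hypothetical name
spec:
  replicas: 3                     # the three secondary pods
  template:
    spec:
      containers:
        - name: parity
          image: dapowerplay/parity   # assumed image name
          resources:
            limits:
              memory: 512Mi       # "X megabytes of RAM"
              cpu: 500m           # "this much processing power"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: parity-validator-data     # storage for the validator
spec:
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 10Gi               # the reserved disk space
```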
A: What we can do now is look at how it works — let's see how it looks in OpenShift when you try to deploy it. So let's do this hands-on: this is the OpenShift user interface. It's the OpenShift Online solution, so you can open up an account — like on Amazon Web Services or something like that — and pay some money for the resources, but you can also install it locally. I recommend Minishift; it's like a small OpenShift that you can run locally.
A: Here you can do a bunch of things. You can start any of these items; behind each of them there is a Docker image with some description around it that describes how to run that image, and all of this is Red Hat-supported. What you pay Red Hat for is basically support: you can have somebody on the line — "please help, something is not working correctly in my system" — and that's what you pay for. The code itself is open source. So you can deploy a bunch of stuff.
A: You can also import your own templates. This is the one that we developed and open-sourced — a Parity chain starter, as we called it. You click next, there is a bit of a description of what it's all about, and then you have a configuration phase in which you can give the chain a name, give it a network ID, set the block reward, and define the step duration — if anybody has looked into the genesis JSON of the Kovan network, you will recognize these.
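The parameters mentioned — network ID, block reward, step duration — live in a Parity proof-of-authority chain spec, which is what a template like this renders. A hedged, abbreviated sketch; all values and the validator address are placeholders:

```json
{
  "name": "example-poa",
  "engine": {
    "authorityRound": {
      "params": {
        "stepDuration": "5",
        "validators": { "list": ["0x0000000000000000000000000000000000000000"] },
        "blockReward": "0xDE0B6B3A7640000"
      }
    }
  },
  "params": { "networkID": "0x2323" }
}
```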
A: You also set the RAM limit for the pod, because you can set limits so that if your project goes crazy, or has some kind of memory leak, it doesn't eat up the resources of the whole project but stays contained within its own limits. So that's kind of nice. You can even set your private key, so it immediately starts to mine blocks. When you click create, you basically wait a bit, and then we can go look at the deployment.
A: We have a template. Basically, we have a block explorer, which is forked from etherchain light or something like that, I think — there are links, so I can send you which block explorer it is — but it's an open-source solution that we just packed into a non-root Docker image based on Red Hat Enterprise Linux. What's also deployed here are the secondary nodes, the load-balanced ones, from the DA PowerPlay Parity 1.9.5 image.
A: We did our own, because almost everybody who writes Docker images today does so the root way: in the end, the user that starts the container is the root user, and although you're in a container, that is not the right way to do it. Even platforms like OpenShift will complain. So when you write your own Dockerfiles, please take care that at the end you switch to a non-root user and then fix the access rights for that user so the project can work.
A: We are talking with Parity about fixing their own CentOS image, but for now this one is enough. What we deployed is three pods, so three copies of the same Parity secondary node, and here is the validator node, the only one which actually signs the blocks. What we can also see are the services — you have different services.
A: You say: I have some service that I want to expose to the outside world, or maybe between those pods, and I say this service exposes port 3000 with some kind of selector; then inside it I define on which pods the service will load-balance and what the load-balancing rules are. You can define weights, so you send, say, 90 percent of the traffic to one pod. You can even do crazy things like having ten different REST APIs all load-balanced, although they are not the same project and don't expose the same thing.
A: Are the blocks up and running? The Internet is a bit slow, so while we give this some time to load, we can go check out config maps. Config maps are basically your configurations for the containers that you deploy. So we can look at the genesis that was defined: some step duration was defined during the template creation, some block reward was defined during the template creation — whatever you decide to expose.
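A config map of the kind shown could look roughly like this — a hedged sketch with placeholder name and values, showing how template parameters end up as files mounted into the pods:

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: parity-chain-spec          # hypothetical name
data:
  # the rendered genesis/chain spec, mounted into each Parity pod
  chain.json: |
    {
      "engine": {
        "authorityRound": {
          "params": { "stepDuration": "5", "blockReward": "0xDE0B6B3A7640000" }
        }
      }
    }
```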
A: Basically, whatever you set up in the initial configuration: if I edit this file, change some parameters, and deploy, it will basically redeploy the thing. You can pick between two types of redeployment: recreate, where it kills the old one and then brings the new one up, or a rolling deployment, in which you keep the current one alive and only then destroy the old one. Okay.
A: So we have the block explorer alive and running; basically no blocks are mined yet. I think the nodes are still looking for each other, although what we did is tell them that there are some reserved peers, so there is one peer that connects all the other nodes together.
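Parity takes reserved peers as a file of enode URLs passed with its `--reserved-peers` flag; a hedged sketch of how the one connecting peer might be configured (the node ID and host name are placeholders):

```
# reserved-peers file: one enode URL per line. Each node is started with
#   parity --reserved-peers /config/reserved-peers
# so every secondary connects to the validator, tying the network together.
enode://<128-hex-char-node-id>@parity-validator:30303
```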
A: So what's the idea here? You can see that you have really nice control over the whole deployment process. If you like, you can do something like this: go to a secondary node, click plus, and it scales to four. You can define rules: if the CPU usage goes up ten percent, scale to five — things like that. So this is currently the only easy way to start some Ethereum software on OpenShift.
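The "scale when CPU goes up" rule is a horizontal pod autoscaler; an illustrative fragment under assumed names:

```yaml
kind: HorizontalPodAutoscaler
apiVersion: autoscaling/v1
metadata:
  name: parity-secondary           # hypothetical name
spec:
  scaleTargetRef:                  # the object being scaled
    kind: DeploymentConfig
    name: parity-secondary
    apiVersion: apps.openshift.io/v1
  minReplicas: 3
  maxReplicas: 5                   # "scale to five"
  targetCPUUtilizationPercentage: 80
```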
A: We hope that the Red Hat guys will wake up and prepare more such software. So that's about it — I've scratched the surface a bit. Let's see, did it start to mine? Still nothing? Okay, let's not wait; let's go back. So how does the template look? Let's just check this thing out. First, the basics: this is the Dockerfile that builds the Parity node image. What we basically do is fetch their own prebuilt RPM.
A: We should really build this RPM inside the Docker image ourselves, but this is also okay. Then, at the end, we expose some ports and say USER 1001 — so it's not the root user — and the entrypoint is basically parity. Any other parameters that you pass to this Docker image will be applied to this entrypoint, although we can even change the entrypoint and things like that. So this is a non-root Docker image of Parity.
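The Dockerfile walked through here might be sketched roughly like this — a hedged reconstruction, not the project's actual file; the base image and RPM URL are placeholders:

```dockerfile
# Non-root Parity image on a Red Hat-family base, as described:
# install a prebuilt RPM, expose the ports, drop to a non-root user.
FROM centos:7

# fetch Parity's own prebuilt RPM (placeholder URL)
RUN yum install -y https://example.invalid/parity.rpm && yum clean all

# fix access rights so the non-root user can write its data directory
RUN mkdir -p /home/parity/.local/share/io.parity.ethereum \
    && chown -R 1001:0 /home/parity

EXPOSE 8545 30303

# switch away from root; OpenShift expects this
USER 1001

# extra `docker run` arguments are appended after this entrypoint
ENTRYPOINT ["parity"]
```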
A: [Audience question: do you have some kind of algorithms written up, or are you manually clicking?] Manually clicking? Okay. When you're in the algo-trading environment, you basically see that a bunch of guys are writing their own algorithms and collecting their own data, and that's basically a problem: I collect market data, you collect market data, this guy collects market data — basically the same data — and we have a problem, because then the market is not efficient. The data is not free, yet it is free at the exchange level, which is really disruptive.
A: So we somehow must get to this endgame: open market data. We must all collect it together, because it is public. And today, if you want to backtest some strategy on Bitcoin from 2009, you have two choices: you can either buy the data, which will cost you thousands of euros and which you cannot verify, or you can maybe ask a friend, or maybe send an email to an exchange. But basically you are on your own.
A: Although this data was once free, with each minute that the day goes by we have fewer and fewer trades collected, and we have a bigger problem going into the future. I believe Swarm is the answer for the storage layer of this problem. We need to somehow publicly store this data, and what else do we need? We need to somehow publicly deduplicate it, because if five of us see the same trade, ideally we would write it only once to the Swarm layer.
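The write-once property wanted here falls out of content addressing: if each observer keys a trade by the hash of its canonical bytes, identical trades map to the same key. A minimal Python sketch of the idea — plain SHA-256 and an in-memory dict stand in for Swarm's content-addressed chunk store:

```python
import hashlib
import json

# in-memory stand-in for a content-addressed store such as Swarm
store: dict[str, bytes] = {}

def canonical(trade: dict) -> bytes:
    # serialize deterministically so the same trade always produces
    # the same bytes, whichever observer reports it
    return json.dumps(trade, sort_keys=True, separators=(",", ":")).encode()

def put(trade: dict) -> str:
    data = canonical(trade)
    key = hashlib.sha256(data).hexdigest()  # address = hash of content
    store[key] = data                       # storing the same key again is a no-op
    return key

# five observers report the very same trade
trade = {"pair": "BTC/EUR", "price": 6500.0, "amount": 0.25, "ts": 1528000000}
keys = {put(trade) for _ in range(5)}

print(len(keys), len(store))   # one key, one stored copy
```

Five writes, one stored chunk — the deduplication comes from the addressing scheme itself, with no coordination between the observers.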
A: Now, an example of running Swarm on OpenShift — not going to happen. There is currently no Swarm on OpenShift. Or, as a Croatian rapper puts it: a no-go — it won't happen. Why? Because it's difficult to find the source code. Who can tell me where the source code of Swarm is? Viktor — you're the maintainer. Yeah, it's probably on some kind of branch on git, I think? Is it? I think so. Yes — it's difficult to detect which branches I should pull to get a stable build.
A: It is difficult to filter issues, because I really don't know what you guys consider an issue; and even if I think something is an issue, I really don't know where to file it. It's difficult to detect which branch to pull to get some feature. I've heard you guys are doing a great job — it's a serious, big piece of software that's being developed — but you're developing it inside another piece of software that has no real relationship to it.
A: It's like Ethereum and Swarm are two pillars with two totally different scopes in the same git repository — come on. And in the end it's difficult to collaborate. You know why? Because I cannot find the place to talk with you guys. You have Gitter channels, you have everything, but you don't have the core thing: you don't have the git repository.
A: I cannot really find the GitHub repo containing the Swarm code. I can find the go-ethereum repo, find the Ethereum source code, and then in some convoluted way find the source code of Swarm inside it. And then, when you build it, you must pull the whole Ethereum repository, you get five binaries as a result, and then you need to clear everything out and keep just the swarm binary if you want to run it standalone.
A: Yes — this is what I'm saying: get a separate, visible repo, because this is the one thing you need to do to speed this project up. Because then, as a CTO, I can point my director to it: look, this is Swarm, this is the source, he can verify it. Then he goes: okay, nice, we can set these guys to check out what Swarm is all about, because we can read the source code. Right now, it's really hard to even get to the source code.
A: Yeah, so you know — these kinds of issues. I really don't have any other objections to this project; I just think we need some exact point where everything is happening, and I have contributed my energy to thinking about this for the last six months. It's difficult. So: I think Swarm should be separate from Geth. I think Swarm should have defined build and deployment processes that are exact for Swarm. I think Swarm must manage its issues independently, because it is an independent project.
A: And I think Swarm has to provide an official Docker image, ready to be run, because developers today are all about containers. Five years ago it was virtualization; now it's containers — give me a Docker image and it's up in five seconds. I haven't even SSH'd into a machine for I don't know how long, because you get a Docker image and it just runs; all you do is apply configuration to it.
A: No, I'm not saying this is easy — this is a hard problem. What I'm saying is that each day we stay entangled with some other code base, it's going to get more difficult. And my attitude in development is: if something doesn't work, I take an axe, I just chop it down, and I build it again — because the current way is kind of a waste of energy.
A: So my message, at the end of the day: you did a really nice job with Swarm, and I think you need to refocus from implementing features first to helping the community help you, because there are a bunch of us. What can I offer? I can offer time, I can offer developers, I can offer storage — we have terabytes of storage free for Swarm — and I can offer some money, but I think money is the least of our problems for this project.
A: So please allow us to gather around the codebase, and please stop hiding, because there is a sentiment — let me just wrap this up. I talked with people at this conference, and when I talked with people I asked: do you have some issues with the git repo layout? Everybody was like: well, yeah, there is this clearly convoluted way. And when I asked why it is like that, a few people told me: you are doing it on purpose, you are flying under the radar. And I'm not sure.
B: Unfortunately, I have to disappoint you, but the truth is probably less mystical and closer to this: we just probably suck at certain things. Or — let's put it in a milder and more to-the-point, objective way — we haven't had the bandwidth for a proper community campaign, for evangelism.
C: I think at this point I do know where we can take this and where we can put it, and we can organize a little bit more. I definitely agree that the time of Viktor and the other core contributors is better spent continuing to do what they're doing, and we probably do need part of the community to step up and take on more of the community management — the organization, the maintenance, those smaller tasks that every engineer hates to do. It's also a really good way to orient yourself around the project.
D: The other thing I was going to add, though, is that sometimes the decision-makers — the people writing the code itself — are the most aware of breaking changes, or of things that become obsolete. So it would probably be good for them, when they are aware, to at least post something on the wiki page that says "this is now outdated" or whatever — because it may not always be possible — or at least notify the community manager, whoever that happens to be. Which sounds like Doug just volunteered for that.
E: Yeah, I just have a question on more of the technical, DevOps side, because I was using — not OpenShift, but Kubernetes — for a Swarm cluster, and all of these tools seem really great at hiding thousands of microservices and containers behind one load-balanced web interface, because that's what Web 2.0 needs. But when we want to run a whole cluster of Swarm nodes, we want each one of those Swarm nodes to be able to communicate in a peer-to-peer network, which means they have to have an individual identity.
E: You know — an address and a port — so they cannot just be copies of each other. So all of these great Kubernetes tools for automatic replication and scaling we couldn't use, because they just make identical copies. So we had a lot of replica sets with a replication factor of one, and then had fifty of those, because it just didn't fit into the Kubernetes mindset to have peer-to-peer nodes with individual identities. You know, as Nick Johnson told me: Kubernetes thinks cattle, not pets. So I was wondering, is OpenShift better at that?
A: This kind of problem is what OpenShift is aiming to solve, and basically is solving — it's a bit more convoluted, so it's not as easy as just pressing plus, plus, plus, but there are solutions: there are so-called StatefulSets, and more advanced things you can do. It's not ideal, and you cannot move all the manual work out of it. You know what you cannot move? You can easily move images, but you cannot easily move configurations. For that there is basically no solution.
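The StatefulSets mentioned in the answer give each replica a stable name and its own volume, which is what peer-to-peer nodes need instead of identical copies. An illustrative Kubernetes fragment with hypothetical names:

```yaml
# Pods come up as swarm-node-0, swarm-node-1, swarm-node-2 —
# each with stable DNS and its own persistent volume, so each
# peer keeps an individual identity across restarts.
kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: swarm-node                  # hypothetical name
spec:
  serviceName: swarm-node           # headless service for per-pod DNS
  replicas: 3
  selector:
    matchLabels:
      app: swarm-node
  template:
    metadata:
      labels:
        app: swarm-node
    spec:
      containers:
        - name: swarm
          image: example/swarm      # placeholder image
          ports:
            - containerPort: 30399  # assumed p2p port
  volumeClaimTemplates:             # one data volume per pod
    - metadata:
        name: data
      spec:
        accessModes: [ReadWriteOnce]
        resources:
          requests:
            storage: 10Gi
```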
A: If you change your configuration in the dev environment, you must either write a custom script to move it over or just manually retype it. So this is not a god-sent solution, but it does help you test and construct different topologies, then describe them and give them to somebody to easily replicate, because it's just JSON describing it. So I think there is value in it, and when you know what kind of deployment is optimal for you, you can easily push it to a production environment.
A: I have a bit more faith in Red Hat's setup of Linux than I have in my own, so I kind of like having that nice base the thing is running on. So this is not the god-sent solution; there are challenges exactly like that one. What happens when you have twenty thousand nodes, each with a different configuration?
A: Then, basically, you have thousands of config maps. It's similar with Apache Kafka: for each topic you get a config map, and at the end of the day you have something like five thousand files configuring your topics. There are challenges, but I think this really helps you build and iterate more easily — maybe not for development; there, maybe Minishift.
A: Yes — but when you decide to push it to production more seriously, then this really, really helps, even if you have manual work to set up and control configurations; there are always ways to automate it. And so, to finish things up: I opened up a swarm-openshift repository, which is currently empty, just initialized. This is the place where I will hopefully, in the next few days, deploy some things, because it is possible — it's not impossible, it's just convoluted. Unfortunately I'm here with my daughter, so I'll be heading home, but I would like to include you via online communication. And basically, thank you for your time. I hope that I wasn't too out there, but this is something that has been cooking in me for a long time, and I just needed to get it out.