From YouTube: OpenShift Coffee Break: Single Node OpenShift
Description
Get your espresso ready for the EMEA OpenShift Coffee Break as we will walk you through Single Node OpenShift (SNO) 4.9, a single node cluster optimized for Edge computing use cases with Kubernetes. Join the discussion & live demos on SNO with Jaafar, Natale and Tero and our special guest Alessandro Rossi!
Twitch: https://red.ht/twitch
A: Oh, welcome, welcome back to OpenShift Coffee Break! Today we also have Tero, last minute. So I see the screen: you know, in the preview there are four of us, and then there are three others that are hidden. I don't know why, but it looks cool. I don't know if we can fix this. Let me check, not sure, but it still looks cool. I hope you can all see us, and welcome back to OpenShift Coffee Break.

A: Today we're going to talk with Tero, with Jaafar, and with Alessandro, our special guest, about Single Node OpenShift. You see, Tero is already having his coffee shot? I hope you had yours.

A: Okay, that's the way, man, right? Yeah, welcome back. Very hot topic today, because we're talking about Single Node OpenShift.

A: So I'm very happy to have you all today, because we're going to talk about Single Node OpenShift. This is something we were waiting for a long time in OpenShift, right?
D: And then we did a lot of hacks, like having a single node running virtual machines and running multi-node on top, so yeah.

D: It's a good thing to finally have it, but I can understand that it is pretty hard to get working properly, because of the chicken-and-egg problem: you need something to install from. In OpenShift 4 you use Kubernetes to install Kubernetes, and when you have a single node it's basically pretty hard, because you have to use that same single node to install the environment. But one question: is it for Movember, the good cause? I started on the first of November to raise this mustache.
B: You know what, I just shaved mine this morning. Yeah, maybe I missed that, so I have to, yeah.

A: A hipster one, yeah? It's not! You have always had this look and feel.
A: Well, I also hope so, and actually we have a demo, that's just a spoiler. But let me say hello to everyone in the chat as well. If you have any questions, please write them in the chat on YouTube and Twitch; we will bring your questions up during the show. Before we start, I would really like to introduce Alessandro. So Alessandro, who are you, why are you here, what do you want?
E: I joined in August, actually, as a solution architect, and I also worked on many side projects in the past around Kubernetes and OpenShift, like an installer and a provisioner that I worked on with my friend and colleague Valentino Berti, back when I was an instructor working for a partner. And yeah, I'm completely in love with both OpenShift and Kubernetes, so I'm very happy to see this single node happening.
A: Thank you for your introduction, Alessandro. And you know what, now that we have Tero, Jaafar and Alessandro, we already have some questions in the chat. So Zampognaro Contento, which is a nice name in Italian, is asking: is it possible to downgrade a running cluster to a single node, or do we need to set up a new one and migrate the workloads? So the question is: can we downgrade an existing highly available cluster to a single node?
A: Okay, cool, thanks. This is an interesting one, thanks for the question. You know what the name means in Italian? It means the happy bagpiper, and we are going into Christmas, so there are people playing in the streets; it's cool. Hello Adrian, hello everyone! So, Jaafar, Alessandro, we're talking about Single Node OpenShift. Can we recap a little bit: what is Single Node OpenShift?
B: Sure. So, traditionally I don't think we use slides in this OpenShift Coffee Break, but since we have a nice one, maybe I could use it. For people who have been with OpenShift for some time, you have probably heard of what we call the OpenShift all-in-one installation, back when we were on OpenShift 3.x.

B: All the services ran on one single machine, on one Linux host, so it was very convenient for many things. For us as solution architects it was convenient for our demo environments: it was very easy to set up and very easy to run on our local machines. At that time containers were just getting started, so there was not really that need for edge and such things. But over the last years a lot has happened in the Kubernetes ecosystem.

B: We saw massive adoption across all industries, and one of the industries that is now really into containerizing its services is the telco industry, for instance, where we have containerized network functions, what we call CNFs, and such things. And when you were trying to deploy those services on an OpenShift cluster, the first pattern, I would say, was that you needed a full-flavored OpenShift cluster, with three control plane nodes and at least two workers.
B: Then there was the compact cluster, which was the ability to have three nodes that play both the roles of masters and workers. That was already a way to address, for example, the regional data center, or maybe some offices where you want less infrastructure to manage for your OpenShift and Kubernetes workloads. And one of the big asks we had from customers was to be able to run all of that in a one-node pattern: for example, for edge, where you want to have your workloads really close to your customers or to your local branches, etc.

B: So engineering worked on that. It was not easy because, as Tero mentioned, we have automated a lot of things in the OpenShift installation, and one of them is what we call the bootstrapping process. In the bootstrapping process you use an ephemeral external machine that starts all the containerized services that configure your OpenShift cluster, so basically you use containers to deploy containers. When you are doing all of that inside the same VM, there are some weird things that can happen: you have the load balancers.
B: You have some VIPs that are hosted on the bootstrap machine while you are bootstrapping the cluster, and then they will be held on the masters or the workers. So it's a complex process, and luckily engineering worked very hard on it, and they have been able to make it work on one instance, where everything happens inside the same node.

B: So you do what we call in-place bootstrapping, and then the node becomes a master slash worker. It's not really a worker; it's a master that is schedulable, basically. So that's the single node cluster in a nutshell. Some people have already heard about CodeReady Containers. CodeReady Containers is, I would say, the developer-friendly flavor of OpenShift, but it's not a full-fledged OpenShift instance, whereas the single node cluster really is one.
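A minimal sketch of how you would see this "schedulable master" shape from the CLI. The `oc` calls need a live cluster, so they are shown as comments; the label names are the standard Kubernetes node-role labels:

```shell
# On a live SNO these commands (cluster required, so commented out) would show
# one node carrying both roles, and mastersSchedulable set to true:
# oc get nodes
# oc get schedulers.config.openshift.io cluster -o jsonpath='{.spec.mastersSchedulable}'

# The two role labels that end up on the single node:
MASTER_LABEL="node-role.kubernetes.io/master"
WORKER_LABEL="node-role.kubernetes.io/worker"
echo "$MASTER_LABEL $WORKER_LABEL"
```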
B: You get all the features of a full OpenShift cluster, with things like metering, etc., that you don't get with CodeReady Containers. And it's upgradable: that's great news compared to the other, developer-friendly pattern. So that's, I would say, one of the biggest differences.
A: Thanks for this explanation. I think you already answered one of the questions we had, which was: are single node clusters upgradable? I guess yes, you said so during the talk. There are two other questions; maybe Tero and Alessandro can also help answer. One is: is there a list of supported host distributions for Single Node OpenShift? And for this I guess it's only one, no? It's the same concept as for OpenShift: there's only one OS. Correct me.
B: So I believe OpenShift SNO is supported on bare metal installations. As for other architectures, I think for the moment it's x86, but I have to check, actually, the supported platforms. Maybe if we look at the assisted installer we can get a hint on what's supported.
B: Yeah, so again, keep in mind that this is fairly new. Single Node OpenShift has been available since OpenShift 4.9, so I think it was released two weeks ago or something like that. I don't know exactly when the GA date for 4.9 was, but it was somewhere around mid-October, I think.
A: It was announced also at KubeCon, so it's pretty fresh; general availability came together with 4.9.
B: Exactly, yeah. So we will have more hands-on experience with it as we move forward, but today it's something a bit fresh, even for us, I would say. Anyway, I've been waiting for this for a long time, because, again, it's easier to have, you know, your own OpenShift environment running in an all-in-one pattern. And this is aimed toward production.

B: So this is something that customers can use to have their real workloads running in production on one single OpenShift node, contrary to CodeReady Containers, which is, you know, just for development purposes.
E: Yeah. There is another interesting question, regarding the subscription model for Single Node OpenShift: whether the subscription model will be the same as the classic OpenShift subscription. So how does it work?
D: So, are there really use cases anymore for remote workers, now that there is Single Node OpenShift?
B: So one big difference, I would say, is that if you are in a disconnected environment, you can run a Single Node OpenShift, whereas the remote worker will not work, because the remote worker of course needs the communication back to the control plane and to the registry, etc. So the remote worker can be useful in some situations where, for example, you already have a central cluster.

B: With an existing central cluster you can extend capabilities to remote locations: you can pop up nodes and remove them, but you are still managing your cluster centrally. You have one cluster that you manage, and you are adding and removing nodes. So in terms of management, say you have a hundred facilities: if you deploy remote workers, and they are compliant with the networking requirements and such, maybe it's easier to manage that than a hundred separate clusters.
B: That's in theory, though, because single node clusters can also be managed centrally through ACM, which is our Advanced Cluster Management solution that runs on OpenShift. So you can also have one single configuration for your SNOs that is replicated across your 100 SNO clusters. So basically, I would say that the biggest difference is this.

B: The remote workers will need proper networking to be reliable and to have a reliable connection with the control plane, even if, of course, we have made a lot of improvements compared to a traditional deployment. Those are, I would say, the biggest differences. Maybe you guys see other ones, but those are the main ones I see.
B: So it's a SaaS-based portal, I would say, that Red Hat has created for customers to be able to manage their subscriptions and their OpenShift clusters, and they added support for single node clusters as a tech preview. That's basically what I used, actually, to deploy my own instance.

B: So if you want, guys, if there are no more questions, I can walk you through the process of doing that. I'm not going to do a full installation, but I can show you some of the main steps and also explain some of the prereqs, like what IP addresses and DNS host names are required, and such things.
A: Yeah, that's definitely very interesting, go for it. We have a couple of questions in the chat we'll try to answer, also during the chat. One interesting one while you switch to the demo:

A: We had a question like: is this available on OKD? And yes, it's also available on OKD. I linked it in the chat.
A: Another good question is: is the ARM architecture supported for Single Node OpenShift? Because I think, in the end, the edge use case is going to be multi-arch. But I don't remember if ARM is still, I think it's in dev preview somewhere, but I'm not sure it's supported for bare metal. I don't know if you have more info about it.
B: Support for ARM on OpenShift, like, it's a clear target architecture. So for SNO we need to check where it stands.

A: We need to check, you know, a single node ARM machine on bare metal. I think there's no support yet, but technologically speaking, all the containers and all the binaries are available. I think it's just not supported yet. Yes.
B: Yeah, but a mainframe could host maybe thousands of SNOs. So you would have a whole data center in there, okay. So now we are looking at console.redhat.com, which is, as I mentioned, the portal that our customers can use to manage their subscriptions and their OpenShift clusters. So what I did here is I just went to console.redhat.com.
B: I selected the Clusters section, and what I do now is go to Create cluster. From here you have different options, and each one will walk you through the steps you need to install OpenShift. You can see there are managed services where you can run the installation in the cloud.

B: You can also run OpenShift on your own data center, and that's the option we are going to use. So I'm going to go to Create cluster, okay, and then it asks you for the details that you want for that installation.
B: So we will name it "coffee break", under a domain I happen to own, so I can fool around a little bit with it. And the interesting thing here is that you can choose a single node installation.

B: What this installer does is basically generate an ISO image that you can then boot on your machine, or in your virtualized environment; it talks back to the management service, and it automatically configures the instance for you. So it's very, very nice to have that. Okay, we see that we have OpenShift 4.9, I go Next, and, as you can see here, you can generate a discovery ISO.
B: So there are three important requirements. If you are familiar with OpenShift 4, there are two endpoints that need to be configured in DNS, and the first one is the API: basically api dot cluster name dot domain.

B: The API is the endpoint that you're going to use to talk to the Kubernetes and OpenShift API, and the second endpoint is the wildcard entry that is added to all the routes that we create: a star, for example *.apps dot cluster name dot domain. Those two DNS endpoints need fixed IPs that you reserve, and they will be managed by the IP failover services on the installation itself.
B: So you will have those floating IPs that are managed by the OpenShift single node, but you also need to redirect the traffic: you need to reserve a fixed IP address for the host. This is the part that is a little bit different from the traditional installation of OpenShift.
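To make the DNS prerequisite concrete, here is a small sketch. The cluster name and base domain are made-up placeholders, not the ones from the demo:

```shell
# Hypothetical values -- substitute your own cluster name and base domain.
CLUSTER_NAME="coffee-break"
BASE_DOMAIN="example.com"

# The two DNS records a single-node install needs; both must resolve to the
# fixed IP you reserve for the SNO host itself:
API_RECORD="api.${CLUSTER_NAME}.${BASE_DOMAIN}"
APPS_RECORD="*.apps.${CLUSTER_NAME}.${BASE_DOMAIN}"
echo "$API_RECORD"
echo "$APPS_RECORD"

# Sanity check once the records exist (needs real DNS, so commented out):
# dig +short "api.${CLUSTER_NAME}.${BASE_DOMAIN}"
```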
B: So here you have the API and the wildcard, which are two floating IPs that you will get, and then you will have a fixed IP address that you reserve for the SNO, basically. So once you generate the discovery ISO, it asks you whether you want a full image, meaning it's going to be about one gig, or a minimal image.

B: The minimal image is, I think, 100 megs or something like that, and once you boot it, it pulls the information that it needs. And if you want to be able to troubleshoot and connect to your SNO, the same way as we do with OpenShift, you can upload your SSH key here, and it will be added to the SSH key store on your OpenShift SNO.
B: No, a bit more, maybe one hour to have the process completed. And the good thing is that you have a nice visual UI for the progress: you see all the steps, you can access the logs. So it's a very nice experience to do it this way. Basically, what happens afterwards is that the node registers itself here, and then you see the progress in this UI. So, fairly simple.
B: I really liked the experience, and I have one here that was installed. So you can see the link here, you can download your kubeconfig, you can check the settings, etc. A very nice experience.
D: But then, if you do the full image, the one-gig one, is it possible to run in an air-gapped environment, or do you always need connectivity to the upstream console?

B: So if you are using the service, the assisted installer, I think even when you are running the full image, because that's the installation that I used, you still need network connectivity to this endpoint.
E: All right. Another good thing coming from the installer: sometimes, one of the obstacles found by customers, and also by people who just wanted to try it, was that it was a bit complex, let me say, to just get everything up and running, to try it, destroy the cluster, and then recreate the cluster. So it's nice to have something you can just pick up and start from scratch with.
B: So honestly, it was a great experience. I didn't plan to walk you through the installation, but maybe we can do that in another session or something, because, you know, it takes some time. But basically, it was never that easy, at least for me, to install an OpenShift cluster. Of course, we have a lot of ways to do that: you have the YAML files, you have openshift-install, you have ACM. All of that works perfectly fine, and we've done all of that before.

B: But now, all I really did was go to the assisted installer. So again, this is in tech preview, but it works, and it's going to be supported soon. You generate the ISO, the OS boots up, talks back to the cloud service, and then it gives you a pop-up asking for information: what's the API VIP, what's the wildcard VIP, what's the fixed IP, etc. And once you enter that information, it asks whether you want to set up additional disks, and so on.
B: Okay, so enough talking about the installer. Here we are looking at the SNO. How can we make sure? Basically, you go here and you see that it's one single node that has both roles: it's master and worker.

B: But if you look at the cluster settings, you see that there are no dedicated workers, so it's one master that is schedulable, and it plays the role of a worker. And the nice thing here is, you can see that it can be upgraded: there's an upgrade available. I'm not going to do it now, because we don't know what can happen.
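The same update that appears in the cluster settings page can be driven from the CLI. This is a sketch; the `oc` calls need a connected, running cluster, so they are comments:

```shell
# List and trigger updates from the CLI instead of the web console:
# oc adm upgrade                 # shows the available update versions
# oc adm upgrade --to-latest     # starts the update to the newest available one

# Either way the cluster-version operator does the work; you can watch it via:
CVO_RESOURCE="clusterversion/version"
echo "watch progress with: oc get ${CVO_RESOURCE} -w"
```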
B: We want to show you the thing working, and not wait for an installation to happen in the background. So basically, you have all the features that you have with a traditional OpenShift cluster.

B: You can see here in the overview that the metering is working, you can use storage classes, etcetera. Of course, there's an indication here that there's no high availability, because we are in an SNO pattern, but other than that, everything looks exactly the same as OpenShift. And since we are on OpenShift 4.9, you can see that some changes happened in the UI.
B: Very good question. So what I did on this one is just, you know, something for my home lab, not something to use in a production environment: I set up an NFS storage class using the NFS provisioner, and you can see that I have a storage class here. But you can create other storage classes too.

B: There is also the Local Storage Operator, and this is probably the one you could use if you wanted everything running on one single node. Basically, you need a free, unpartitioned disk on your machine, and once you configure the Local Storage Operator, you can use that unmounted disk as the local volume provisioner for your OpenShift cluster.
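As a sketch of what that Local Storage Operator configuration looks like: the manifest below follows the documented LocalVolume shape, but the device path, names, and storage class are illustrative assumptions, not the exact ones from the demo:

```shell
# Generate a LocalVolume manifest for one free, unpartitioned disk.
# /dev/sdb is a placeholder device; on a live cluster you would pipe
# this into: oc apply -f -
MANIFEST=$(cat <<'EOF'
apiVersion: local.storage.openshift.io/v1
kind: LocalVolume
metadata:
  name: local-disks
  namespace: openshift-local-storage
spec:
  storageClassDevices:
    - storageClassName: local-sc
      volumeMode: Filesystem
      devicePaths:
        - /dev/sdb
EOF
)
echo "$MANIFEST"
```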
B: Yeah, so yes, of course, as we said, it works for a traditional OpenShift cluster as well as for an SNO; so that basically covers the question about storage.

B: Yeah, logging, yeah, exactly, okay. So basically, that's it. You can see you have a full OpenShift experience, you have the dashboards, and everything works exactly as with a traditional OpenShift cluster.
B: Let's switch back to the developer perspective and maybe quickly deploy a sample. The developer perspective has changed a little bit, so I needed to get used to it.

B: But now I can find my way around it, and it's a nice improvement to the UI. So I'm just deploying a simple Node.js app. You can see that the build started, it's cloning the code from GitHub, and once it's done it's going to be running here. Very nice, very easy to set up, very happy with it.
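The same sample deployment can be done from the CLI. This sketch assumes the stock sclorg Node.js sample repository rather than the exact one used in the demo, and the `oc` calls need a live cluster, so they are comments:

```shell
# Deploy a sample Node.js app with a source-to-image build (cluster required):
# oc new-app nodejs~https://github.com/sclorg/nodejs-ex
# oc logs -f buildconfig/nodejs-ex    # follow the build cloning and compiling
# oc expose service/nodejs-ex         # publish a route under the *.apps wildcard

SAMPLE_REPO="https://github.com/sclorg/nodejs-ex"
echo "sample source: $SAMPLE_REPO"
```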
A: Pretty cool. And Jaafar, I wanted also to share my Single Node OpenShift, which is on this Intel NUC, very cool. You know why? Because I'm doing the Red Hat Hackfest, yeah. Let me share the link in the chat: the Red Hat Hackfest is happening this week, it's happening now, if you are interested in joining. The community around the Red Hat Hackfest is using Single Node OpenShift and a Quarkus software stack.

A: So, a Java software stack to implement the server side and client side, with pollution data to be collected and analyzed. I think this is a cool use case. So you can put your Single Node OpenShift on bare metal hardware, as Jaafar did, or you can put it on an Intel NUC. To come back to your question about the architecture: at the moment it's Intel based, but support for other architectures is also coming.
A: It was easy to install, and at the end, it is an OpenShift cluster where master and worker are together. I don't know if we can go into the Compute section, where we can see that it is a master-worker, so it's a single node where all your apps will run. I think this is the cool thing. I have one question, to be honest: what are the hardware requirements? What is the minimum RAM that you have to have?
B: So the documentation says eight cores and 32 gigs of RAM. That's the minimum, because of course there are services that will be running all together on the instance: etcd, the API servers, the monitoring. So you need to make sure that you have enough room for those workloads and for your applications.

B: If you run PHP microservices that take 50 megs of memory, you are probably fine with 32 gigs of RAM. If you are going to use more specialized workloads that need gigs of RAM per pod, you probably need a little bit more than that. For my instance, I had provisioned, I think, around 90 gigs of RAM, so that leaves enough.
B: Yes, exactly. But again, keep in mind that some services will stretch depending on how much memory you have available.

B: The more you have, the more they can take, right, based on the default requests and limits that are set up. Currently my cluster takes, I would say, about 10 gigs of RAM and about three cores, out of the 14 cores I have available.
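A rough headroom sketch using the numbers mentioned in the session (the documented 8-core / 32 GB minimum, and the demo host's observed consumption):

```shell
# Documented minimum for Single Node OpenShift:
MIN_CORES=8; MIN_RAM_GB=32
# What the demo host had, and roughly what the platform services consumed:
HOST_CORES=14; HOST_RAM_GB=90
PLATFORM_CORES=3; PLATFORM_RAM_GB=10

echo "cores left for workloads: $((HOST_CORES - PLATFORM_CORES))"
echo "RAM left for workloads: $((HOST_RAM_GB - PLATFORM_RAM_GB)) GB"
```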
B: Oh, so yeah, I can run the update and let you know how long it takes. I didn't want to run the update during the session, but now I can hit the update button and see how long it's going to take.

E: Correct. I actually tried it in the last few days, and it takes about 45 minutes to one hour. It depends mostly on the networking, for downloading all the new images for CoreOS, and also on the CPU.
B: I actually tried even below that minimum requirement, to see what would happen, and the bootstrap did not complete, because, you know, there was not enough horsepower to download the images, bootstrap the services, and do everything that needs to happen in the background. If you look at the instance, you have tens of container services that run in there. And just to show you that it's basically a full-fledged OpenShift cluster, let me try to connect to my OpenShift SNO.

B: So you see the tens of processes, you know, containers, that are already running on the cluster.
D: But now you have a situation where you need a second SNO cluster with a load balancer, so that during updates you have something providing a platform for the workload.

B: Yeah, so of course, that's why they say there's no HA. You need to bear in mind that you don't have HA. If HA is important and you need continuity, then probably you need to go with the three-node compact cluster pattern. If you still want to minimize the infrastructure, that's probably the safer choice for that type of use case.
A: This answers the question we had in the chat, Jaafar, because Los Angeles asked: in an edge scenario, what is the HA story for Single Node OpenShift? You prefer to have fewer nodes, but if somebody can use the compact cluster, so three master-workers, I think it's the right choice, right? Instead of having two Single Node OpenShifts, one compact cluster of three master-workers.
D: Yeah, but there is also this: in some edge locations you might have two sections in your data center, so it might actually be better to choose two single node clusters, one in each room, than an all-in-one three-node setup where two nodes sit in one room, which is then a single point of failure.

D: Of course, you will have two clusters to maintain, but you will have the same fault tolerance that you would have if you could do two data centers with three-node clusters. Yes.
B: Yeah, exactly. And if you can manage the load balancing externally, I mean, if the load balancer is not installed on one of the machines, if you don't require any additional machine, and if you have hardware that can dynamically switch the routing and so on, then yeah, you can probably go with that type of scenario, where during the upgrade you move the containers to the other cluster, or you have them as active-active or active-passive.
D: You can actually use two clusters, and the environment is supported, because that's an important thing also. But you have to think about it: you have two clusters, and each is still a single cluster; it's not fault tolerant on its own. So consider what data you store: the clusters don't have, like, shared storage, so they are isolated.

D: So if a request goes to, you know, cluster one and stores something, that data is not automatically present on cluster two. These are the considerations, but it's a good extra option for how you set up your environment on the edge.
B: Yeah. And just keep in mind that the goal of all of this is, I would say, not only the proximity of your workloads to your other business-related applications or data, but also to simplify multi-cluster management.

B: The fact that our OpenShift SNO is a plain, full-fledged OpenShift cluster means that it works seamlessly with ACM, and you can use ACM to do these kinds of things, like moving workloads from one SNO to another.

B: For instance, say you have 100 branches with an SNO running everywhere, and you have connectivity to a central ACM, or you have two SNOs per branch, or whatever: you can manage all of that centrally from ACM, as you would with a multi-worker OpenShift cluster.
D: As I said way back, the point is that you really install OpenShift: you get all the good things from Kubernetes, and you run containers, and it's still a single node.
B: And actually, you remind me of a conversation I had with one of my previous customers. It was years ago, I think almost three years ago or something like that, and they wanted to have that use case. They were in manufacturing.

B: So basically, the choice they went with was a RHEL host with, you know, containers running on it, and that's it. But of course, you didn't have the management capabilities of OpenShift: the UI, the metrics, all of that, you know, the developer experience.

B: So OpenShift was a great fit for that, of course, but it was not a pattern that was possible with OpenShift 4 at the time. Now they can do it, because they can have a fully fledged OpenShift, maybe with CodeReady Workspaces, with Pipelines, with whatever operators they want to run, on that single node.
A: I think Alessandro has fans; lots of people are saying "grande Alessandro", hello.

A: You know, I have to explain that Alessandro is very active in our OpenShift community on Telegram. We invited him because he does lots of things on GitHub, where he has published lots of experiments, and he did some also on Single Node OpenShift. So Alessandro, any final thoughts about it?
E: Yeah, and I also have to thank Rodrigo for the precious gists that he produced, obviously in Bash; I just adapted one of them into an Ansible playbook that is publicly available. And my final thoughts are that Single Node OpenShift is something that will probably change how we approach edge customers, and the different scenarios that require disconnected workloads, or something that should also run locally. So I'm really happy to see how this will evolve.
A: We are happy to have had you today, and thanks everyone for joining. Unfortunately, we are at the end of this session, this coffee break, because it was super interesting, with demos and talk, but we keep talking on OpenShift.tv, Jaafar. That is true, because today we have an additional show, and I put the link to our streaming calendar in the chat. Jaafar, are you on the stream again this afternoon?
A: If we look at the calendar, that should be the Level Up show, and there are other sessions this afternoon, if you'd like to stay connected with us on OpenShift.tv. Our next appointment is actually not on November 17, because there's an event in between; let me share that in the chat too. We will be at KCD Italy, which is a Kubernetes event, and I will share it in the chat.

A: The OpenShift Coffee Break will come back in December with another show in our series, and in the meanwhile I'll put the next session on OpenShift.tv in the chat. Please keep in touch with us and subscribe to our OpenShift channel. Let me also share in the chat the link to stay connected with us, to like the show, and to stay in touch. And yes, I really want to thank you, Jaafar, Tero, Alessandro, for joining us.