Description
Heard about Edge Computing? Get your espresso ready for this special episode with Daniel Froehlich, PM for Red Hat Device Edge, to discover how you could combine Kubernetes and Red Hat Enterprise Linux at the edge!
A
Hello and welcome, everyone, to today's episode of OpenShift TV Coffee Break. I'm Andrea, a solution architect for partners, and today we have two guests. If you remember, in one of our previous chats here at OpenShift TV we spoke about the highlights of KubeCon and CloudNativeCon, and one of the things that really caught our audience's attention was the recent announcement by Red Hat about Red Hat Device Edge. So we thought we would invite somebody who could talk about it, and who better than the PM, somebody with a lot of experience with it. So we have here Daniel Froehlich and Stefan Bernstein. Why don't you guys introduce yourselves?
B
A good coffee machine and a good place for coffee are important. Hi everybody, my name is Daniel. I've been with Red Hat for over six years now. I also started as a solution architect, then transitioned into the global chief architect role for industrial, and in that context of industrial edge computing I got really involved with edge computing. Then earlier this year, when all this productization effort of getting MicroShift to customers in a fully supported way started, I was asked to take it on as product manager.
C
Hi, my name is Stefan. I'm also a solution architect, here in Germany, taking care of various manufacturing and automotive customers, and obviously edge computing in manufacturing is extremely important. The whole transition to having in manufacturing plants not only traditional virtualization technology for hosting applications, but more and more containerized workloads, because of the ease of management and the frequent deployments, and so on, is really a hot topic. I'm looking forward to today's chat to hear and learn more about our new portfolio.
B
Yeah, so I prepared a little bit of an overview and introduction, so I'll show some pictures and explain the story, and then what it looks like in a little demo: we could simply install MicroShift and run it a little bit.
B
We added JBoss middleware application services around that, and then we went out into the cloud, because if you know how to manage hundreds of thousands of virtual machines or servers in a private data center, you can do this in the cloud as well. So that's basically where we have been in the last five years or so. But then trends like edge computing appeared, and the first place where this actually started is the telco space.
B
If you look at a telco base station, a mobile networking base station: they used to use specialized hardware devices, specialized switching and network equipment, and recently they have more and more transitioned to standard IT equipment. So at the base station you see just a standard double-rack infrastructure, and you do this stuff software-defined. So you end up with hundreds or even thousands of remote locations where you need to manage your IT kind of infrastructure: the operating system, security patches, monitoring, all that stuff. But it's not only telco.
B
The same starts to happen, for example, in industrial, and this is how I came to it. In the factory automation space, if you think about programmable logic controllers, you have the same situation: they move away from specialized hardware to software-defined PLCs.
B
If you think about point-of-sale situations: every supermarket is this kind of distributed edge computing. And what's the big difference to cloud? That is what makes it challenging if you really go out to these far edge locations, and it's one of the big differences compared, for example, to telco edge computing: first of all, network connectivity. In a telco situation,
B
if the base station loses upstream connectivity, it has no business value anymore, and usually you have overlap anyway, so you get a little bit of service degradation, yes, but it's not that critical if a telco base station loses its connectivity. Now think about manufacturing: if a plant loses its upstream connectivity to the cloud and the plant grinds to a halt because of it, it doesn't make any money anymore, and it costs you thousands of dollars per minute.
B
So that's edge computing in the industrial space. Or the point-of-sale situation: imagine a cashier that stops working because you lose your cloud connectivity. That's not acceptable. But then you also have really constrained environments, like an industrial edge controller, a small device unit deployed at the far edge. We are not talking about server-grade hardware; we are really talking about single-board computers. So that is a constraint. You probably sometimes cannot even guarantee the physical integrity of the device.
B
It might be stolen, which is, security-wise, a completely different story. What about lifecycle management? How do you get updates out there if the device is only connected via, let's say, a small mobile network connection? Pushing out, let's say, 20 gigabytes of virtual machine images is not manageable. And then, of course, the scale: you end up with thousands or hundreds of thousands of devices. So those are kind of the challenges of edge computing. But still, you would like to apply,
B
let's say, modern application principles to it: containerized applications, microservices, serverless, virtual machines, GitOps, DevSecOps methodologies, because that is why cloud computing is so successful; its scalability and reliability are based on these architectures. We used to name this "cloud-native applications". I hate that word, because "cloud native" implies you deploy to cloud, which you don't need to. You can still deploy a modern, microservices-based architecture to an edge device, for example, if it provides a suitable platform.
B
Of course, you would need a container runtime if your workload is containerized. If you have, for example, a containerized software-defined PLC, you just need a suitable container runtime and operating system, and that is where we come into play.
B
Yeah. You need to find the balance, and that's the idea of Red Hat Device Edge. We are quite famous and known for providing a rock-solid, scalable Kubernetes distribution with OpenShift, and in recent years we put that on a diet. Like five years ago, the minimum system requirement was something like six nodes, I think; then we made it to compact clusters with three nodes; then we introduced the single-node OpenShift cluster, which fits on a single piece of hardware. But still there's a gap.
B
If you have this really small form factor, field-deployed device, which might have only two cores and two gigabytes of RAM, and you need Kubernetes, we couldn't deliver that, because OpenShift did not fit into that space. And that is why we invented MicroShift, and let me introduce here Red Hat Device Edge, because Red Hat Device Edge is actually a combination. We start with the base workhorse, which brings you from A to B, and that is actually Red Hat Enterprise Linux.
So that's the baseline, that's the base operating system, and you could use this to run, for example, just containers. If you have a rather static workload, let's say two, three, four, five containers which are more or less fixed, they just communicate a little bit with each other and implement the solution, and you don't need rolling updates, orchestration, or adding and removing workloads, then, with your workload rather fixed, that is already good enough. That is good enough to run your workload at the edge.
B
But then you might have requirements to actually use Kubernetes as an orchestration tool for your containers. For example, we have a customer who does this for a point-of-sale solution, and they need to be able to roll out updates to their workload, to their application, during business hours, while the server is running, while the components are running. And that's the sweet part of Kubernetes, right? Remember: rolling updates, scaling up, scaling down.
B
You have a load balancer integrated which routes the traffic from the old version to the new one, all that stuff. So that is the reason why you might want Kubernetes even on a small form factor device, and that is why we added it. And please also note that the workhorse has some special features, because that's the next variant you could use: RHEL is the base, and then there's RHEL for Edge, which is RHEL, but in a different delivery format.
B
It uses rpm-ostree as the packaging format, which basically makes the operating system immutable, and you can easily add different versions of it to the same disk and boot into a different version, which gives you a blue-green deployment. So you could boot into the new version, health-check it, and if it's fine, you stay there; if it's not fine, you roll back to the previous version, because both of them are kind of immutable and you cannot easily mess around with them.
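On an rpm-ostree system, that blue-green flow maps to a handful of commands; a minimal sketch (the exact OS version you get depends on your configured remote and ref):

```shell
# Show the deployments currently on disk: the booted one
# plus any previous deployment kept as a rollback target
rpm-ostree status

# Stage the new OS version; it is written alongside the
# current one, not over it, and only becomes active on reboot
sudo rpm-ostree upgrade
sudo systemctl reboot

# If health checks fail after booting the new version,
# flip back to the previous deployment
sudo rpm-ostree rollback
sudo systemctl reboot
```

On RHEL for Edge, the health-check-then-rollback step can also be automated, for example with greenboot, rather than done by hand as sketched here.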
B
And that is important if you are at a far edge location, where fixing a device or rebooting it manually could mean flying in a helicopter. So you really want to avoid breaking your far edge device, and hence for these types of scenarios we would recommend RHEL for Edge as the baseline. But basically, with Red Hat Device Edge you have the choice: you can use either the traditional RPM-based one or RHEL for Edge.
B
We expect the majority of edge deployments to be RHEL for Edge, definitely. You look like you have
C
a question? Yeah, of course. I was wondering what the sunglasses feature is, but no, my question would be: you can have both RHEL for Edge and plain RHEL as bases. Is there anything else I could use, for example community versions, for my home lab or other things?
B
That's a typical question we frequently get, and the answer is a little bit twofold. Technically, you probably can: we know it works, for example, on Fedora, so you could use Fedora. You can use CentOS Stream as well, for example. You could probably even use Ubuntu on a Raspberry Pi.
C
And you mentioned having so-called static workloads on Enterprise Linux. Those would then be static, container-based workloads; how would such a deployment work?
B
It depends on the workload, of course. If the workload is containerized, you would use Podman to start it and systemd to do the orchestration and updating. For example, systemd combined with Podman also has the capability to check for updates: if there is a new container image, it can stop the current one, restart a new one, and things like that. This would be the preferred way for a rather static workload. But if your workload is just RPM-based, then just do an RPM install.
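As a sketch of that Podman-plus-systemd pattern (the image name is a placeholder, and details vary by Podman version):

```shell
# Run the container with the auto-update label so Podman knows
# to check its registry for newer images
podman run -d --name myapp \
  --label io.containers.autoupdate=registry \
  quay.io/example/myapp:latest

# Generate a systemd unit so the container starts at boot
podman generate systemd --new --name myapp \
  > ~/.config/systemd/user/container-myapp.service
systemctl --user daemon-reload
systemctl --user enable --now container-myapp.service

# Let the podman-auto-update timer periodically pull newer
# images and restart the unit when one appears
systemctl --user enable --now podman-auto-update.timer
```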
B
So you have RHEL, and here is a note on RHEL for Edge, which we recommend; that's the baseline. And then, as I mentioned, you could run virtual machines using libvirt/KVM. And then here is MicroShift, a component of Red Hat Device Edge, which you can optionally install, and that is actually what we could do now, or later, in a demo: just install it and see what's happening. So the goal of this is that we provide workload portability for Kubernetes workloads.
B
So if you have a Kubernetes-based application that you deploy, for example, using a Helm chart to your Kubernetes cluster, maybe in the cloud, the idea is that we get the same Helm chart, the same workload, running on MicroShift with as little modification as possible, hopefully none at all. It should basically run out of the box, and that also explains why we have really only the minimum Kubernetes cluster services there. Let me phrase it another way.
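That portability goal can be pictured like this; a sketch assuming a chart directory called `mychart` and two kubeconfigs, one for a cloud cluster and one for the MicroShift device:

```shell
# Deploy the chart to a managed cluster in the cloud
helm install myapp ./mychart \
  --kubeconfig ~/.kube/cloud-cluster

# Deploy the identical chart, unchanged, to the edge device
helm install myapp ./mychart \
  --kubeconfig ~/.kube/microshift-device
```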
B
So if you think about OpenShift: OpenShift has a lot more services, and we put OpenShift on a diet to get it onto a single node, but it still requires a lot of resources. With MicroShift, we started the other way around.
B
We started with nothing and then added only the bits and pieces you need to get to this workload portability capability. So we take a minimal RHEL and add what is needed, for example a container runtime; you can see containers are added. By default we do not add more than that; if you need something, you can add the necessary packages. For MicroShift,
B
we add just CRI-O as the container runtime and then, of course, Kubernetes and etcd, which is basically what you need to have Kubernetes orchestration, plus the scheduler. And then, for cluster services, you really need just networking, so the pods can talk to each other; some kind of ingress router, so you can get traffic into the cluster, because at the end you would like to provide a web front end or REST services, for example; and you might need storage, because typically your workload has state.
B
So we need a CSI storage provider, so that you can have persistent volume claims and persistent volumes, and that's basically it. That's the approach of Red Hat Device Edge. We have a slide later where I could compare this to OpenShift, but a really simple example: there's no console. OpenShift has a really nice web console; here there is none, because it takes resources, and why would you need it on an edge location?
B
You don't need it there. So: command-line tooling, or external management using, for example, Advanced Cluster Management, or GitOps ways to deploy and manage your workload. Yes, for sure, and that works because you have the standard Kubernetes API. We are currently not yet certified, but we have it on our backlog of engineering items: by the time we get to general availability, we will have this certified as a CNCF-conformant Kubernetes distribution.
B
Yes, and we support it. So, MicroShift is derived from OpenShift. It is not OpenShift, but it is derived from it, so we inherit a lot of features and capabilities from OpenShift, and that means, for example, that as OpenShift runs on Arm, MicroShift is also going to be fully supported on Arm. And we target this especially, for example, if you think about machine vision use cases.
B
So all the types of use cases where you would like to do model inferencing at the edge location: think about, for example, point-of-sale situations where you have cameras watching the shop; in a factory, if you do visual inspection, quality control use cases to determine whether a part is good or bad; or if you think about, for example, a flying drone.
B
We have customers flying this stuff on drones and doing model inferencing with it, and, for example, Nvidia has really nice capabilities there. If you look into the Nvidia Jetson devices, the critical factor is compute power relative to power consumption: if you have a field-deployed device running on a battery, how much compute do you get for the power you invest? And the Nvidia Arm
B
Jetson devices are really good at this, plus they add GPU power for model inferencing. Those types of use cases we explicitly target and will support. It's still a long way to get RHEL running on these latest Arm devices; getting the drivers into the right place is, long story short, a long journey, but that's my job as product manager to take care of.
A
So is it fair to say that, eventually, you have a dependency on what hardware, what hardware architectures, what platforms are supported by OpenShift and hence by MicroShift? And maybe you'll be able to explain to us how exactly MicroShift is derived, from what code base; that would be interesting to know. And then what is supported by RHEL for Edge, so the intersection will give us what hardware is supported.
B
Straightforward, yeah. If you have the question "does OpenShift run on this or that bare-metal server?", you simply look into the hardware certification list in the catalog we have, and I could show the link if you like. And if you look into the hardware catalog, there is actually no specific certification for OpenShift: you just look for RHEL, and the same holds true for MicroShift.
B
So if you have a hardware device which is certified for at least RHEL 8, or RHEL 9, even better, then you're good to go; we will fully support you. If it does not work, we will work with the hardware vendor. And if you have a new device which is not yet on that list, just let us know, or let the hardware vendor know, and we will work with them to go through that hardware certification, which is not a big deal: basically, you need to define a test suite and run the tests.
B
You submit the results and then you are certified, free of charge; it's not a big deal. And if the test suite finds an issue with a driver here or there, we will support that hardware vendor in getting it fixed. It might be complicated if you look into, as I mentioned, the Nvidia Jetson; that is where we really do joint engineering work with the hardware vendors and with the communities to get the necessary drivers to where they belong.
A
No, if anything, on that topic there would also be, let's say, the topic of having base images for that architecture, and obviously having this sort of factory that generates, basically, the final images that will run on that architecture.
B
Yeah, that was, for example, a challenge. If you look here at the storage, we use ODF LVM, OpenShift Data Foundation Logical Volume Management, which is basically a CSI driver based on the upstream community TopoLVM project. It is really nice: it is a CSI driver backed by an LVM volume, the Logical Volume Manager of Linux. So if you create a PVC, under the covers it creates a logical volume, which is nice because
B
you get isolation and quota enforcement, and you can even get snapshots. That is really nice, so we decided to use TopoLVM, or ODF LVM, as the storage provider for MicroShift, so it's included, because we fully understand you need state at the edge: you need to store some data. But when we ran our first tests on Arm, the challenge was getting Arm container images for ODF LVM, because they were not yet there, or not accessible to us, and getting all of this worked out,
B
who provides what, was kind of a tension, but we are there now. So, for example, in our engineering CI pipeline we build MicroShift for Arm on an automated, regular basis.
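To make the storage part concrete, here is a minimal sketch of a PVC that the TopoLVM-backed provisioner would satisfy with a logical volume under the covers; the storage class name used here is an assumption and may differ on your install:

```shell
# Write a minimal PersistentVolumeClaim manifest.
# "topolvm-provisioner" is the storage class name we assume here;
# check `oc get storageclass` on your device for the real one.
cat > demo-pvc.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: topolvm-provisioner
EOF

# Apply it against the cluster (requires a running MicroShift):
#   oc apply -f demo-pvc.yaml
```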
B
That's basically simple: we ship and support only ODF LVM, because that's what we test. Of course, it's a soon-to-be-certified Kubernetes distribution, so in theory you can add whatever you like, but then you have to work with the CSI driver provider on whether it's supported. For example, the really simple ones, like hostPath or local volumes, work for sure, but otherwise you need to talk to the vendor, to the CSI driver vendor.
B
Whether it's supported from their side is up to them. We support TopoLVM: we ship it, it's there by default, and it's included in the subscription.
B
Yeah, so I have here a RHEL, not RHEL for Edge, because on RHEL for Edge you cannot easily RPM-install something extra; it's immutable, remember. Hence I took the RPM-based one. And here is a really simple console. Andrea, if you could maybe make this a little bit bigger and move us to the bottom.
B
Yeah, that's nice, thanks! So, taking a quick look: this is a standard RHEL 8.7, and I prepared all the necessary repositories so that I can simply dnf install microshift. So basically it's an RPM package, and it has a couple of dependencies, as we will see. It will now contact the repos to install MicroShift, and it's tiny: just 26 megabytes of download. That doesn't really matter, because usually you don't download it;
B
you have your image prepared. But just to make the point about size: size doesn't matter much at the edge. And when you see the dependencies: as I mentioned, for example CRI-O as the container runtime, and networking stuff, as well as SELinux rules, because security is one of the primary concerns at edge locations, and OpenShift is famous for its multi-layered security approach, which we follow alongside. So in total it's like 85 megabytes of download; on disk it would be 350 megabytes. That's not all:
B
we need some more stuff, but let me simply confirm, yes, and it's downloaded and installed. So this now takes a couple of steps to do the installation, and, by the way, this is a virtual machine actually running on my laptop: it has two cores and, I can't remember, two or four gigabytes of RAM. So it is really what we think of as a field-deployed edge device: really small, limited. That is why stuff like this, for example installing, sometimes takes a little bit of time.
B
You can see, for example, Open vSwitch, because as the CNI provider, the networking interface provider, we use OVN-Kubernetes, so we need the Open vSwitch pieces, for example.
B
Yeah, so the route concept is kind of the old name; we invented that. If you look into the standard Kubernetes API, it is Ingress and Ingress controller objects, and we implement this using a proxy. And yes, that is there. So now I have installed it, and now I just need to start it: I systemctl-enable the MicroShift service. You also need to do some firewalling if you want to access it from outside, enabling some firewall rules.
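The steps from the demo boil down to a few commands; a sketch (the firewall sources shown match the defaults suggested in the MicroShift docs at the time and may differ for your version):

```shell
# Install MicroShift from the prepared repositories
sudo dnf install -y microshift

# Start the service now and on every boot
sudo systemctl enable --now microshift.service

# Allow pod traffic (default cluster network 10.42.0.0/16)
# and external access to the API server
sudo firewall-cmd --permanent --zone=trusted --add-source=10.42.0.0/16
sudo firewall-cmd --permanent --zone=public --add-port=6443/tcp
sudo firewall-cmd --reload
```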
B
I did prepare that already, or I don't need it here, but basically now I started it, and what happens now is that a couple of containers are started, because, exactly, the router, the Ingress controller, runs containerized, like also the networking and the CSI provider. So these container images need to be downloaded in this setup; it needs to pull all the images, and we will see them starting pretty soon.
B
That is also one of the beauties if you would use RHEL for Edge: RHEL for Edge has the capability to add your container images to the ostree image. So now I need to get access to it, with a kubeconfig, and I can take a look, and we should see the pods now; here you can see the containers starting. So this is now pulling the container images.
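Accessing the fresh cluster from the demo looks roughly like this; the kubeconfig path shown is the one MicroShift used at the time and may have moved in later releases:

```shell
# Point the client at the kubeconfig MicroShift generated
export KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig

# Watch the system pods come up as their images are pulled
oc get pods -A --watch
```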
B
I was talking earlier about the goal we have, and we have shown it multiple times: you can have a fully self-contained ostree image with everything you need. So these container images, which are being pulled as we speak, but also, of course, the workload: you can embed all of this into the ostree image. It becomes part of the immutable ostree commit, and so you don't need any network connectivity to fire this up. And even the manifests you need to deploy these container images:
B
for example, if you have a Deployment, a ServiceAccount, a Service object, Ingress objects, all the typical stuff you have in your Helm chart to deploy your application, you can add these manifests, the YAML files, to the image as well, so that MicroShift automatically fires them up at startup. So you could really have a fully self-contained image. You could even burn it to a USB stick, go to the edge device, plug it in, turn it on, and it would install and fire up: no networking, nothing.
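MicroShift picks up manifests from a well-known directory on the device and applies them at startup via kustomize. The sketch below stages such a directory locally; `/etc/microshift/manifests` is the path used by the early releases (check the docs for your version), and the image name is hypothetical:

```shell
# Stage the manifest directory locally; on the device (or in the
# ostree image build) this content would live under
# /etc/microshift/manifests
mkdir -p manifests

# A kustomization that pulls in the application's YAML files
cat > manifests/kustomization.yaml <<'EOF'
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
EOF

# A placeholder single-replica deployment
cat > manifests/deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: quay.io/example/demo-app:latest
EOF
```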
C
One quick additional question, because I get it quite often in demos too: you type oc. Could I use kubectl?
B
kubectl, of course; actually, it would probably be better to do that. I'm sorry, I grew up in the container community space with OpenShift, so I'm familiar with oc. But it's the Kube API, so kubectl should have it: kubectl get namespaces, this should work, or kubectl get nodes. Of course you can use it. Actually, that is a good point: there are differences between MicroShift and OpenShift. Remember, MicroShift is not OpenShift.
B
It does not provide the full OpenShift capabilities, and capabilities boil down to APIs: there are OpenShift APIs, which are not Kubernetes APIs but OpenShift-specific extensions, that are not available. For example, oc get build. Remember, OpenShift has the capability to build container images on the platform; in a development environment, this is a rather good idea.
B
Not working. Every time you see "the server does not have a resource type builds", it means the server does not have this API (I simply use the standard OpenShift oc client here). So that means this API, the build API, is not supported by MicroShift. Basically, we have a list: the Kubernetes API, which is to be certified, is fully supported; from the OpenShift API,
B
actually, next to nothing is there, because it turns out you only need SCCs, security context constraints, and, I think, routes, and everything else is not there. So every time you see this error message from the oc client, you hit an API which is not there. And we did this on purpose, to really keep it small and simple, because for every API you add and support, you need a controller, and that consumes resources, RAM especially, and we wanted to trim it really down.
B
But let's see: here we can see the nodes, it's just one node, and the pods should be firing up. See, now everything is running, and here you can see, for example, the router, the Ingress router; it's like the OpenShift router.
B
It is the OpenShift router image, actually. So everything is there, and we could also now check the images: a couple of images were downloaded. Here you can see, for example, the CSI driver images being pulled. So currently it adds something like two gigabytes of additional disk space which you need for container images.
B
We have that capability in single-node OpenShift: you can add an additional worker node to single-node OpenShift, which sounds strange, and, to be honest, I don't like it, because it gives you a really false sense of high availability. What you would get from that is workload high availability: you could scale your pods across two different nodes, and if one fails, you have that. But what about the router? You need to scale up the router; now you need external load balancing, or you need keepalived.
B
So, for that reason, adding nodes to MicroShift is currently not supported, and we have it explicitly out of scope, because we would like to avoid giving you that false sense of high availability. What we suggest, if you need a little bit of redundancy: simply add a second device to your deployment, so that you have two devices, and do the failover on the workload side. These devices can detect each other, talk to each other, and do a, let's say, active-passive type of workload high availability.
B
But there is a reason why etcd needs three nodes to become highly available: if you have only two, you always get into the split-brain problem. If you have state, who is the leader? This is really dangerous. And if you have truly serious high-availability requirements, a three-node OpenShift compact cluster is the weapon of choice, because that's rock solid; there's no single point of failure.
B
It's proven, and that's the way to go; that covers you in all situations. Because if you consider high availability seriously: if you simply pull the plug, that's the easy situation. The "it's not really alive, but it's also not really gone" type of situation, where a device sometimes responds and sometimes not, that is the really hard situation, and you need three nodes, a tie-breaker, to get out of it.
A
So the way I read it, and sorry for being a bit tricky: not now, but if we were to do it, it would be a three-node configuration, but not now. And this leads to another question, which is: what are, and maybe you have that in your presentation, the good use cases and the bad use cases, if you wish, for Red Hat Device Edge?
B
A really high-level decision tree. But basically, as always, the answer is: it depends. As an architect, you have to understand the requirements, and there are a lot of things to look out for. The first one is: what is actually the use case, the workload? Is it containerized, or orchestrated? How dynamic is it? What are the networking requirements? How is it connected to the front end and back end; what protocols go in, what protocols go out? How do you connect your USB device, for example your PLC?
B
That's what Red Hat Device Edge is targeted at. And then you take a look at the workload: how static is it, how dynamic is it? That could bring you to Podman, or to virtual machines; you could add virtual machines here. And only if you really have the need do you go to Kubernetes, and I would, for example, also add: do you have the capability? Because Kubernetes is not for free; it brings complexity. You have to understand it, and, for example, to do a rolling update correctly, the workload has to be able to support that.
B
Yeah. If you put an old monolithic application into a single container that requires 12 cores and 20 gigabytes of RAM, it doesn't add any value. You really need to go down into microservices, rolling updates, modern application development. So that would be kind of my guidance. If in doubt, I would suggest rather going with a single-node OpenShift, for example; we will see that it already works on four cores.
B
And people also tend to forget other topologies, especially in a factory deployment: you could simply use a worker node, or even a remote worker node topology.
A
You mentioned networking: what type of network, and all that? This question mirrors the previous one a little bit: which are the CNI drivers that are supported in Red Hat Device Edge?
B
So, currently we only support OVN-Kubernetes, and we switched to that because it's what is used in OpenShift, so we know how to maintain, update, and fix it, and also because it supports network policies. That was the feature customers have been asking us about: they would like to be able to isolate their dynamic workloads on the small edge device, so network policies were asked for. We are also currently looking into adding Multus, which is kind of an umbrella meta CNI driver, but that is currently not supported.
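Since OVN-Kubernetes enforces the standard NetworkPolicy API, isolating a workload looks the same as on any Kubernetes. A minimal sketch that restricts a namespace to in-namespace traffic only (the namespace name is a placeholder):

```shell
# Write a NetworkPolicy that selects all pods in the namespace
# and only allows ingress from pods in the same namespace
cat > isolate-demo.yaml <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
  namespace: demo
spec:
  podSelector: {}
  ingress:
    - from:
        - podSelector: {}
EOF

# Apply it against the device (requires a running MicroShift):
#   oc apply -f isolate-demo.yaml
```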
B
That was a really early research prototype. Do you consider this production-ready or not? It depends, and this brings us to the question of how you get to production. So what's the state? We announced it at KubeCon, and I must admit that currently it's not yet used in production, simply for the reason that
B
it's not yet generally available, because, typically with Red Hat, production support comes with general availability, and we are not there yet. So the road to there is as follows. I think I probably have a slide on this, at least I... oh, actually, I don't. That's a bummer, okay!
B
Let's leave that slide aside and let me continue. So, the road to getting into production and getting enterprise support: we will be in Developer Preview in January, with OpenShift 4.12, so it will be available to everybody, but Developer Preview comes with no support. We have a couple of customers who are on the Early Access program, and they are in Tech Preview, so they run this already: they run it in their test environments, they run it in,
B
let's say, friendly-customer deployments. It's not yet fully in production, but they are working with it, they are using it, and we have all different types of scenarios: industrial, defense, retail customers using it. And they would be able to go to production, because they are on Tech Preview, and with Tech Preview you just need a support exception to go to production. So we expect the first customers to be in production early, let's say Q1 next year, and then with the following OpenShift release, 4.13, it's currently in planning.
B
Whether we get into GA or stay in Tech Preview for everybody, we will see, and then, I'd say, later next year it will become generally available. The reason it takes so long is that we have to do the transition from the Office of the CTO, where it was invented, to the regular OpenShift engineering team, and the support teams also need to be enabled, because, I mean, that's in the end the value: we provide support for it. So we have to enable our support teams to be able to handle a call:
B
"My MicroShift cluster is not booting," let's say on a Saturday afternoon, and you need somebody to help you with that, yeah, and that would be us, yeah. So we have that. And regarding production: we have a joint announcement with ABB, who use it for their Edgenius industrial IoT solution. As I said, it's not yet in production, but we have a joint announcement, and another one is Lockheed Martin.
B
If you look, for example, at the YouTube video we have from Lockheed Martin at KubeCon, they show the details of their huge use case: as I said, MicroShift flying in drones, yeah. That works nicely.
B
Okay, yeah, well.
C
There would also be management: how do I manage it? Yes, yeah.
A
Which we mentioned; you said: yeah, well, there's the console, but you wouldn't want the console on a remote device that's out in the field. But you also mentioned ACM, yeah.
B
So let me quickly switch gears; I have to log into another virtual machine where I prepared this. So this is again a MicroShift running, but if you take a look at the pods that are running, you can see there is, for example, an MQTT message broker running. Much more interesting, though: Open Cluster Management is running. So what did I do with that
B
MicroShift cluster? I added it to Advanced Cluster Management. Advanced Cluster Management is our way to manage Kubernetes clusters, yeah, distributed in the data center, in the cloud, or at the edge. What does it look like? Let me quickly find a nice console, yeah. So this is an ACM hub cluster. ACM has the concept of a hub cluster, which is the central management touch point where you can manage your clusters, manage your applications, and manage your security policies.
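For context, attaching a cluster like this MicroShift instance to an ACM hub follows the usual import flow: declare a ManagedCluster object on the hub, then apply the klusterlet import manifest ACM generates on the managed side. A rough sketch of the hub-side object (the cluster name is illustrative):

```shell
# Hub-side: declare the cluster so ACM generates import credentials for it.
cat > managedcluster.yaml <<'EOF'
apiVersion: cluster.open-cluster-management.io/v1
kind: ManagedCluster
metadata:
  name: microshift-edge-1        # illustrative name
  labels:
    cloud: auto-detect
    vendor: auto-detect
spec:
  hubAcceptsClient: true
EOF
# Apply on the hub with: oc apply -f managedcluster.yaml
# Then extract the import secret ACM creates for "microshift-edge-1"
# and apply its manifests on the MicroShift cluster itself.
```

This is a sketch of the generic ACM import mechanism, not a MicroShift-specific procedure; the demo in the episode presumably used the same flow.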
B
So if I take a look, for example, into the clusters, you can see I have a couple of clusters running here. Most of them run virtualized, some of them run on bare metal, some of them run somewhere in a cloud, and here you can see I have a MicroShift cluster running. That is actually the MicroShift cluster we have just seen in the virtual machine; it is running on my laptop and connected to the management location, which is actually running in a Red Hat data center. So this is already kind of far edge, yeah.
B
If I lose network connectivity, it would disappear here. You can see it's ready, so it's connected, it's live, yeah, and we could take a look here and see, for example: hey, it has only one node, and all the management add-ons are deployed. Of course, you can add only what you need; you can add only the agents you need. So, for example, if you say, "I don't need, let's say, security policies,"
B
you can skip that and not install it, because, of course, if you add workloads to the cluster like observability or the policy controller, it adds to your footprint, yeah. So all of this requires roughly half a core to a core currently; we are on a trajectory to minimize this, yeah.
B
We expect something like half a core for this management, and it's maybe rather memory-hungry, especially the observability part, which is like two gigabytes of RAM, maybe more. But, for example, if you have the observability controller, you can go to the multi-cluster observability, which is part of Advanced Cluster Management, so I could switch over to the Grafana. So that is the ACM cluster overview, where you can see all of your clusters and the health status of all the clusters attached to it.
A
B
And you can see, for example, my local MicroShift is also here, so I could switch the view to the management location. Keep in mind, here I'm in the Red Hat data center; I'm connected to the data center, but still I can, if I find the right dashboard:
B
"Kubernetes compute resources per cluster" should be this one. If I select my local cluster, I see, for example, the CPU utilization over the last three hours, and I could probably go back to the last week. And here you can see, for example, it's not going back a full week, because I just started it yesterday; it's not running continuously, but yeah. So here on the hub cluster I can inspect the cluster:
B
what's the CPU load, the memory utilization, and, for example, which resources are consuming most of the CPU. And the same goes for events: if the cluster submits an alert, yeah, that would also work here. So that is basically the way to manage workloads, and we could now, for example, add an application to it, yeah. So you can also add applications to this using GitOps, for example, and that would then roll out the application to my edge node.
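The GitOps rollout Daniel mentions typically uses ACM's application model: a Channel pointing at a Git repository and a Subscription that places the repo's manifests onto selected clusters. A compressed sketch, where the repo URL, namespace, and placement name are all hypothetical:

```shell
# Hub-side application definition: a Git Channel plus a Subscription
# that deploys the repo's manifests to clusters matched by a PlacementRule.
cat > edge-app.yaml <<'EOF'
apiVersion: apps.open-cluster-management.io/v1
kind: Channel
metadata:
  name: edge-app-repo
  namespace: edge-app
spec:
  type: Git
  pathname: https://example.com/org/edge-app.git   # hypothetical repo
---
apiVersion: apps.open-cluster-management.io/v1
kind: Subscription
metadata:
  name: edge-app
  namespace: edge-app
spec:
  channel: edge-app/edge-app-repo
  placement:
    placementRef:
      kind: PlacementRule
      name: edge-clusters        # selects the MicroShift cluster(s)
EOF
# Applied on the hub, ACM then syncs the repo's manifests to matching clusters.
```

The same outcome could also be reached with an Argo CD integration; the Channel/Subscription form is just one way ACM expresses it.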
B
What you currently cannot manage is the actual cluster lifecycle. So while ACM is capable of, for example, provisioning a cluster on bare metal or in a cloud, it cannot currently deploy a MicroShift cluster on a RHEL for Edge device, because it does not know how to handle the RPM or ostree installation systems, yeah. That's on the roadmap. You would actually use a Red Hat edge manager to do that, which is a different console we could look into, if you like, yeah, to manage the base edge systems.
B
A
Yes, there's a question that comes to my mind, exactly about managing workloads. So RHEL for Edge itself has over-the-air update capabilities, and you did mention that I can prepare the template of an image, if you wish, for all, let's say, my devices of a certain release, for my, let's say, hardware that I had to put out in the field, and that image will also contain the container images and even all the definitions of the resources that they have to have.
A
So now, however, we have a different situation, where I could even use ACM to deploy or update applications as well. Those are not mutually exclusive, but I guess you can have one or the other and decide how you manage that, yeah.
B
You have to make your choice, yeah. If you have, for example, rather static workloads, I would go with the first one: embed the container images. Also, if your devices are usually disconnected and do not have good network connectivity, it's probably a good idea to manage it that way. It also depends on how you do the overall device lifecycle management. And if you are in a more dynamic environment, and the devices are usually quite well connected to your management location, which could be in your data center,
B
it could be in the cloud, then this one would be the weapon of choice. But even a combination would be fine, yeah: for example, deploy the baseline application using the static images, and you can even later add images dynamically. The container runtime actually has two locations for images: one that points to the immutable, ostree part, and the other one, which is the regular dynamic container image
B
location, /var/lib/containers I think it is, where you store the images. So you can then pull updates to container images and then adjust which image version your deployment points to, because you have both images located on the edge device, and whatever your deployment points to is what gets started.
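The two image locations described here map to the `additionalimagestores` setting in the container storage configuration: a read-only store baked into the ostree-managed part of the system, plus the regular writable store under /var/lib/containers. A sketch of the relevant excerpt (the read-only path is illustrative, and the file is written locally here rather than to /etc):

```shell
# Sketch of /etc/containers/storage.conf on an edge device with embedded images.
cat > storage.conf <<'EOF'
[storage]
driver = "overlay"
# Writable store for dynamically pulled images
graphroot = "/var/lib/containers/storage"

[storage.options]
# Read-only store baked into the ostree image (path is illustrative)
additionalimagestores = [ "/usr/lib/containers/storage" ]
EOF
```

With this in place, the runtime resolves an image from either store, so a deployment can reference an embedded image or a freshly pulled update interchangeably.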
So yes, but it requires thinking. There is no one solution that fits all. You really have to understand the use case, the management aspects, the network connectivity, all the various aspects, security. And we did not deep dive,
B
for example, into the security aspects of this. How do you get secure device provisioning done, yeah? Can you trust that device at the far edge, where you cannot guarantee physical integrity? How do you handle that, or disk encryption, yeah? That would be another coffee session to talk about this. The short answer is: it's not easy to get right. We have a lot of solutions and layers which contribute, and you have to choose carefully, as always with security.
A
Thank you, and sorry for disrupting your flow with lots of questions. There was one coming from the audience again; not sure if I understood it, but it was asking about cluster management and the lightweight control plane use case, which I'm not really sure I'm familiar with.
B
The question is, yeah: what do they mean? A lightweight control plane could be HyperShift, which is something completely different, or it could actually be MicroShift; I mean, it is a lightweight control plane, and its control plane is running, yeah. So that fully works, and that's the reason why, for example, Advanced Cluster Management works so easily: all these agents we are seeing here are talking locally to the kube API, which is the lightweight control plane of MicroShift, and they would talk to it,
A
B
for example, with the necessary manifests to deploy an application, yeah. That simply works, yeah.
C
And one thing I was thinking about is: what about operators? So we have, right, the Operator Hub, or Operator Lifecycle Management, yeah, and I'm now coming from the application side and thinking: well, if I have a small application for the edge that has a small database, be it a Postgres or whatsoever, I can easily manage that through operators on OpenShift. Would you recommend doing the same on MicroShift, or say, well, in this case use Helm charts or whatever, because it
C
B
Difficult question, because we removed a lot of, or actually all of, the OpenShift internal cluster operators, because they take a lot of resources and they're not really needed. And also, for example, the Operator Lifecycle Manager is not yet there, so you cannot simply add an operator from the Operator Hub via a subscription; that's not possible, and the updating of the operator and so on is also currently not there. Which makes people think MicroShift does not support operators, which is wrong.
C
B
Actually, if you can deploy the operator via, let's say, traditional manifests, and the operator uses only the standard Kubernetes APIs and has no dependencies on, for example, let's say, the Machine Config Operator, which is not there, yeah, then it actually works. So you can deploy Kubernetes operators, and that would also be my recommendation, yeah, if you need to run a Postgres database or a message broker, something like this. Of course, you have to make the trade-off, because each operator you deploy also consumes resources, mostly RAM, but you can do it, and actually I...
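Deploying an operator "via traditional manifests" means applying by hand what OLM would otherwise install: the CRDs, the RBAC objects, and the operator Deployment, then creating a custom resource. Schematically, with placeholder file names standing in for whatever a vendor ships in its release bundle:

```shell
# Without OLM on MicroShift, apply the operator's install manifests directly.
# (The file names below are placeholders for a vendor's release bundle.)
cat > install-operator.sh <<'EOF'
#!/bin/sh
set -e
oc apply -f crds.yaml        # the operator's CustomResourceDefinitions
oc apply -f rbac.yaml        # ServiceAccount, Roles, and bindings
oc apply -f operator.yaml    # the operator Deployment itself
oc apply -f broker-cr.yaml   # finally, a custom resource the operator acts on
EOF
chmod +x install-operator.sh
```

This matches the AMQ broker example shown next: the operator runs as an ordinary Deployment, and it reconciles the custom resource just as it would under OLM, minus the subscription-based updates.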
B
If I go back here, you can see my AMQ broker deployment. So this is the actual message broker, the MQTT endpoint, running, and here's the operator running. So actually I did deploy an operator to it; I just used the manifests.
C
B
The really important point is that you have to talk to the provider of that operator about whether they support that, right, yeah. So, for example, with all the Red Hat operators, we still need to go through that exercise. To the question "is it supported?", the default answer is probably not; you have to talk to the necessary teams to find out. For example, if you use the Kubernetes operator for Postgres from Crunchy, you have to talk to Crunchy about whether they support a manifest deployment to MicroShift, to get enterprise support. That is something, but...
A
B
Exactly, yeah, and that is what we mean by workload portability, and that is why, in this diagram, we have the Kubernetes operators on top, yeah, because technically it works. Now, your support mileage will vary; you have to check, and especially test the dependencies, yeah. As I said, if the operator uses an API which is not there, it doesn't work.
A
So, if somebody wants to try it now? I imagine it's not as easy as it is for you, right, but say somebody wants to play with it.
B
What could they do now? They could go to... the best thing, I think, is the OpenShift MicroShift docs page. Let me go over here, go to the docs page, and I will paste the getting-started link into the chat, if I can do that. I'll put it over here, and I think you, Andrea, can post it. It's absolutely the easiest way to get started, yeah, and it gives you instructions on how to run it on a virtual machine, as I'm doing it here.
B
The instructions are for libvirt on a Linux system. Sorry, we are a Linux company; we usually use it as the hypervisor, yeah. But basically, if you use something else, that also works perfectly fine, yeah. Take a RHEL 8.7 base image from AWS, whatever, yeah, and then follow the instructions and you're good to go. That's the easiest way to get started.
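For reference, the getting-started flow on a RHEL machine or VM boils down to enabling the repo that ships the package, installing MicroShift, and starting the service. A sketch of the shape of those steps, written to a file rather than executed here, since the exact repo and package names depend on the release and the official getting-started doc is authoritative:

```shell
# Rough shape of the getting-started steps (repo/package names vary by release;
# follow the official MicroShift getting-started documentation for exact commands).
cat > getting-started.sh <<'EOF'
#!/bin/sh
set -e
# Enable the OpenShift repo that ships the microshift RPM (name is illustrative)
sudo subscription-manager repos --enable rhocp-4.12-for-rhel-8-x86_64-rpms
sudo dnf install -y microshift
sudo systemctl enable --now microshift
# Fetch the generated kubeconfig to talk to the cluster with oc/kubectl
sudo cat /var/lib/microshift/resources/kubeadmin/kubeconfig
EOF
chmod +x getting-started.sh
```

After the service is up, `oc get pods -A` against that kubeconfig should show the small set of system pods Daniel demonstrated earlier.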
A
And I guess that's all we have time for today. So, a couple of comments at the end: I think we'd like you to come back, maybe when 4.12 is out, and give us updates, a little bit of roadmap, and maybe talk more about use cases, because I think that's the interesting part. But for today: thank you very much, Daniel. Thank you.
A
Thank you, Stefan, for your contribution, and thank you all, and see you next week on OpenShift TV.