Description
Get your espresso ready for the OpenShift Coffee Break as we celebrate the community behind the third edition of the Red Hat Hackfest! Join us to discover how the Hackfest community builds event-driven apps that run at the open edge with Quarkus and RHEL for Edge and connect to open clouds with OpenShift.
Twitch: https://red.ht/twitch
A
Good morning, good morning, everyone. Welcome back to the OpenShift TV Coffee Break. Here at OpenShift TV we have our architectural and tech talks, all about OpenShift and cloud native. My name is Natale Vinto, product marketing manager for OpenShift here at Red Hat, and today I'm very pleased to have as special guests the Red Hat Hackfest community. So welcome Andrea, Andreas and Mattia. How are you today?
B
Hi Natale, thanks for the invitation and thanks for this. All good, thanks.
A
Morning! My pleasure, my pleasure to have you folks here, because I know the community ran another successful Hackfest, and I would like to let you tell us: what is the Red Hat Hackfest? What is this community? We had the winners of the Hackfest join us last month, and they talked about their whole project. Today—
B
Yeah, sure. You know, we had several opportunities to meet here and to talk to the OpenShift Coffee Break audience. It's a continuous development now. So the Red Hat Hackfest is an event that Red Hat delivers to its partner ecosystem, right? We just want to bring partners together to have fun, get challenged with a technical solution or technical implementation, and the whole process also brings to the partner ecosystem — and in turn to customers, other than the community — a huge amount of technical tools that they can reuse.

B
So we try to build repeatable, consumable and scalable components for enterprise architecture. An important thing to mention here is that we try to focus on solutions over products, which means in turn we work on digital transformation, or specifically edge computing solutions, trying to make sure that we cover one vertical at a time. So there is the vertical, there is the customer's success, there is the partner's success, and also the idea of looking at a bright future in terms of reliability, right?

B
So this is what we do as the community, and our various team members work hard to implement something for the event and then support the partner ecosystem participating in it afterwards — mostly the winners, of course. But I'm happy to hand over to Andreas.
C
Not much to add to the introduction. You know, I'm the only infrastructure guy here, and it's also quite interesting to talk about that. So I'm doing the infrastructure part, and, as you know, we're not only talking about stuff running inside a secured data center; we're going towards the edge, and that's also a very interesting and special aspect, once you take your workloads that you used to run on huge…
A
You say that you are the infrastructure person, and, you know, you are proud about it. It looks like you are all in with, you know, those cloud native developers here. So how is the life of the ops guy inside the dev team?
C
Well, you know, the thing for the ops guys today is that we're getting closer to developers. That's one of the big changes that we have in that scale-out and containerized world.

C
We need to learn some of the principles that are standard for developers, that we didn't have before. You know, I'm very much in the automation business with the Ansible environment, and if you take a look at the changes inside Ansible automation now: operators have to deal with Git, with versioning, with build chains for their whole stuff. They weren't used to that.

C
So the work of the operator gets more dev-style each day, and if we talk about the things that we built for the Hackfest — you know, the distribution of the workloads, the downloading of the workloads from the data center, offloading them to the edge — these are all processes controlled by automation, and operational setups that align with principles that developers typically use, and not operators. So that's one of the big changes for the infrastructure, for the operator.
A
Yeah, thanks, Andreas, for sharing these thoughts. I think you're right, I think there's a shift in the paradigm — now more close to development. And Mattia is the DevOps guy, and I know you are a great dev.

A
Is it a developer role? What is it — is it a kind of software engineer, or is it an SRE?
D
I guess it's a full stack role, to be honest, because, you know, they need to be familiar with the infrastructure first of all — how all the components work underneath, your master and worker nodes — but as well you need to know how applications need to be deployed, you know, how they work, how they integrate with each other, how they communicate with pods, services, routes.

D
You know, ingress. So you kind of need to know both worlds, and I always say as well, in my job I'm not a middleware or dev guy anymore, but a kind of hybrid, right? Because you need to be familiar with the infrastructure, but as well you need to know how to use the infrastructure.
A
Right, and…

E
Yeah, Marcus, hello. I'm a partner enablement manager in the same team as Andreas, and my main focus is on SAP-related stuff, and hence I'm…
B
So, based on what everybody is saying here in this session, the full stack engineer definition shifts slowly in between the dev and the ops, right? It's weird to me, and maybe this is off-topic, but this is actually what we are also trying to understand within the community, discussing on a weekly basis how deep into the technical details of the platform or the operating system a developer should be. I mean, now we have Mattia and Andreas here, and also Marcus, who are great champions in different areas.

B
Right, again, I'm a developer, but usually in the community you don't see only Andreas and Marcus from the infrastructure side: we have 80 percent of the members specialized in RHEL, or operating systems in general, and container platforms or virtualization. So this full stack Kubernetes engineer definition is kind of weird to me, if I may — and I agree.
C
Look at technologies like CodeReady Workspaces, or Eclipse Che, which is the upstream project. If you set that one up, right, the developer doesn't have to deal with, you know: where does my stuff run? How do I set up my hardware? How do I get my development environment? Because operations is doing that for him. And this is something newer than what we had throughout the last years, where the developer really had to dig down into infrastructure: how do I get my setup right?

C
How do I adjust my software so that it runs in that environment that I'm on? That's what we try to get away from, to give the developer more freedom to say: here's my code, run it — click run, you don't care where, you know. That's serverless and the things that we talk about. So yeah, we should be able to take the developer off all that infrastructure if we set it up right.
D
I think, when I say that, it's not everything: it means that you need to know how the platform works. It doesn't mean you need to go deep-dive on the node to understand what the issue is, to see the failing container or whatsoever. And as well, for CodeReady Workspaces, like you mentioned, it's about standards: I use these developer tools, everyone uses them the same way, or works consistently. So it just helps the developer.
A
I think we were talking about Kata Containers. Is anyone of you knowledgeable about sandboxed containers in OpenShift? We can share some info before we go into our topic. Oh—

A
Yeah, okay, I will share in the chat the link to Kata Containers on OpenShift, so you can check the documentation to learn more. And thanks for bringing questions to the chat — please send us questions on the topic we're talking about. And, Andrea, talking about the topics of today: can we shortly resume and talk about what the Red Hat Hackfest is and what technologies are involved, and then we go again into the technical details?
B
…use cases preparing for the Red Hat Hackfest, and this is the last one we've been working on. The reason why it's use case two is because we usually work on the same use case for two Hackfests. This is worth it, thinking of the huge bunch of technology we put in and that we rely on, and also it's important to have a continuous development, also for the people who join the community, because the community is not only meant to do something enterprise-grade for Red Hat and its ecosystem; it's the community, of course.

B
And knowledge, right? So that others can join us, see what we do and, of course, suggest. So the use case we implemented, which we are happy to speak about today — also digging deeper into some details if our audience is keen to ask questions — is a manufacturing use case. Manufacturing is quite interesting; it could be converted easily into energy and utilities.

B
In that case, converting use cases built for a specific vertical into other verticals is always a matter of the technologies underneath, but, generally speaking, the architecture should be flexible and extensible enough to make sure that it can be scaled out once it's needed.

B
So we had some hardware provided by our friends from Intel, and we implemented both the edge device and the edge server piece, simulating a production facility: many edge devices controlling stuff or receiving data from sensors, and whatever is responsible for the machineries producing something.

B
Then the edge devices are responsible for the control of the machineries and for sending and sharing the telemetries to the edge server. We implemented the edge server using Single Node OpenShift on top of the Intel NUC device, which is something we actually got certified — the Intel BU and the Red Hat BU agreed on certifying our solution, because it was worth having for several medium and large customers.

B
Last but not least, the data center side is implemented on top of the OpenShift Container Platform, and that includes, and is shipped with, several third-party technologies. It's not all about the Red Hat stack, right? So the technology stack starts from the hardware underneath and goes through the platform — and by platform I mean the operating system, Kubernetes and several other components that make the connection between the other layers.

B
In terms of microservices, applications and whatever makes the environment work: the data center part is responsible for security, workload distribution and also the business logic that is tightly coupled to the use case we implemented. I don't know if there are any questions in the meantime.
A
Oh, Andrea, there were some questions, actually, from our attendees on Twitch. For instance, one related to OpenShift — since in the Hackfest, you know, the community worked not only on the edge part but also on the OpenShift part. So one of the questions is: are we by any chance working on an OpenShift sizing tool? We don't have something easy for that. You know, I remember we had what we called a lab for OpenShift 3 — it was a sizer tool.

A
But do any of you have any helpful answer, or any best practices, for sizing an OpenShift cluster? How did you do it, for instance, in the Hackfest?
D
Well, not strictly related to the Hackfest, but usually when you go to a customer, you know — OpenShift is kind of a dynamic platform, right? So, first of all, you start with the basics. Then you set up all the monitoring around capacity and size, right, and together with the autoscaling capability you're able to scale out the cluster based on the workload. You know, you always have to keep a threshold, right — say 80 percent of the workload — and then you start autoscaling. But because it's dynamic, you have to think in a dynamic way.

D
A sizing tool can be a guideline, but you should not take it like a bible, right, because your workload always changes — maybe the next day another application is going to join your cluster, so however you did the sizing, it will be wrong anyway. So if you have autoscaling automation around the scalability of the cluster, then you don't need the sizing tool anymore: just start with the basics and then scale out. So start simple and then go bigger.
C
Yeah, to see it the simple way: OpenShift is the new Linux, let's say it that way. It's just the platform to run your application, you know, and there's no standard sizing for how you set up a Linux environment, because it totally depends on what you run on top of it. So we cannot have a standard tool for how you size OpenShift. The thing that you need to do is: you create your application.

C
First, you take a look at the workload that your containers create, and that your application creates, and based on that you start the sizing. So the sizing is not for OpenShift; the sizing is for your workload running on top of OpenShift.
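The autoscaling approach Mattia and Andreas describe maps onto OpenShift's autoscaler resources. A minimal sketch — the machine set name `worker-us-east-1a` and the replica limits are illustrative, not values from the Hackfest:

```yaml
# ClusterAutoscaler: cluster-wide limits for automatic node scaling.
apiVersion: autoscaling.openshift.io/v1
kind: ClusterAutoscaler
metadata:
  name: default
spec:
  resourceLimits:
    maxNodesTotal: 10        # never grow the cluster beyond 10 nodes
  scaleDown:
    enabled: true            # also remove nodes when load drops
---
# MachineAutoscaler: min/max replica bounds for one MachineSet.
apiVersion: autoscaling.openshift.io/v1beta1
kind: MachineAutoscaler
metadata:
  name: worker-us-east-1a            # hypothetical machine set name
  namespace: openshift-machine-api
spec:
  minReplicas: 1
  maxReplicas: 6
  scaleTargetRef:
    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
    name: worker-us-east-1a
```

With these in place the cluster grows when pending pods cannot be scheduled and shrinks again when nodes sit idle — the "start simple, then scale out" pattern from the discussion.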
A
That's a good point, also, and I agree with what you both say: with OpenShift, maybe on top, the architecture is more dynamic. You can really scale up and scale down on demand, you can collect metrics, and then, you know, there are tools like the cluster autoscaler or the machine sets — automation that is delivered by the adoption of the operator pattern, also for the operating system and the core components.

A
It helps you to shape your cluster, and yeah, I think it's a good point from both Mattia and Andreas. And in the chat we have another user who says they always used Ansible for preparing the devices for the developers — so another good story to share. Andreas, I know you have your Ansible t-shirt as well; that looks very cool. So please share your experience in the chat, and please send questions — it's an open conversation. Really cool, thanks, yeah.
C
Actually, the thing is, what we did as a part of the QIoT initiative is that we used Ansible to actually deploy the workloads to the edge. You know, that was one of the components that we're still working on and improving. The idea is: you have the data center in the cloud.

C
Then you have the local edge server, where you have OpenShift, where you have your factory pods running, and then, if you need something running on the edge, we have a couple of methods to do that. For our initiative we decided to run the pods that you create for the edge on Podman, on the edge device.

C
So what we built is a RHEL for Edge setup to set up these fitlet devices, and then we took Ansible to deploy the containers that will run on the edge — actually push them to the edge and have them running there. So this was one of the things we actually used Ansible for, and it is a good fit for that, you know. And that was part of the whole setup.
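A workflow like the one Andreas describes — pushing a container to an edge device and running it under Podman — could be sketched with Ansible's `containers.podman` collection. The inventory group, image and container names here are hypothetical, not the Hackfest's actual playbook:

```yaml
# deploy-edge-workload.yml — sketch: pull a container image onto
# edge hosts and keep it running under Podman.
- name: Deploy sensor workload to edge devices
  hosts: edge_devices            # hypothetical inventory group
  become: true
  tasks:
    - name: Pull the workload image from the registry
      containers.podman.podman_image:
        name: quay.io/example/sensor-service   # hypothetical image
        tag: "1.0"

    - name: Run the workload as a Podman container
      containers.podman.podman_container:
        name: sensor-service
        image: quay.io/example/sensor-service:1.0
        state: started
        restart_policy: always
        ports:
          - "8080:8080"
```

Run with `ansible-playbook -i inventory deploy-edge-workload.yml`; the same play can be re-run idempotently whenever a new image version is published.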
A
Very interesting, Andreas, and I'm curious to know more about it. Do you have some slides to share that I can bring up?

A
Yeah, yeah, that's it — because I see, Andrea, you have some deck here. I don't know if…
B
You can share it, if you want, to give our audience an overview from the technical perspective. Here are the big topics we are keen to discuss today, other than the questions we may get. Of course, one of them is managing edge devices.

B
Now, Andreas is our Ansible and RHEL champion here, so, other than what we have done — managing the workload — we are thinking of several alternatives to what we have done (and it's a matter of time, of course), and I will let Andreas discuss them. I'm talking about PXE boot and several different opportunities.

B
Also, management of the edge device with Ansible is key, and this also connects to what Mattia is bringing to the community: he's an amazing Java developer, but he's also a real expert in security. So, thanks to his recommendations and skills, we introduced a key component that is quite interesting to customers, actually: who has the ownership, at the end of the implementation phase, of the whole environment and landscape? And this piece is what we call distributed security, through mutual authentication.

B
After this quick technical overview — so I'm presenting here, and let me also zoom in more; yeah, that works, I guess — here is another technical overview of the architecture for the production plant. The edge server, of course, is responsible for two things: connect the data center with the devices, but also run several algorithms and processes that make sure that the devices are managed properly — registered, eventually decommissioned if they get hacked — distribute the certificates for the security and, last but not least, handle the telemetry.

B
The telemetry, of course, could definitely be based on the OpenTelemetry standard, which we are investigating thanks to the recent adoption of that component and module also in the Quarkus universe, and…

B
…owned by the factory, and the factory itself, and that's why, of course, we introduced this additional layer: data center, plant, factory and machineries. On the machinery, at the moment, as Andreas stated, we are using Podman. We are also investigating MicroShift. MicroShift will also lead to a more centralized control and overview through some tools — I don't want to take over there, I don't want to steal the words from Andreas's mouth, so I will give him the chance to discuss all the interesting things soon.

B
So, Single Node OpenShift works like a charm. We had been eager to adopt this solution a few months ago when, back in the day, Single Node OpenShift was still about to be released GA, and we have to say it worked pretty, pretty well.
A
Andrea, thanks for sharing the architecture. I think this is really cool, because it gives an idea of all the components — so, if you want to start a project involving edge devices and event-driven applications ingesting lots of data from sensors, edge devices, IoT devices:

A
this is really cool, because the people who would like to understand how the project works can get from this overview both, you know, the client part and the server part. It's very, very interesting, and the technical details are in the Quarkus for IoT project GitHub repos that we will share in a moment. So it was very interesting.

A
Maybe we can go now, if you like, into the edge part; I think it's very interesting to understand this part also. And I would like to thank you — from the chat, there's a Twitch user who says: "This chat you all have is where I learned to automate with Ansible. You are all amazing. I learn something every time I watch something from Red Hatters." This is really cool; this is why we are here.

A
Thank you very much, and thanks to everyone attending. I think those are very interesting tech talks, and I'm looking forward to hearing more from you folks. So what is this edge part — can we talk more about it?
C
Sure. The edge part is — you know, when we go to the edge, we have some different ways of running things on the edge. As you know, we're talking about a setup in a factory, with a device attached to a piece of machinery.

C
So, you know, everyone who ever worked in a factory: you see these machineries, and they have this big red button which says "stop", you know, in case something goes wrong. You just hit the button and the whole machine shuts down. If you combine that machine with IT, the thing that you cannot have is, you know, somebody getting stuck in the machine and shouting "help me, push the red button", and the operator says: "oh, I have to cleanly shut down the control PC first,

C
otherwise I get data corruption." It's not going to happen: they hit the red button and the thing turns off. So when you create compute devices attached to this type of machine, you have to keep things like that in mind. And that's why we have Image Builder — in case you're not familiar with it, let me get that into the picture.
A
Yeah — while Andreas was showing that, I shared the GitHub repos in the chat. This is the home page of the Quarkus for IoT project, where you see the articles from the community — Andreas wrote an article on how to install RHEL for Edge on the Compulab fitlet2. And, Andreas, you are sharing the screen, so let me switch over to your screen. That's—
C
Okay. So the thing that we have here is just a standard tool; it's a part of every RHEL installation. It is based on open source, of course: it's based on two systems called lorax and weldr, which make up Image Builder. As you can see here, what we have is just a virtual machine with Cockpit.
A
So it's a SaaS tool connection?
C
Yeah — but where is it, yeah? I have to go to the beta page for that. So what you already can do, if you don't want to install everything on a standard system, is go to console.redhat.com and use Image Builder from there. I haven't tried it yet to build edge images, so I'd be careful with that, but, you know, it's an easy step.

C
Or you just set up a RHEL system — for those who say "I want to play with open source first", take CentOS Stream or any enterprise Linux clone that is out there — install Cockpit, install Image Builder (it's described on our site), and then what you get, as a front-end tool inside Cockpit, is that Image Builder tab. And what you see here is the build setup that I created for the fitlet device.

C
You add a user, you add an SSH key to that, so that you can log into the remote machine, and then you create an image. And the interesting part here is: the tool initially is designed to give you the opportunity to create an operating system image that you can run everywhere. So you create your gold master; you run it on this fitlet, you run it on Amazon, you run it on Google Cloud, whatever — the image creator is exactly for that.

C
It gives you different types of target image that you can build from your blueprint. The interesting one for us is the RHEL for Edge commit, which means we take all the stuff that we want to have on that fitlet device and put it into a structure.
A
Can you increase the font?

C
Let me give that a try.
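The blueprint Andreas assembles in the Cockpit UI can also be written by hand and fed to `composer-cli`. A minimal sketch — the package list, user name and key are illustrative, not the Hackfest's actual blueprint:

```toml
# edge-blueprint.toml — minimal Image Builder blueprint for a
# RHEL for Edge commit (illustrative values).
name = "edge-fitlet"
description = "Edge image for the fitlet device"
version = "0.0.1"

[[packages]]
name = "podman"
version = "*"

[[customizations.user]]
name = "edgeuser"                      # hypothetical user
key = "ssh-ed25519 AAAA... user@host"  # your real public key goes here
groups = ["wheel"]
```

It would then be pushed and built with something like `composer-cli blueprints push edge-blueprint.toml` followed by `composer-cli compose start-ostree edge-fitlet edge-commit`, producing the OSTree commit discussed next.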
C
So, the RHEL for Edge commit: what happens here is that we take all the necessary things we need for that edge device build and put them into a directory structure. What we then do is boot that remote device off a standard USB stick and point it at a standard HTTP server to pull all the packages from, or we can PXE boot it. How you would install it the standard way is in the description that I wrote on the web page.
C
You
can
also
pixie
boot
that
we
can
take
a
look
at
how
I
set
that
one
up
so
that
you
have
a
networked
environment
where
you,
you
know,
grab
the
image
automatically,
but
the
important
case
about
what
we
do
here
is
the
moment
we
built
that
edge
device.
We
have
an
os
3
setup,
which
means
is,
we
do
not
have
a
full-blown
operating
system
with
dnf
yum
to
install
packets.
What
we
actually
create
is
a
write,
an
enterprise
linux,
image
that
comes
with
a
read
only
root.
It
doesn't
have
dnf.
C
It
has
the
the
whole
operating
system
in
an
os3
image
from
where
it
boots
from,
and
only
has
var
and
etc
mounted
read
right.
So
in
that
case,
if
you
turn
off
that
edge
device,
you
know
with
the
with
the
red
button.
You
won't
damage
the
root
file
system
because
it's
mounted
read
only
anyway
and
then
you
can
go
on
and
push
your
edge
loads
downward
to
that.
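Installing a device from such a commit over HTTP can be driven by a kickstart file that points at the published OSTree repository. A sketch — the repository URL and ref are hypothetical, and the partitioning is deliberately minimal:

```
# kickstart fragment (sketch): deploy a RHEL for Edge OSTree commit.
lang en_US.UTF-8
keyboard us
timezone UTC
zerombr
clearpart --all --initlabel
autopart --type=plain --fstype=xfs --nohome
# Pull the OSTree commit instead of installing individual RPMs:
ostreesetup --osname=rhel --remote=edge \
  --url=http://images.example.com/repo \
  --ref=rhel/8/x86_64/edge --nogpg
reboot
```

The `ostreesetup` line is what makes this an OSTree-based install: the installer deploys the prebuilt commit, giving the read-only root Andreas describes, rather than assembling a package set on the device.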
C
So
so
that's
that's
the
interesting
piece
and
again
the
image
builder
also
is
available
on
the
console
it's
currently
in
in
in
take
preview.
So
it's
not
directly
showing
up
here
in
the
standard
setup,
but
it
it
will
show
up
here
in
in
the
as
it
as
a
tool
button
and
say
create
it.
So
it's
is
it.
Is
it
here
yet
already?
C
No,
I
don't
see
it
yet,
but
but
it
will
be
in
in
the
console
for
the
redhead
users.
So
so
that's
how
you
build
the
edge
and
then
you
have
you
know.
Like
andrea
said
you
have
two
options.
You
could
you
could
run
apartment,
which
we
did
here
so
in
in
this
case.
This
is
just
an
overview
of
a
system
that
that
I'm
running
where
I
run
all
the
applications
in
parts
and
that's
that's
the
idea
what
we
did
in
in
the
edge
scenario.
C
So
we
then
took
the
workloads
that
we
wanted
to
have
and
put
them
to
the
edge
device
and
run
them
inside
apartment
containers
and,
as
andrea
mentioned,
we're
also
playing
with
with
micro
shift
as
an
idea
and
the
the
difference
between
those
is.
If
you
have
workloads
that
are
closer
to
the
hardware,
maybe
you
have
a
workload
on
your
edge
device
that
needs
to
access
the
gpio
or
a
usb
device
that
then,
is
wired
into
the
machine.
You
want
to
stick
closer
to
the
hardware.
C
You
could
run
with
microshift
microshift,
for
those
who
don't
know
is:
let's
go
to
the
microshift
dot,
io,
don't
type
in
microshift
blankly
you
get
to
a
webpage
of
an
american
company
manufacturing
shifts
for
mountain
bikes
so
because
they're
called
microchips,
so
that
is
a
very
stripped
down
simple
version
of
openshift
that
really
comes
with
with
very
low
requirements
for
for
cpu
and
memory,
but
it
works
with
the
cube,
ctl
or
oc
command
set.
C
So
you
manage
that
on
the
command
line,
and-
and
that's
that's
really
really
great-
and
this
is
more
targeted
for
you
know
the
workloads
that
you
directly
want
to
take
from
your
openshift,
so
where
you
built
your
containers
on
top
of
openshift
to
run
with
with
kubernetes
and
cryo,
and
you
want
to
put
them
on
an
edge
device,
so
micro
shift
is,
is
one
of
the
paths
that
you
can
do
and
by
the
way
this
one
not
only
supports
full-blown
intel
based
devices.
It
also
should
run
stuff
like
that.
C
In
case
you
haven't
seen
this
is.
This
is
one
of
the
nvidia
jetson
experimental
boards?
I
haven't
had
the
time
to
set
it
up
with
microsoft.
Yet
but
it
should
work
that
way.
We
had
a
demo
on
devconf
zed
earlier
this
year,
where
some
of
our
colleagues
in
in
the
czech
office
use
that
so
what
you
then
can
do,
is
you
even
take
ai
workloads
that
make
use
of
cuda
and
the
the
nvidia
chip
on
on
the
board
to
do
aiml
calculations
on
that?
So
that's.
That's
quite
interesting.
C
Yeah, then you can use MicroShift for that. And since we had someone talking about, you know, the Ansible stuff, what I can quickly show you before I hand it back — where's that tab? Oops. So now it's getting a little bit more, you know, towards the command line. Oh, crap, yeah — I hope this is readable, because we're now—
C
Yeah, now on an OpenShift environment — a MicroShift environment. Font size, yeah, I know; let me check the settings. Standard — let's go to something bigger than that. Okay, whoa, that should be really large. That works, that works.
C
So what we have here is actually a simple virtual machine running that MicroShift service, and, as you can see here, everything you need — you know, all that OpenShift is — is just a bunch of pods running the MicroShift service itself. Really simple. And, you know, Luigi was asking what hardware MicroShift supports — yes, it does run on a Raspberry Pi 4.

C
You know, also — so that one is just a virtual machine running on Intel. But also, because we had the question from someone saying "I'm doing everything with Ansible": if I go to `oc project awx` — oops, project — you can use MicroShift to run your AWX environment. And it is like OpenShift: you have the AWX Postgres, you have the node port forwarding, so that you can get access to your AWX installation. You can run OpenShift operators on that one. It's really simple.
C
So I have, you know, that AWX setup running on it, and I can access that and use it to, you know, run any kind of workload that has an operator, on that simple environment. So this is actually the AWX Ansible setup that is running on top of that MicroShift installation — it is that fairly simple. So this is a great tool, and we're going to use it in a future version of the QIoT project.
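Inspecting a workload like that on MicroShift uses the same client commands as any OpenShift cluster — that compatibility is the whole point. A sketch, assuming AWX was installed into a namespace called `awx` (name, service and port are illustrative):

```
# List the pods backing the AWX installation.
oc get pods -n awx

# Forward the AWX web service to localhost for a quick look.
oc port-forward -n awx service/awx-service 8080:80
```

The same two commands work against full OpenShift, Single Node OpenShift or MicroShift without change, which is why workloads built for the data center can be probed the same way at the edge.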
C
That's for, again, the workloads that are more related to compute rather than control. And if we go towards, you know, USB ports and stuff, then we stick with the Podman setup. So that's something you should play with — it's actually nice, and we're going to include it in the QIoT project too. So that's it from my side; let me get off my screen.
B
If anyone is keen to learn more, collaborate or cooperate with the community — Andreas, Mattia, myself, Marcus and also Natale, because Natale is another esteemed member of the community, and the other champions we have — feel free to join us, feel free to reach out to us. We've got the Slack channel, which is linked on our landing page.

B
And, as you can see here, the Slack link, and also the Twitter account and the mailing list for the community. Thank you.
A
You're welcome — yeah, that's very interesting. I'm going to share this Slack link in the chat. Before we do: there was a question addressed in the chat from Luigi — what hardware does MicroShift support, is ARM supported? I know that another user from Twitch sent the link to the MicroShift documentation, but do you know which hardware is supported?
C
So the important thing, if we talk about "supported", is that MicroShift currently is an open source project initiated by the office of the CTO of Red Hat. So it's not an official product; it's an upstream project. So "supported", you know, goes both ways. Of course it runs on any kind of Intel platform, and it runs on 64-bit ARM devices.

C
Now, of course, the prerequisite is that you get an operating system supporting the CRI-O engine on top of your ARM board, but this could be a kind of 64-bit ARM Debian, it could be Fedora on ARM, it could be RHEL or CentOS on ARM — that should work. Again, I haven't tried it — you know, my Raspberry Pi 4 is doing other stuff — but it should work on two CPU cores, two gigs of memory, Intel or ARM 64 architecture.
C
So again, what I have in mind is, you know, to use this — what's that? It's an ODROID-C2, which is 64-bit ARM technology — and run it on top of that, since my Pi is busy doing other things, and that should work. But again, if you just want to give it a quick try before you, you know, start messing around with setting up the ARM piece the right way: just spin up a simple virtual machine — two CPUs, four gigs of memory; everybody can run that on top of a notebook or somewhere else — and get a basic Fedora.

C
If you follow the MicroShift documentation on how to run it, the easiest way is: get Fedora 35 and follow their path for the RPM installation — you know, not the Podman installation, because then you have Podman and CRI-O on top of one system, which doesn't make sense. So you just run the Fedora RPM installation, with the MicroShift RPM from the Copr repository, and you're up and running in five minutes.
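The RPM path Andreas mentions boiled down to a few commands at the time — sketched here from memory, so the Copr repository name and kubeconfig path should be verified against the current MicroShift documentation before copying:

```
# Enable the MicroShift Copr repository (upstream-project-era name,
# verify against the current docs).
sudo dnf copr enable -y @redhat-et/microshift

# Install and start the MicroShift service.
sudo dnf install -y microshift
sudo systemctl enable --now microshift

# Point kubectl/oc at the generated kubeconfig and check the pods.
export KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig
oc get pods -A
```

That last `oc get pods -A` should show the handful of system pods Andreas scrolled through earlier — the whole control plane of MicroShift.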
C
You know, it's that easy. And then, once you've got a hold of that, you can start trying with your ARM setup, and it should work from, let's say, Debian or Armbian — that one runs Armbian. Yes, there's a guy — Victor, I forgot his full name — who has created a Debian distribution for all those non-Raspberry-Pi ARM devices, which is pretty good, and it should work with that one.
C
That's what I intend to try this ODROID with: the latest version of Armbian, which is based on Debian 10. So, there is a question: do you get a GUI in MicroShift? You know, as a Red Hatter I was afraid this question would come up, but we're talking open source here.
C
As I said to my colleagues, there is no official GUI from us, but there are some folks, you know, from the competition, who created something called Lens, and Lens is a UI for Kubernetes environments.
C
It is from, you know, the competition, but Lens is actually nice. It's an Electron app, so it works on the Chromium runtime, and you can manage anything Kubernetes with Lens in a GUI fashion. So no matter if it's OpenShift, self-hosted plain Kubernetes, or MicroShift, you can use it if you want to have a UI. You know, even if it's a product from the competition, it is a nice product, it is open, and you could use it, like I do, to manage your MicroShift. So in this case, if you want to take a look at it, I've just shared my screen with it, so you can have an overview of your node.
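To add a MicroShift node to Lens, you just hand it a kubeconfig. A hedged sketch, assuming the admin kubeconfig path used by early MicroShift builds and a hypothetical host name `microshift-host`:

```shell
# Copy the admin kubeconfig off the MicroShift host
# (path as used by early builds; adjust to your installation).
scp core@microshift-host:/var/lib/microshift/resources/kubeadmin/kubeconfig \
    ~/.kube/microshift-kubeconfig

# Lens discovers clusters from kubeconfigs: either add the file via
# "Add Cluster" in the UI, or merge it into your default config.
KUBECONFIG=~/.kube/config:~/.kube/microshift-kubeconfig \
    kubectl config view --flatten > ~/.kube/config.merged
mv ~/.kube/config.merged ~/.kube/config
```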
C
So in this case, I don't know if I can make the characters bigger on that one... does it work? No, it doesn't. So, in this case you can have multiple clusters, and that's the one I was just showing you, so you can see: this is the virtual machine, and that one actually runs Rocky Linux, for testing.
A
It's an inventory of all the Kubernetes objects. So would you suggest it as a kind of IDE? For instance, the Kubernetes connector for Visual Studio is similar to that, no?
C
No, this one is an application. It's Electron-based; it runs on Mac, Windows, Linux, wherever, and for the ones that don't want the CLI to manage their MicroShift environment, that one really comes in handy. I know it's from the competition, but it's a nice application. You know, we're open with that, and if you want to have a GUI and make your life easier, go that way. Of course, the hardcore operators will always use the command line.
A
Thanks for sharing, Andreas. It's, you know, an IDE for Kubernetes: you can check, you can monitor, you can work with your MicroShift. Very interesting, thanks for sharing it. Folks, I would use the minutes we have left to ask Mattia a little bit more about the OpenShift part, and then we'll close with the final question about the community: how did you build this wonderful community? But before that, Mattia, can you share more?
D
Yes. So when I was first involved, the architecture was simpler: we had just a central OpenShift platform and then directly the edge device, and when working with edge devices we also want secure communication with each device. So we needed to find a solution that was scalable, right? And, you know, just a disclaimer: my idea was to leverage a service mesh to have a common mesh, but service mesh at the time was not ready for the use case of an edge device, and so forth.
D
So we decided to use the standard Kubernetes approach for certificate management, with cert-manager, and to spin up certificates for the devices as well, because when you request a certificate you can define the specific common name and subject alternative names, so you can have a kind of end-to-end integration with the device. And the nice thing about cert-manager is that it is quite agnostic about which certificate authority we use.
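As a rough illustration of that, a cert-manager `Certificate` resource lets you pin the common name and subject alternative names per device. The issuer name, namespace, and DNS names below are hypothetical:

```shell
# Request a per-device certificate from cert-manager; the referenced
# Issuer ("factory-ca") is assumed to exist, e.g. backed by Vault.
kubectl apply -f - <<'EOF'
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: edge-device-001
  namespace: edge
spec:
  secretName: edge-device-001-tls
  commonName: device-001.factory.example.com
  dnsNames:
    - device-001.factory.example.com
  issuerRef:
    name: factory-ca
    kind: Issuer
EOF
```

cert-manager then writes the signed key pair into the `edge-device-001-tls` secret, which the device-facing workload can mount.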
D
For our Hackfest PoC, we simulated the customer's certificate authority using HashiCorp Vault, which provides a PKI secrets engine. Then, in the second run of the Hackfest, we improved the architecture: we simulated a kind of data center and a factory with OpenShift Single Node (SNO), and in this case you want to scale out not just to the edge but to the factory as well, and give the factory the power to spin up certificates for the devices too.
D
And for the applications running in the factory as well. You know, one of the standard best practices is that you should never use the root certificate authority directly; you always delegate to an intermediate certificate authority. The purpose of the root certificate authority is just to sign intermediate certificates, and the intermediate certificate authority is then used for application certificates; in this case, for our edge device use case.
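That root-signs-intermediate pattern maps directly onto Vault's PKI secrets engine. A sketch following the standard Vault PKI workflow; all names, domains, and TTLs here are illustrative:

```shell
# Root CA: only ever signs intermediates.
vault secrets enable pki
vault secrets tune -max-lease-ttl=87600h pki
vault write pki/root/generate/internal \
    common_name="factory-root-ca" ttl=87600h

# Intermediate CA: generate a CSR, have the root sign it, install the result.
vault secrets enable -path=pki_int pki
vault write -format=json pki_int/intermediate/generate/internal \
    common_name="factory-intermediate-ca" \
    | jq -r '.data.csr' > pki_int.csr
vault write -format=json pki/root/sign-intermediate \
    csr=@pki_int.csr format=pem_bundle ttl=43800h \
    | jq -r '.data.certificate' > signed.pem
vault write pki_int/intermediate/set-signed certificate=@signed.pem

# Role used to issue the actual device/application certificates.
vault write pki_int/roles/edge-device \
    allowed_domains="factory.example.com" allow_subdomains=true max_ttl=720h
```

A cert-manager Vault issuer would then point at the intermediate's sign path (e.g. `pki_int/sign/edge-device`), so the root key never issues leaf certificates.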
D
What I see as the next step, which I hope to implement in the next Hackfest, would be to leverage the service mesh federation capability, which allows you to unify several clusters, or devices, into a unified mesh; and the service mesh can also be integrated with cert-manager, so the part that we built can be reused. You know, it's nothing to throw away.
B
We shifted to a different solution; it's still supported, it's still valid, but of course it is clearly less automated than the service mesh itself.
A
Yeah, at that time, Andrea, service mesh wasn't... we weren't supporting, you know, the modern topologies of service mesh. Now Istio also supports ubiquitous, heterogeneous environments: from OpenShift we will have support for external virtual machines running in the mesh as well, so not only OpenShift workloads in the service mesh but non-OpenShift workloads too. This is interesting, you know, from the edge perspective. So maybe if we had had that technology three years ago we would have been rethinking it, but for the future, why not?
A
Why not? Also, as Mattia said, it's a valid solution; it's something we can reconsider, and if you're interested in that, please join the Hackfest community to discuss and participate. You know, from the chat in the wild, there's toilet sasso watching from the Philippines: hi, welcome! Welcome to the OpenShift TV Coffee Break. We are here with the Quarkus for IoT Red Hat Hackfest community. Andrea, we have some minutes left before we close.
A
I would like to ask you, you know, the question is: how do you create a successful community, and how do you keep this community engaged over the years? I know it's a hard one.
B
So we decided to form this community because, again, it's important to collaborate, no? We are professionals.
B
It's the technology and, of course, working in an open source environment, and this is the only thing that matters, right? Sharing. As we say, sharing is caring, and I believe this is true. So, other than the call to action within Red Hat to do something together, I decided to try to call for contributions from the community itself, and also from the partners participating in the events, whether individuals or entire companies.
B
They were trying to figure out a way to keep the collaboration ongoing and quite alive. So this is what we've got so far. I mean, I'm amazed by the people that join the community from time to time, or the old members like Andreas and Mattia and Marcus, and the...
B
...customer engagements, or various other reasons. But I want to mention them: Bentallyart, Mario Parr, Jeff Newson. These guys have made it possible to not just develop stuff, but also to have an open view on the future opportunities from the technical standpoint, and to have this all together now; it's fantastic, from time to time.
A
Thanks a lot, and thank you all for joining today. We also have people in the chat saying hi; Luigi says great discussion, thanks Luigi. Andreas, there was a question from Michele: Miguel asked whether a Lens ad hoc for OpenShift is necessary.
C
I don't fully understand what he means by Lens ad hoc for OpenShift. OpenShift has its own built-in web UI; it doesn't need anything. But if you have a Lens installation and you use it to manage, you know, several MicroShift installations, you can also get access to your OpenShift with it. OpenShift doesn't need it, though, because out of the box, as you know, OpenShift has its own dashboard, which has more information than a UI like Lens. So you don't need it for that.
A
Thanks for joining. So, Marcus, I would like to ask you, since we haven't chatted too much: what are your final thoughts on why people should join the Red Hat Hackfest community? What do you think?
E
Oh well, basically, this is the edge of the edge community, I would say. There is going to be a big market: you see that even large companies like SAP are looking at the edge market to feed their backend systems, their business logic, et cetera. That's basically why I'm here, because they make their money out of the databases and the correlation of the data.
E
They need this data to be fed, and one of the largest possibilities to grow their databases, to earn money, is edge. So yeah, it's a market, and the key is getting this data and turning it into data that can be used to increase the customer's business.
E
Only those people who get intelligence, or new information, out of this massive data are going to survive, and hence this is a market.
A
That's a very good point. Thanks, Marcus, for closing with these nice words on why people should join the Hackfest. If you want to join the new edition of the Hackfest, I'm putting up the link to the Hackfest landing page. If people would like to know more, or would like to ping you, Andreas, Andrea, Mattia, Marcus, with any question, technical or business, around the technology involved in the Hackfest, please reach out to them. You have all the information. And now, folks, thank you very much.
A
I would just like to leave you with some reminders: today on OpenShift TV we have our Ask an Admin show, and then we have the show Red Hat Enterprise Linux Presents. Stay tuned on OpenShift TV. We will see each other on the OpenShift TV Coffee Break next Wednesday, in the wild. Have a good day, stay safe, stay healthy, and talk soon. Ciao, bye-bye.