From YouTube: Operating System for Edge — Ben Breard (Red Hat) | OpenShift Commons Gathering, KubeCon NA
Description
Operating System for Edge
Ben Breard (Red Hat)
OpenShift Commons Gathering
2020-11-17
My name is Ben Breard; I'm on the product management team here at Red Hat. I've basically spent the past five years operating in between RHEL and OpenShift, covering all of our immutable operating systems, container runtimes, image offerings, and things in that world, and now I'm really focused on edge and what it takes to be successful there. I just hit 10 years at Red Hat, which is awesome—I have the greatest job in the world.
I get to play with all the fun stuff and work with all these super cool use cases from customers, so it's absolutely great. And yes, at one time I held an RHCA; one day in the future I hope to renew that, but we'll see how this goes.
Okay, so what is edge? For everyone's education—oh, that's terrible; anyway, we'll keep moving here—basically, we now have a situation where there are sources of data all over the place, and all kinds of use cases to capitalize on that data quicker, faster, and more efficiently. In some cases we've reached the limits of where we can actually ship data to a cloud or data center environment to process it, and so we're putting compute closer to these sources of data. Sometimes those sources are humans.
Sometimes it's sensors. It can be all types of use cases, but that's typically the combination of what we're talking about with edge. And of course, as you can probably imagine, all of the things we take for granted inside a data center or cloud environment are now luxuries on the infrastructure side that may or may not exist at the edge. That's a key thing. Another point I'd make is that depending on what industry you're in, edge is going to look completely different.
Sometimes it will look and feel like a remote office; it's totally different for telcos and content providers and so forth. In the industrial manufacturing space, we start hitting limits of what you can do with protocols and things of that nature, and so we have to look at the scale problem very differently in this world in terms of interoperability.
Right now the edge is kind of like the Wild West in a lot of ways: in terms of what hardware is available, what accelerators are being used, if any; oftentimes there's an existing legacy footprint and the question of how newer systems interact with it—or maybe they're interacting with hardware systems talking to PLCs on a manufacturing line that can't stop because of safety protocols.
There are a lot of moving pieces, and it's going to continue evolving and adapting a lot over the next few years. As for how these protocols work at a very low level—sticking with the manufacturing cases—there's not a lot of commonality between the vendors and the types of communication that we see. So it's a complicated matrix of layers of technology that intersect here, and of course there's the consistency perspective—
basically, keeping things updated. We often talk about the convergence of operational technology (OT) with IT networks and systems, and that sometimes creates organizational tension, but it also creates a huge opportunity for us to solve problems and really work with customers to make sure we can meet requirements and have these systems converge over time. All right—now let's look at this through an application lens.
You're going to see all the traditional cloud-native applications and a lot of the AI/ML stuff. This slide should really say OpenShift and RHEL on it, because these are really the types of applications that we see being super relevant in the edge space.
Most of the growth happening on the cloud-native side could just be repackaging of existing applications, as well as literally forklifting workloads from other environments and putting them out at the edge. And then, of course, the AI/ML side is obviously growing really fast, whether you're training models or just executing them—
close to sensors, for example, or doing inferencing on, say, a webcam feed, these types of things. We basically see the OS—RHEL specifically—being a great fit for a lot of these use cases. All right, so to put this in context: we've got a lot of cool deployment options for OpenShift in an edge context.
We can now do smaller three-node clusters; remote worker nodes landed in 4.6, and much more progress has been made there; and single-node support is on the way. But today, just to put this in context, we're talking about what you can do with just Linux. You could think of it this way—I've had two people come up with this term: we have K8s, and there's K3s, which is the slimmed-down Kubernetes.
I kind of think of a lot of this as "K0": what can I do with container runtimes and the OS? It turns out you can actually do a heck of a lot here. Going back to the trends we see happening: a lot of the time you'll see edge computing connected somehow with some type of digital transformation
initiative. The IT/OT convergence is another huge thing going on, or really people are just trying to make better use of analytics on top of data—to either make themselves smarter from a competitive standpoint, improve the customer experience, or just increase operational efficiency. Those tend to be the bigger trends people are going after. I talked a little bit about the verticals already, so I'm not going to rehash that.
But what can you do with a standalone OS at this point? Really, it's a container host for cases where the concept of a cluster doesn't add value—these independent systems that, once you put them in motion, keep going.
That's a really, really common use case that we can solve really well, particularly on smaller-footprint devices. These are just edge servers, or a gateway where we're really just passing packets back and forth. I talked a little bit about computer vision a second ago, where we're doing inferencing on a feed coming into the system and trying to identify what's happening in it and make decisions from that. Kiosks are still a huge thing too, particularly in the transportation space—
we see huge investment happening there—and then, of course, we see the classic IoT use cases rolling up under edge as well. All right, so on to RHEL. In the next update, which will be 8.3—we're on a time-based release model now, so that's the November release—we have these four things landing.
This represents our first step in the journey of adapting RHEL for edge. We're not finished with this release; again, it's the first step for us. Effectively, what we have is this tool called Image Builder—we'll go over it in more detail—but basically, we can generate pretty small-footprint operating system images.
They can either be purpose-built for a particular piece of hardware, use case, or workload, or be a generic container host. And then we get a whole bunch of benefits because we're using rpm-ostree in the background, which makes it super easy to update, really efficient with those updates over the wire, and then we have some cool technology that helps us roll back.
I mentioned earlier that there's a pretty sizable legacy footprint in a lot of these edge use cases, and basically there's no technical reason why we can't just drop containers in next to traditional daemons running on a Linux system. It works great. Now, depending on whether you need to orchestrate and do fancy things with those containers,
you'll probably hit the limits of that pretty quickly, and that's obviously where Kubernetes has a massive amount of value. But if this is more of a static-workload case, it's pretty simple to do and works really well. One of the things I do want to point out is that in RHEL 8, we make it easy for regular daemons running on the system to get the same kernel primitives that give you container isolation.
These types of things are really easy—there's a list of maybe eight line items you can put into a unit file that starts an app, and it'll give you a very similar type of isolation, which is super cool when you consider how connectivity is increasing and how important security is and will continue to be in the future.
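As a sketch of what that looks like in practice, here is a hypothetical unit file using systemd's sandboxing directives (the service name and binary path are made up; the directives themselves are standard systemd options available in RHEL 8):

```ini
# /etc/systemd/system/edge-app.service — illustrative example of using
# systemd sandboxing directives to get container-like isolation for a
# plain daemon, without a container runtime involved
[Unit]
Description=Sandboxed edge application

[Service]
ExecStart=/usr/local/bin/edge-app
DynamicUser=yes
PrivateTmp=yes
PrivateDevices=yes
ProtectSystem=strict
ProtectHome=yes
ProtectKernelTunables=yes
ProtectControlGroups=yes
NoNewPrivileges=yes
RestrictNamespaces=yes

[Install]
WantedBy=multi-user.target
```

With `DynamicUser=` and `ProtectSystem=strict`, the daemon runs as a transient user with a read-only view of the OS, which approximates the filesystem and privilege isolation containers give you.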
Okay, so from a container runtime perspective, we're talking more about the Podman side of the house right now. Hopefully everybody here is familiar with the difference between CRI-O, which is meant to talk to the Kubernetes CRI, and Podman, which uses basically all the same underlying components but is standalone, in the sense that it has a CLI.
It now has a Docker-compatible API in the version coming out in 8.3, and it's a super lightweight runtime that works incredibly well. One thing we like about it for this use case is that we have much better integration between Podman and systemd than we ever had in the Docker world, which—going back to that static-workload model—makes it super easy to have container images that can auto-restart.
The system just knows how to run these like any other service it supplies, which is really great. So again, we've got the new API coming in 8.3, and another thing that makes this whole model really nice is auto-update based on the tag on your images.
If you're managing container lifecycles at your registry—which you should be; everybody should be doing that—and you want a certain tag to land on a certain set of boxes, maybe you'll phase it in, or have all of them pull the prod application.
Basically, we can have timers on these nodes that check that tag at whatever interval is appropriate and auto-pull the image, updating as new ones are made available on the registry. Little features like that make life super simple and easy to scale, because these are all client-initiated actions. So it's nice.
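A rough sketch of that tag-based auto-update flow (the image and container names are placeholders; the `io.containers.autoupdate` label and the `podman-auto-update.timer` unit are real Podman features):

```shell
# Run a container whose image should be refreshed from the registry;
# the label tells podman-auto-update to compare against the registry tag
podman run -d --name webapp \
    --label io.containers.autoupdate=registry \
    registry.example.com/apps/webapp:prod

# Generate a systemd unit so the container behaves like any other service
podman generate systemd --new --name webapp \
    > /etc/systemd/system/container-webapp.service
systemctl daemon-reload
systemctl enable --now container-webapp.service

# The bundled timer periodically checks the tag and pulls/restarts
# the service whenever a newer image is published
systemctl enable --now podman-auto-update.timer
```

Because the nodes poll the registry themselves, this scales without any central push infrastructure—exactly the client-initiated model described above.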
Okay, so let's talk a little bit about what we're getting in 8.3. I mentioned Image Builder; it's kind of the front door of this tooling.
It's made available via the Cockpit UI—there's also a CLI and an API for this—but really, you log in, and in approximately four clicks you'll get the default image. If you need to customize it with RPM content, you can; here I'm going to include crun, as I really enjoy crun—it works with cgroups v2 and is super fast at instantiating containers as well. We'll go ahead and commit this to the blueprint. Again, you don't have to do this; the default works fine.
You just select the image type—"RHEL for Edge commit"—and this is going to generate an rpm-ostree commit that we can then serve out from a central place, which again gives us that remote update capability. And that's it—it kicks off the build right there, and we can see it going.
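The same flow works from the command line. A sketch with `composer-cli` (the blueprint name is made up; `rhel-edge-commit` is the RHEL for Edge image type in 8.3):

```shell
# Hypothetical blueprint adding crun to a minimal edge image
cat > edge.toml <<'EOF'
name = "edge"
description = "RHEL for Edge commit with crun"
version = "0.0.1"

[[packages]]
name = "crun"
version = "*"
EOF

# Push the blueprint and start a RHEL for Edge compose
composer-cli blueprints push edge.toml
composer-cli compose start edge rhel-edge-commit

# Watch the build, then download the resulting tarball when finished
composer-cli compose status
composer-cli compose image <compose-uuid>
```

The `<compose-uuid>` comes from the `compose start` output; the download is the same tar artifact the Cockpit UI produces.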
It happens pretty quick on my junky laptop; this will complete in, I don't know, seven or eight minutes. If you're running on good hardware, expect faster results.
The compose is finished here, and "compose image" is really just going to download it and give me an artifact I can work with. This happens super fast since it's local. I now have a tar file with that commit of the rpm-ostree repository locally, and next I'm going to build a web server to host my commit, so I'll show the file here—you can see it's super simple.
It just pulls in Apache, extracts the tarball I made in Image Builder, and then serves that—no magic at all is needed here.
So let's go ahead and build that image. It happens quickly because it was already built on this node, and then I'm just going to bind port 8000 so I can run this particular one rootless—again, there's no requirement to do that.
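A minimal sketch of that "no magic" serving setup, assuming the tarball from Image Builder is in the current directory (the file name, base image, and ports are illustrative):

```shell
# Containerfile: Apache plus the extracted ostree repo from Image Builder
cat > Containerfile <<'EOF'
FROM registry.access.redhat.com/ubi8/httpd-24
# ADD auto-extracts the Image Builder tarball, which contains the repo
ADD edge-commit.tar.gz /var/www/html/
EOF

podman build -t edge-repo .

# Rootless: map a high host port (8000) to httpd inside the container
podman run -d --name edge-repo -p 8000:8080 edge-repo
```

Any HTTP server works equally well; the devices only need to reach the repo URL to pull commits.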
But this is a good proof point that you can host these any number of ways you want. Once that's going, I'm just going to curl the latest ref of the commit. If you've used rpm-ostree or looked at it, you'll know it's modeled after Git, so a lot of the same ideas and concepts you're probably familiar with from Git carry over—rpm-ostree basically leverages those.
So once you've made an image from Image Builder, that's a good way to easily serve it up. Now let's talk about the updates themselves. Day one is pretty simple. Edge environments differ a lot, though: some of them have amazing data-center-style networks, which are super fast, and efficiency at that tier is
a nice-to-have, but maybe not a requirement. In some environments, though, we have just horrible connectivity—microwave links that make old dial-up modems look really fast. In retail we still see fractional T1s and those types of things. What's cool is that even if you have constrained networking,
if you generate what's called a static delta, you can pull the update over with much less TCP overhead, which is just great and again increases that efficiency.
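For illustration, generating a static delta is done on the serving side (the repo path and ref name here are placeholders):

```shell
# Generate a static delta from the previous commit to the newest one
# on the given ref; clients can then fetch it as a few large objects
# instead of many small ones
ostree --repo=/var/www/html/repo static-delta generate rhel/8/x86_64/edge

# List the deltas now available to clients
ostree --repo=/var/www/html/repo static-delta list
```

Fewer, larger HTTP fetches is what cuts the TCP overhead on slow or high-latency links.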
But even if you have really great connectivity and bandwidth isn't a scarce resource, you still probably want to be using it for your applications and workloads, not OS updates and those types of things. Having that efficiency really helps regardless of what type of infrastructure you have, and again, it's just a great side effect of using rpm-ostree natively for all of this. Now, provisioning: if you're familiar with RHEL CoreOS, you may be wondering why this isn't Ignition.
Well, we're looking at Ignition, and we may include it in the future as an option—we're certainly open to that. But right now, we just see this gamut of hardware across these devices with all kinds of odd requirements, and Anaconda works incredibly well to fit
this rpm-ostree commit onto those systems—Anaconda just makes this really easy today. This little example kickstart has a bare-bones top section, and then, instead of a %packages section where you'd normally list out all the content to install, we just point it at the rpm-ostree commit on your mirror. If you point it at your production mirror, then once you deploy the system, it knows where to look for updates automatically.
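A bare-bones kickstart along those lines might look like this (the URL, ref, and user settings are placeholders; `ostreesetup` is the kickstart command that replaces %packages in this model):

```shell
# Minimal kickstart sketch for a RHEL for Edge device
lang en_US.UTF-8
keyboard us
timezone UTC
zerombr
clearpart --all --initlabel
autopart --type=plain
rootpw --lock
user --name=core --groups=wheel --password=edge

# Deploy the rpm-ostree commit from the mirror instead of installing
# packages; the deployed system then pulls future updates from this URL
ostreesetup --nogpg --osname=rhel --remote=edge \
    --url=http://mirror.example.com/repo --ref=rhel/8/x86_64/edge
```

Because `--url` is recorded as the ostree remote, pointing it at the production mirror during install is what wires up automatic update lookups later.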
That's probably a good thing to do if you can, and that's all you need. You can do any customization in %post if needed—%pre is still there too—but a good rule is to keep these as simple as possible.
Okay, so that's how we can easily get the commit onto your devices. Now, a little bit about rpm-ostree: it really gives us the best of both worlds. Think of traditional embedded-type firmware—say your router, which may have an A/B partition scheme on it. We blend that A/B update model with the benefits of a package-based distribution.
So we get the benefits of the A/B model, where we can fall back if necessary, as well as the flexibility of packages, which is great. Really, everything of the operating system that lands under /usr gets swapped out—or at least, each commit contains the full OS—and once you pull that into your repo and clone it locally, we only send the delta.
So the whole operating system is not technically immutable by the strictest definition possible, and that's not a bad thing, because true immutability often requires a significant amount of infrastructure to be available, and that's not something we can count on in these environments. Again, maintaining your configs and container images and those types of things is generally a really healthy and convenient thing to do. And we always get a known state that we're operating from on the system, which is powerful.
I mentioned earlier that we can automatically stage these updates in the background, which is a great way to approach it—that's probably what I would do, but what you should do depends on your environment. Then, whenever an update is staged, you'll typically want to align the reboot to a maintenance window.
A lot of these systems are responsible for critical infrastructure; you can't just accept free-form reboots like we would expect in a cloud environment, and so it's pretty easy to schedule.
So that's how that works. Updates will cost you a reboot; however, as long as you time it, accepting a scheduled reboot typically means less disruption—or less potential unknown disruption—than updating a live running system and making changes on the fly.
This is a really good model. Okay, the last little screenshot is to give you a look if you haven't played with rpm-ostree on a system before. I'm going to SSH into a bare-metal system I'm running—again, using the web console terminal to do this. I'm running a container that's sucking the four cores of this little box dry, and when I check the status there's a single commit, so this box has just been provisioned from Image Builder.
I'm manually triggering an update, because I don't want to wait for the timer to do it automatically for me, and this particular one just pulled in the newly rebuilt container-tools packages. When I check the status, we can see that I have a new deployment that is staged and not running on the system—and of course, my workload has not been interrupted at all; it's still going. Now I'm forcing a reboot just to move through this really quickly.
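The demo sequence amounts to roughly these commands (output elided; the exact refs depend on how the system was provisioned):

```shell
rpm-ostree status    # one deployment: the commit the box was provisioned from
rpm-ostree upgrade   # pull and stage the new commit from the configured remote
rpm-ostree status    # two deployments: the new one staged, the old one booted
systemctl reboot     # boot into the staged deployment

# After the reboot, the asterisk in `rpm-ostree status` marks the active
# deployment; `rpm-ostree rollback` would stage the previous one again
```

The running workload is untouched until the reboot, because the staged deployment lives alongside the booted one.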
You can see how I got impatient and tried to SSH before the system came back. I check the container runtime, and of course my application is running as expected, because it's being managed by systemd. We can see the asterisk has moved up, and I'm in my new update now. Again, this model is familiar if you've been using things like Atomic Host in the past, or maybe Silverblue in Fedora; RHEL CoreOS uses this model as well.
So hopefully this makes sense to everybody. Now, this last thing is what's new and unique to what we have in RHEL.
I can basically write scripts, and greenboot gives us a framework for running those scripts that integrates with the boot process of the system and, if they fail, uses a counter system. It's as flexible as you need it to be: if an update causes one of the critical roles of that node to fail, we can revert the state of the system and go back to where it was working.
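As a sketch, a minimal greenboot health check is just a script dropped into the directory of required checks (the container name `webapp` is hypothetical; a non-zero exit marks the boot as failed, and once the retry counter is exhausted, greenboot rolls back to the previous deployment):

```shell
# greenboot runs everything in /etc/greenboot/check/required.d/ at boot;
# any required check that exits non-zero fails the boot
cat > /etc/greenboot/check/required.d/01-workload.sh <<'EOF'
#!/bin/bash
# Fail the boot if the workload container did not come up
podman ps --format '{{.Names}}' | grep -qx webapp
EOF
chmod +x /etc/greenboot/check/required.d/01-workload.sh
```

Tying the check to the workload (rather than just "did the OS boot") is what links OS updates to application health.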
So it's super powerful, and we're really excited to have that linkage between the workloads and the operating system update level. One of the customers we worked with really closely on this capability told us that, basically, the combination of rpm-ostree and greenboot is going to save them millions of dollars across their deployments: once these systems get provisioned at the edge,
the goal is to never go back and revisit them physically, so you can imagine how having this type of safety mechanism in place is a big deal for a lot of the people moving into this space. All right—so with that, again, I mentioned 8.3 as the first step in our journey of meeting the challenges of this space.
We do see the security story of RHEL being a huge value to edge deployments. In fact, the challenges we talked about earlier in this talk—I would say all of them live on top of the security concern, because in the data center, things tend to be physically protected.
We have cages, we have badge systems, and you may or may not have any of that at the edge. Being able to promise that same level of security without physical protection is huge, and RHEL has a huge value proposition there.
That's true today, and it's going to get even better in the future. The other thing about edge in general is that we see complexity as a huge challenge for really any IT project, and we see RHEL in particular solving a lot of the complexities in this space. So hopefully, with the little example and the features we've brought through in 8.3,
hopefully you can see that if I need to put relatively static applications on some smaller devices—or maybe they're big servers, it doesn't really matter—and just maintain them at a steady state and make sure they're updated,
this is a super simple way to just go meet that need and be successful. And then, of course, why not do that by leveraging the existing investments in people, skills, and technologies that folks know and love from Red Hat? That's a key value prop in everything here. At this point I guess everybody's probably going crazy thinking, "Oh my gosh,
I've got to get my hands on this and try it—go conquer edge in my environment." So one: you've come to the right conclusion. And two: it's super easy to go do this and get your hands on it. If you go to the osbuild GitHub repo, we really have this whole thing documented out there, and you can walk through it.
It's super simple—you can just do it in a couple of VMs if you like, really however it makes sense—and it'll take you anywhere from 20 to 40 minutes, depending on your setup. And of course, we'd love to get feedback from you all and hear what you think.
This will GA pretty soon, when 8.3 hits the streets, and we will update this demo to reflect that. And with that, I guess that's our look at how we're adapting Red Hat Enterprise Linux for the edge. I appreciate everybody's time and being here.