Description
In this session, Red Hat's Ben Breard will discuss and demo how Red Hat Enterprise Linux (RHEL) is conquering the complexities of computing at the edge.
A: All right, everybody, welcome to another OpenShift Commons briefing. Today, as I like to do on Mondays, we have new topics, new initiatives, and open upstream projects. Today we've brought with us Ben Breard from the RHEL team, and he's going to talk about the operating system at the edge. Edge is pretty new and pretty edgy, but Ben brings a lot of experience to it, so we will really enjoy this talk. So, Ben, take it away — and at the end of the presentation we'll have live Q&A.
B: Awesome. Hey, so my name is Ben Breard; I'm on the product management team here at Red Hat. I've basically spent the past five years operating in between RHEL and OpenShift, covering all of our immutable operating systems, container runtimes, image offerings, and things in that world, and now I'm really focused on edge and what it takes to be successful there. I just hit 10 years at Red Hat, which is awesome — I have the greatest job in the world.
B: I get to play with all the fun stuff and work with all these super cool use cases from customers, so it's absolutely great. And yes, I at one time had an RHCA; one day in the future I hope to update that, but we'll see how this goes.
B: Okay, so what is edge? Well, for everyone's education — oh, that's terrible! Anyway, we'll keep moving here. Basically, we now have a situation where we have sources of data all over the place, and there are all kinds of use cases for capitalizing on that data more quickly and efficiently. In some cases we've reached the limits of where we can actually ship data to a cloud or data center environment to process it, and so we're putting compute closer to these sources of data. Sometimes those sources are humans.
B: Sometimes it's sensors. It can be all types of use cases, but that's typically the combination of what we're talking about with edge. Of course, you can probably imagine that all of the things we take for granted inside the data center or cloud environment are now infrastructure luxuries that may or may not exist at the edge. That's a key thing. One other point I would make is that depending on what industry you're in, edge is going to look completely different.
B: Sometimes it will look and feel like a remote office. It's totally different for telcos and content providers and so forth, and in the industrial manufacturing space...
B: ...it looks quite different as well. Every industry has its own take, so it's good for us to look at this from a holistic view, not just from one industry, and realize that it does vary quite a bit. If we step back and look at common challenges, we see that scale is often associated with these deployments. In a data center it's pretty common to look at deployments in the tens of thousands, approaching hundreds of thousands. In edge, though, these numbers often start in the millions range, and you can just imagine trying to perform a task on a million devices.
B: We start hitting the limits of what you can do with protocols and things of that nature, so we have to look at the scale problem very differently in this world. Then, in terms of interoperability...
B: ...right now the edge is kind of like the Wild West in a lot of ways: what hardware is available, what accelerators are being used, if any. Oftentimes there's an existing legacy footprint and a question of how newer systems interact with it — or maybe they're interacting with hardware systems, talking to PLCs on a manufacturing line that can't be stopped because of safety protocols.
B: There are a lot of moving pieces, and it's going to keep evolving and adapting over the next few years. As for how these protocols work at a very low level — sticking with the manufacturing case — there's not a lot of commonality between the vendors and the types of communication we see down there. So it's a complicated matrix of intersecting layers of technology, and then, of course, there's the consistency perspective.
B: Basically, keeping things updated. We often talk about the convergence of operational technology (OT) with IT networks and systems, and that sometimes creates organizational tension, but it also creates a huge opportunity for us to solve problems and work with customers to make sure we can meet requirements and have these systems converge over time. All right — now let's look at this through an application lens.
B: You're going to see all the traditional cloud-native applications and a lot of the AI/ML stuff. This slide should really say OpenShift and RHEL on it, because these are the types of applications we see being super relevant in the edge space. There's already a lot of traditional footprint out here; like I said, RHEL specifically has been doing edge computing long before we had that word or label as an industry, so there's already a sizable footprint out there.
B: We do see most of the growth happening on the cloud-native side. It could just be repackaging of existing applications, as well as literally forklifting workloads from other environments and putting them out at the edge. And then, of course, the AI/ML side is growing really fast, whether you're training models or just executing them.
B: Close to sensors, for example, or doing inferencing on, say, a webcam feed — for these types of things we see the OS, or RHEL specifically, being a great fit for a lot of these use cases. All right, so to put this in context: we've got a lot of cool deployments of OpenShift in an edge context.
B: We can now do smaller three-node clusters, remote workers landed in 4.6 and much more progress has been made there, and the single-node work is on the way. But today, just to put this in context, we're talking about what you can do with Linux alone. You could think of it this way — I've had two people come up with this term: we have K8s, and there's K3s, the slimmed-down Kubernetes.
B: I kind of think of a lot of this as "K0": what can I do with container runtimes and the OS by themselves? It turns out you can actually do a heck of a lot here. Going back to the trends we see happening: a lot of the time you'll see edge computing connected somehow with some type of digital transformation...
B: ...type of initiative. It could be the IT/OT convergence, which is another huge thing going on, or really people just trying to make better use of analytics on top of data — to either get smarter from a competitive standpoint, improve customer experience, or just increase operational efficiency. Those tend to be the bigger trends people are going after. I talked a little bit about the verticals already, so I'm not going to rehash that. But what can you do with a standalone OS at this point?
B: Really, it's a container host for cases where the concept of a cluster doesn't actually add value — these independent systems that, once you put them in motion, keep going. That's a really common use case here that we can solve really well, particularly on smaller-footprint devices, whether edge servers or a gateway where we're really just passing packets back and forth.
B: I talked about computer vision a second ago, where we're doing inferencing on a feed of what's coming into the system, trying to identify what's happening in it, and making decisions from that. Kiosks are still a huge thing too, particularly in the transportation space.
B: We see huge investment happening there, and then, of course, we see the classic IoT use cases rolling up under edge as well. All right — so with RHEL, in the next update, which will be 8.3 (we're on the time-based model now, so that's the November release), we have these four things landing, which represent...
B: ...our first step in the journey of adapting RHEL for edge. We're not finished with this release; again, it represents the first step for us. Effectively, what we have is this tool called Image Builder — we'll go over it in more detail — but basically we can generate these pretty small-footprint operating system images.
B: They can either be purpose-built for a particular piece of hardware, use case, or workload, or be a generic container host. Then we get a whole bunch of benefits because we're using rpm-ostree in the background, which makes it super easy to update and really efficient with those updates over the wire, and we have some cool technology that will help us roll back...
B: ...if we need to, and we'll take a look at those in more detail. Before we get into some demos and other things, I want to talk a little bit about running containers alongside traditional workloads.
B: I mentioned earlier that there's a pretty sizable legacy footprint in a lot of these edge use cases, and basically there's no technical reason why we can't just drop containers in next to traditional daemons running on a Linux system. It works great. Now, if you need to orchestrate and do fancy things with those containers...
B: ...you will probably hit the limits of that pretty quickly, and that's obviously where Kubernetes has a massive amount of value. But if this is more of a static-workload case, it's pretty simple to do and works really well. One thing I do want to point out is that in RHEL 8 we make it easy for regular daemons running on the system to get the same kernel primitives that give containers their isolation applied to services installed on your system.
B: These types of things are really easy — there's a list of roughly eight line items you can put into the unit file that starts an app, and they will give you a very similar type of isolation. That's super cool when you consider how connectivity is increasing and how important security is and will continue to be in the future.
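For reference, the kind of unit-file hardening being described looks roughly like the sketch below. The service name and binary path are placeholders; the sandboxing directives are standard systemd options that reuse the same kernel primitives (namespaces, mount restrictions, privilege limits) that containers are built on.

```ini
# /etc/systemd/system/myapp.service -- hypothetical service
[Unit]
Description=Legacy daemon with container-style isolation

[Service]
ExecStart=/usr/local/bin/myapp
# Roughly the "eight line items": each directive narrows what the
# process can see or touch on the host.
NoNewPrivileges=true
ProtectSystem=strict
ProtectHome=true
PrivateTmp=true
PrivateDevices=true
ProtectKernelTunables=true
ProtectKernelModules=true
RestrictNamespaces=true

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable --now myapp.service`; the daemon then runs with most of the host's filesystem read-only or hidden from it.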
B: Okay, so from a container runtime perspective we're talking more about the Podman side of the house right now. Hopefully everybody here is familiar with the difference between CRI-O, which is meant to talk to the Kubernetes CRI, and Podman, which uses basically all the same underlying components but stands alone as a CLI — and now has a Docker-compatible API.
B: That's in the version coming out in 8.3. It's a super lightweight runtime and works incredibly well. One thing we like about it for this use case is that we have much better integration between Podman and systemd than we ever had in the Docker world, and again this makes it super easy — going back to that static-workload model — to have containers that auto-restart.
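A sketch of that Podman–systemd integration (the container name and image are illustrative): `podman generate systemd` emits a unit file so systemd handles restarts and boot ordering.

```shell
# Run the workload once, then capture it as a systemd service.
podman run -d --name web -p 8080:8080 \
    registry.access.redhat.com/ubi8/httpd-24   # illustrative image

# --new makes the unit create a fresh container on each start,
# rather than reusing one fixed container.
podman generate systemd --new --name web \
    > /etc/systemd/system/container-web.service
systemctl daemon-reload
systemctl enable --now container-web.service
```

From then on the container behaves like any other service: it starts at boot and restarts on failure, with no daemon beyond systemd itself.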
B: And if you're managing container lifecycles at your registry — which you should be doing; everybody should be doing that — and you want a certain tag to land on a certain set of boxes, maybe phasing it in, or having all of them pull the prod application...
B: ...basically, we can have timers on these nodes that check that tag at whatever interval is appropriate and auto-pull the image, updating as new ones are made available in the registry. Little features like that make life super simple and easy to scale, because these are all client-initiated actions.
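A hedged sketch of that tag-tracking model using Podman's auto-update mechanism. The image name and tag are placeholders, and the exact label value can vary by Podman version.

```shell
# Opt the container in to registry-driven updates and manage it with a
# unit generated via --new, so a restart recreates it from the new image.
podman run -d --name app \
    --label "io.containers.autoupdate=registry" \
    registry.example.com/shop/pos:prod     # hypothetical image:tag
podman generate systemd --new --name app \
    > /etc/systemd/system/container-app.service
systemctl enable --now container-app.service

# The bundled timer periodically checks whether :prod points at a new
# digest and, if so, pulls it and restarts the unit -- all client-initiated.
systemctl enable --now podman-auto-update.timer
```

Rolling a new build to the fleet is then just retagging `:prod` at the registry; each node picks it up on its own schedule.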
B: So it's nice. Okay, let's talk a little bit about what we're getting in 8.3. I mentioned Image Builder; it's the front door of this tooling.
B: It's made available via the Cockpit UI, and there's also a CLI and an API. But really, you log in and in approximately four clicks you're going to get the default image. If you need to customize it with RPM content, you can; here I'm going to include crun, which I really enjoy because I like using cgroups v2 and it's super fast at instantiating containers. We'll go ahead and commit this to the blueprint — you don't have to do this; by default...
B: ...we will give you everything you see listed on the slide: a small core install with our container tools, as well as some goodies we'll look at in the next couple of slides.
B: You can see I just select the image type — RHEL for Edge commit — and this is going to generate an rpm-ostree commit that we can then serve out from a central place. Again, this is what gives us the remote update capability. And that's it — it kicks off the build right here, and we can see it going.
B: It happens pretty quickly. On my junky laptop this will complete in, I don't know, seven or eight minutes; if you're running on good hardware, expect faster results. All right.
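The same flow through the CLI side looks roughly like this sketch. The blueprint name and contents are illustrative; `composer-cli` drives the same osbuild service that sits behind the Cockpit UI.

```shell
# A hypothetical blueprint that layers crun on top of the default set.
cat > edge.toml <<'EOF'
name = "edge-node"
description = "RHEL for Edge commit with crun"
version = "0.0.1"

[[packages]]
name = "crun"
version = "*"
EOF

composer-cli blueprints push edge.toml
composer-cli compose start edge-node rhel-edge-commit  # kick off the build
composer-cli compose status                            # watch progress
composer-cli compose image <BUILD-UUID>                # download the commit tarball
```

The result is a tarball containing an ostree repo with the generated commit, ready to be served from a web server.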
B: One thing you have to understand is that you are now driving these boxes: you're driving the update cadence, and you have fine-grained control over everything that happens on these systems. Once these updates are created, we can put them on any type of web server.
B: Just give me Apache, extract the tarball that I made in Image Builder, and go ahead and serve it — no magic at all is needed here. So let's build that image; it's going to happen quickly because it was already built on this node. Then I'm just going to bind to port 8000 so I can run this particular one rootless. There's no requirement to do that, but it's a good proof point that you can.
B: You can host these any number of ways. Once that's going, I'm just going to curl the latest ref of the commit. If you've used OSTree or looked at it, you'll know it's modeled after Git, so rpm-ostree basically leverages a lot of the same ideas and concepts you're probably familiar with from Git. All right — so once you've made an image with Image Builder, that's a good way...
B
You
can
easily
serve
that
up
and
let's
talk
about
the
oh,
the
updates
themselves
so
day.
One
is
pretty
simple
updates,
though
a
lot
of
edge
environments,
some
of
them
have
like
amazing
data
center
style
networks,
which
is
super
fast,
and
you
know,
efficiency
at
this
tier
is
not.
You
know
it's
a
nice
to
have,
but
it's
maybe
not
a
requirement,
but
in
some
environments
we
have
just
horrible
connectivity
like
like
microwave
links,
that
make
make.
A
B
Look
really
or
old,
dial
up,
modems
look
really
fast.
You
know,
retail,
we
still
see
like
fractional
t1s
and
these
types
of
things,
and
so
what's
cool
is
even
if
you
have
constrained
networking.
B: ...if you generate what's called a static delta, you can pull the update with much less TCP overhead, which is just great and again increases efficiency. And even if you have really great connectivity and bandwidth isn't anywhere near a scarce resource, you still probably want to spend it on your applications and workloads, not on OS updates. So having that efficiency really helps regardless of what type of infrastructure you have.
B: It's a great side effect of using rpm-ostree natively for all of this. Now, provisioning: if you're familiar with RHEL CoreOS, you may be wondering why this isn't Ignition. Well, we're looking at Ignition, and we may include it in the future as an option — we're certainly open to that. But right now we see this whole gamut of hardware with all these weird requirements, and Anaconda works incredibly well to fit...
B: ...this rpm-ostree commit onto those systems; Anaconda just makes this really easy today. This little example has a bare-bones top section, and then, instead of a packages section where you would normally list out all the content to install, we just reference the rpm-ostree commit and point it at your mirror. If you point it at your production mirror, then once you deploy the system it's going to know where to look for updates automatically.
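A bare-bones kickstart along those lines might look like this sketch — the URL, ref, and partitioning are all illustrative; the key point is that `ostreesetup` takes the place of the usual `%packages` section:

```
lang en_US.UTF-8
keyboard us
timezone UTC
zerombr
clearpart --all --initlabel
autopart
rootpw --plaintext changeme
# Pull the prebuilt commit instead of installing packages; the URL is
# also recorded as the remote the deployed system polls for updates.
ostreesetup --nogpg --osname=rhel --remote=edge \
    --url=http://updates.example.com/repo --ref=rhel/8/x86_64/edge
reboot
```

Because the remote is baked in at install time, pointing the kickstart at the production mirror means the box knows where to find every future update.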
B: Okay, so that's how we can easily get the commit onto your devices. Now a little bit about rpm-ostree: it really gives us the best of both worlds. Think of traditional embedded-type firmware — your router, say, which may have an A/B partition scheme. We blend that A/B update model with the benefits of a package-based distribution.
B: So we get the benefits of the A/B model, where we can fall back if necessary, as well as the flexibility of packages, which is great. Everything of the operating system that lands under /usr gets swapped out — or at least, each commit contains the full OS — and once you pull that into your repo and clone it locally, we only send the delta.
B: So the whole operating system is not technically immutable by the strictest definition possible, and that's not a bad thing, because true immutability often requires a significant amount of infrastructure to be available, and that's not something we can count on in these environments. Still, maintaining your configs and container images this way is...
B: ...a really healthy and convenient thing to do. And we always have a known state that we're operating in on the system, which is powerful. I mentioned earlier that we can automatically stage these updates in the background, and that's a great way to approach it — probably what I would do, though it depends on your environment. Then, whenever an update is staged, you typically want to align to a maintenance window; again, a lot of these systems are responsible for critical infrastructure.
B: You can't just accept free-form reboots like we would expect in a cloud environment. It's pretty easy to schedule reboots with a timer — or any number of other ways; you can use any type of management system — and once you have a scheduled reboot, when the systems come back up they'll be on the next update automatically.
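One way to wire that up, sketched with an arbitrary maintenance window: rpm-ostree stages updates in the background, and a plain systemd timer performs the reboot that applies them.

```ini
# /etc/rpm-ostreed.conf -- download and stage updates automatically;
# they only take effect at the next boot:
[Daemon]
AutomaticUpdatePolicy=stage

# /etc/systemd/system/maintenance-reboot.service (hypothetical window)
[Unit]
Description=Reboot into any staged OS update

[Service]
Type=oneshot
ExecStart=/usr/bin/systemctl reboot

# /etc/systemd/system/maintenance-reboot.timer
[Unit]
Description=Weekly maintenance-window reboot

[Timer]
OnCalendar=Sun *-*-* 03:00
Persistent=true

[Install]
WantedBy=timers.target
```

Enable both pieces with `systemctl enable --now rpm-ostreed-automatic.timer maintenance-reboot.timer`; the window shown (Sunday 03:00) is just an example.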
B: So that's how that works. Updates will cost you a reboot; however, as long as you time it, accepting a scheduled reboot should typically be less disruption — or less potential unknown disruption — than updating a live running system and making changes on the fly. It's a really good model.
B: Okay, the last little screenshot is just to give you a look if you haven't played with rpm-ostree on a system before. I'm SSHing into a bare-metal system I run — again, I'm using the web console terminal to do this. I'm running a container that's sucking the four cores of this little box dry, and when I check the status, I'm running a single commit; this box has just been provisioned from Image Builder. That's the commit we're updating from.
B: I'm manually triggering an update because I don't want to wait for the timer to do it automatically. This particular one just pulled in new content — the container-tools packages have been rebuilt — and when I check the status we can see a new deployment that is staged but not yet running on the system. Of course, my workload hasn't been interrupted at all; it's still going. Now I'm forcing a reboot just to move through this quickly.
B: You can see how I got impatient and tried to SSH in before the system came back. I check the container runtime and, of course, my application is running as expected, because it's managed by systemd. And we can see the asterisk has moved up: I'm on my new update now. This model will be familiar if you've used things like Atomic Host in the past, or maybe Silverblue in Fedora; RHEL CoreOS uses this model as well.
B: So hopefully this makes sense to everybody. Now, this last piece is what's new and unique in RHEL.
B: I can basically write scripts, and greenboot gives us a framework to run those scripts, integrated with the boot process of the system, and to catch failures using a counter system. It's as flexible as you need it to be: if an update causes one of the critical roles of that node to fail, we can revert the state of the system and go back to where it was working. Super powerful.
B: We're really excited to have that linkage between the workloads and the operating system update level. One of the customers we worked with really closely on this capability told us that the combination of rpm-ostree and greenboot is going to save them millions of dollars across their deployments. Once these systems get provisioned at the edge, the goal is to never go back and revisit them physically, so you can imagine how having this type of safety mechanism in place is a big deal for a lot of people moving into this space. All right.
B: So with that — again, I mentioned 8.3 as the first step in our journey of meeting the challenges of this space. We do see the security story of RHEL being a huge value for edge deployments. In fact, I would say all the challenges we talked about earlier in this talk live on top of the security concern, because in the data center things tend to be physically protected.
B: We have cages, we have badge systems — and you may or may not have any of that at the edge. So being able to promise that same level of security without physical protection is huge, and RHEL has a huge value proposition there today, one that's going to get even better in the future. The other thing about edge in general: we see complexity as a huge challenge for really any IT project, and we see RHEL...
B: ...in particular solving a lot of the complexities in this space. Hopefully, from the little example and the features we've brought through in 8.3, you can see that if I need to put relatively static applications on some smaller devices — or maybe big servers, it doesn't really matter — and just maintain them in a steady state and make sure they're updated...
B: ...this is a super simple way to meet that need and be successful. And then, of course, why not do it by leveraging the existing investments in people, skills, and technologies that folks know and love from Red Hat? That's a key value prop in everything here. So at this point I guess everybody's probably going crazy, thinking, "Oh my gosh...
B: ...I've got to get my hands on this and try it," and going to conquer edge in your environment. So, one, you've come to the right conclusion, and two, it's super easy to get your hands on it. If you go to the osbuild GitHub repo, we have this whole thing documented, and you can walk through it. It's super simple — you could just do it in a couple of VMs if you like.
B: However makes sense. It'll take you anywhere from 20 to 40 minutes, depending on your setup. And, of course, we'd love to get feedback from you and hear what you think.
A: Thank you — that was awesome. And thanks, everybody, for remaining quiet so I could get a great recording of the demo and everything else. I will unmute folks if you would like to ask any questions. You did tell us how to get started, and that's always my first question: where to go to get started.
A: But I'm also curious: I work as one of the co-chairs of the OKD working group, and we're using Fedora CoreOS and talking back and forth with folks like Peter Robinson on the Fedora IoT side, and we use Ignition for the OKD 4 release. I know you were talking about Anaconda, but is it on the roadmap to eventually support Ignition?
B: So, I don't know — there are pros and cons to the technology you use for day-one config. There's a lot to like about Ignition, but it's not necessarily a slam dunk. I think the best of all worlds would be if we could give people the choice of Ignition or Anaconda, because then, you know, people that have these...
B: ..."I need this weird RAID thing" and all this complex stuff — that's actually super simple to do in Anaconda, whereas with Ignition we kind of have to copy the OS to memory, then scramble your disk, then write it back out from memory onto disk. And again, we need that failure domain to be basically non-existent when I can't visit systems. So that's personally what I would like to see.
B: I don't know — I think that plan is probably going to shake out over the next few months, but yeah, sorry, I don't have anything solid there.
A: The talk, and listening to you — I mean, I'm thrilled, and I'm not going to ask when the 8.3 release is going to land, because that's one of those things — we're now a publicly traded company, we can't disclose those kinds of things, or I'd get shot by an engineer. But I'm hoping it's relatively soon, because we've had a lot of conversations with people trying to shape OKD 4 for edge use cases, and that conversation about, well...
B: Oh yeah, great — I should have put that in the presentation, but a lot of this technology and these ideas come out of the Fedora IoT space. Greenboot comes out of that; Peter Robinson was really one of the guys that drove a lot of it. The difference, though, in how we chose to productize...
B: ...this was that, rather than make a pre-built artifact like we do on the CoreOS side, we wanted it to be composable, because in this space we know the range and breadth of needs is huge. You know, if you have a pretty cookie-cutter environment, you know what you need to go solve.
B: OpenShift is awesome, particularly if you need the capabilities of the Kube API and you need to dynamically deploy stuff — everything's super powerful and it's a great fit, and obviously edge is a huge focus for us on that side as well. But even with that, there are still those use cases I rattled off on one of the slides where, as powerful as Kube is, it's probably not going to replace all of Linux, right?
B: It's not good for all the things Linux can do, and so I think for those cases, really just turning the knobs on what you can do with an OS will meet those needs really well.
A: Yeah — well, this is Diane's opinion, not Red Hat's or anybody else's — but there's this wishful thinking about Kube on the edge: let's have one succinct solution, and then I get all the functionality of Kubernetes at the edge. And then you go through the trouble of trying to do it...
A: ...and you know how big it is, and when you really start thinking about edge and how small some of these devices are, whether there's even enough memory to run it. So there are a lot of value propositions in RHEL, but projects like Podman are a huge help in letting us deploy and be efficient in this space.
A: It's always interesting how people associate cloud native with Kubernetes, but it's all these ancillary projects as well — the helper apps that actually make things work and potentially, you know, change our lives and change where we can do innovation. It'll be really wonderful to see, after the next release, what people do with their use cases as they bring them forward. So, where do you want people...
B: I did want to show you this — this was the little box in that demo. These are super cool: just eight gigs of memory and a tiny Atom processor — and all the Atoms are on the support list for RHEL now, which is pretty cool. They're not super expensive, and they're just really, really cool with what you can do with them.
B: So, going back to your point a second ago, I think there's huge opportunity and value in this space for both Linux and Kubernetes.
B
It
really
reminds
me
of
that
same
time,
like
all
the
conversations
are
like
this,
like
dreaming
of
what's
possible
again,
which
is
really
cool
and
exciting
for
me,
but
I
don't
know
it's
it's
just
an
interesting
parallel
that
I
think
we
see
with
like
the
maturity
and
stuff,
and
we
of
course
learned
a
lot
of
lessons
over
the
past
10
years.
A: And everything — so really, thank you for taking the time today to do this. I'm not seeing a ton of questions; I think people are off in fantasyland now, trying to figure out what they're going to do and how they're going to test that build-on-the-edge demo of yours. I think that's going to be great, and I really appreciate you coming today.
A: Is there any place you want to point people to besides that demo? If they want to reach out to you, is there an IRC channel or Slack or some place where they can get hold of you if they have a use case?
B: Yeah, I'll just drop my email in the chat. It does not come with an SLA — I have a long backlog of email — but hit me up if you have questions or ideas or feedback; we would love to get it. At some point we probably need to beef up a community path here.
A: I think it's interesting, because someone has asked me to boot up a SIG in OpenShift Commons for just edge stuff, but until we got to the point you've been describing today, it was a lot of slideware, internal POCs, customer POCs, and a lot of conversations in Fedora IoT.
A: So I think maybe now we have a center of gravity for doing something, maybe on a monthly cadence, or at least for creating a community landing page where people can reach out and share their use cases, because I know a few people keep showing up at the OKD working group or the Fedora CoreOS meetings, and Peter's really been doing very good stuff.
A: So maybe we can do something to make that happen in the not-too-distant future, because now that everything's virtual we can wake people up in the UK really early in the morning, as we are wont to do. This morning I was up at 4:45 a.m. to record somebody's talk, so if I can get up that early, we can get Peter up.
B: Yeah, time zones — I still dream of a world where we all operate in UTC, but I don't know if that's happening.
A: Today was recording stuff — Open Source Summit EU happened this morning, booted up and everything. It's crazy, and it's really interesting to see how we evolve in how we use these virtual platforms and what kind of interactivity we can do. I still haven't seen really great hackathons online, or literal hands-on workshops — especially now with edge, where you have to have the hardware. So what are you going to do, ship the hardware to someone's house?
A
You
know
like
like
a
virtual
swag
bag
with
that
compulab
fill
it
in
it
and
which
would
be
an
awesome,
swag
bag.
By
the
way
we
would
all
love
that,
so
that
would
be.
A: ...really cool. So thanks again for taking the time to do this. We will be rehashing this at the OpenShift Commons Gathering on demand, and we'll probably ping Ben to join us there too, to answer any questions. So look forward to that on November 17th at the OpenShift Commons Gathering, on day zero of KubeCon, along with like 20 other day-zero events. It's going to be crazy that week; there's going to be a lot.
A: Way better, that's true! The new thing I'm doing now is encouraging people to binge-watch this stuff, as opposed to watching it live on the day of, because binge-watching an event like Open Source Summit or KubeCon is probably more realistic than getting the whole day off from work. That's the other catch with virtual: when you went to a conference in person, you couldn't physically really work — well, you shouldn't, if you...
A: ...were there. Right now it's, "Oh well, I'm still coding here, and I still gotta do this deploy over here, but I've got you on the screen over here," and it's not the concentrated, focused educational fire hose that in-person events are. So I think the new model for virtual events is really getting the event data and content available online so people can binge-watch it instead of Black Mirror or something else.
B: Right — because every time I watch Black Mirror, I hate myself afterwards. So I like value-added content that, like you said, is educational and adds real value. I mean, it's not zoning out to Tiger King, but man, what a better way to spend your evenings.
A: You know, it's surprising, because with the YouTube stuff it's always the long tail of people watching. But the other thing: on Friday we did a talk with Fabio Pereira, who wrote a book called Digital Nudge, and it was great. He used one example — which is why I have the Netflix binging on my mind — that, like Tiger King, it doesn't ask you if you want to view the next thing.
A: It just goes to the next episode, and so you're off — you don't even have to opt in; you'd have to opt out of watching the next OpenShift Commons Gathering talk. I'm thinking Netflix is my new business model for Commons content, so we'll see if I can get Netflix to agree with me. So anyway, thanks again for taking the time today. We'll let you go — and if you can send me the slides, I'll make sure those go out too.