Description

Introduction to Fedora CoreOS (FCOS)
Benjamin Gilbert, Red Hat
Ben Breard, Red Hat
OpenShift Commons Briefing
Slides: https://blog.openshift.com/wp-content/uploads/Fedora-CoreOS-OpenShift-Commons-Briefing-July-25-2019.pdf
A: Hello everybody, and welcome again to another OpenShift Commons briefing. Today we have with us Benjamin Gilbert and Ben Breard from Red Hat, and they're going to be talking about Fedora CoreOS, or FCOS, however you want to pronounce that acronym. I'm going to let the two of them introduce themselves, give us some background on the beginnings and origins, explain what Fedora CoreOS is, and how it interrelates with everything that we're doing with OpenShift and OKD. So take it away, gentlemen.
C: ...if anybody has those. So, with Fedora CoreOS, we are pursuing a new edition of Fedora. If you're familiar with how this works on the Fedora side today, the primary two are Server and Workstation, and they are kind of use-case focused. So when you do an install out of the box, you typically get a content set, something that is a little bit more tailored for what you're going to be using it for. And with Fedora CoreOS, we're
C: ...plotting that path forward. So anyway, what you're going to see today is really kind of the result of that work. We are bringing forward some pieces of Atomic Host, but this offering will basically deprecate it over time. From a mission statement, our goal here is to create an automatically updating, minimal, monolithic, container-focused operating system, designed for clusters but also operable standalone. So we are optimized for Kubernetes, but also great without it. And at the highest level:
C: If you aren't familiar with CoreOS Container Linux, you can think of Fedora CoreOS as its successor: very similar from a usability perspective and in use-case coverage; it's that same type of mentality behind the OS. Then, really quickly, I want to cover kind of the product side and how RHEL CoreOS is different. Many people are very comfortable with the split between RHEL and Fedora.
C: Well, you know, basically we cut from Fedora when we're ready for a major release of RHEL, so Fedora is effectively the upstream for RHEL. RHEL CoreOS kind of shares that same, or a similar, lineage with Fedora CoreOS; however, we're much more opinionated on the use cases for it. So you can see here:
C: It's not meant to be a standalone OS; it is literally built with OpenShift and as a component of it. The cluster is meant to manage the operating system, and there's a really powerful operator for that specifically, the Machine Config Operator, which, if you haven't heard of it, I would definitely recommend you go read about.
B: Cool. So there were several sort of high-level design principles that went into how we think about both Fedora CoreOS and RHEL CoreOS, and I guess the principal one is immutable infrastructure. This is not something that's embodied in code as such; it's just how we think about how our fleet of container hosts should be managed. The idea here is just that whatever customizations you need to make to the host, whether it's setting a hostname, or static IP addressing, or configuring security settings, or whatever, are all encoded in a single provisioning
B: ...config file; I'll talk about that a little bit more later. Then, once that provisioning config is used to spin up a node, you don't touch that node anymore. It automatically updates itself, it runs containers, Kubernetes schedules pods onto it, but you don't SSH in and modify things, and you don't use configuration management to update the settings. We don't stop you from doing those things in Fedora CoreOS, but we discourage it.
B: A second major component of the philosophy is that all software that is relevant to a user should run in a container. We provide software for supporting hardware, or mounting iSCSI storage devices, that kind of thing. But if you want to ship custom code to the nodes, it should run in a container. In pursuit of that, we don't ship interpreters. We have bash, and awk,
B: ...if you want to think of that as an interpreter, but we don't ship Python; there's no Perl or anything like that. And we don't worry about API compatibility for the libraries that we ship in the host, because, again, those are essentially implementation details of the host, and your software is supposed to run in containers. And then, finally, another implementation detail is the OS version itself.
B: The OS releases are versioned, for both Fedora CoreOS and RHEL CoreOS, because that's useful for debugging, but the process of upgrading between releases should be completely transparent and happen behind the scenes. In particular, the upgrade from, let's say, Fedora 31 to Fedora 32 should be completely transparent; it shouldn't be a big deal. The node should just upgrade. Next slide, please. So, okay: what is this thing
B: ...we're actually building? It's a distro for servers and clouds. We're not there yet, but we want to run in a variety of clouds: AWS, GCP, Azure, DigitalOcean, some others. It's sort of a cloud-first distro. Running on bare metal and in virtualization is absolutely a first-class citizen, but we think of the OS as a cloud would, if that makes any sense. Workloads run in containers, as I mentioned, which means that the OS is pretty minimal; it doesn't have a lot of administrative tooling.
B: It maintains individual operating system releases as commits, effectively, and you download a commit and apply it, essentially, in a separate directory, and then reboot into it. So where the bullet says offline, automatic, atomic updates, that's what it's talking about: you download the delta between what you're running and what you want to run, and then reboot into it. The operating system itself is read-only, so you can't go in and modify something in /usr/sbin, for example, over and above what rpm-ostree provides. We provide automatic updates, and I'll talk about that.
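The commit-based flow Benjamin describes is rpm-ostree's. As a rough sketch of what that looks like from a shell on the node (an illustrative session; only meaningful on an rpm-ostree system):

```shell
# Show the booted deployment and any staged one.
rpm-ostree status

# Download the delta and stage the new tree alongside the running one;
# the booted system is not modified.
rpm-ostree upgrade

# Reboot into the staged deployment; the previous deployment stays
# on disk, so the node can roll back to it.
systemctl reboot
```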
B: That software provides some useful functionality, but it also provides a number of things that are not especially useful in the context of a minimal, container-focused operating system. So, in general, we are trying to avoid shipping those platform agents. I'm not sure that we will always be successful, but that's the idea. Sometimes very minimal amounts of functionality are needed; for example, on some platforms the operating system needs to tell the cloud that it's done booting. That's the kind of thing we can do in generic code that we maintain.
B: So we have this project called Afterburn, which is sort of a generic, minimal cloud agent. It has hooks for things like checking in with the cloud to report that the boot is done, or asking the cloud questions, like what the public IP address for the node is, things like that. And wherever possible, we will put functionality in Afterburn rather than shipping a whole separate agent.
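One way this surfaces on a node, sketched from Afterburn's documented pattern (the unit and variable here are assumptions for illustration, not from the talk), is an environment file of AFTERBURN_* attributes that a systemd unit can consume:

```ini
# /etc/systemd/system/myapp.service -- hypothetical unit consuming Afterburn metadata
[Unit]
Description=Example app that needs the node's public IP
After=afterburn.service
Requires=afterburn.service

[Service]
# Afterburn writes platform attributes here, e.g. AFTERBURN_AWS_IPV4_PUBLIC on AWS
EnvironmentFile=/run/metadata/afterburn
ExecStart=/usr/bin/myapp --advertise-ip=${AFTERBURN_AWS_IPV4_PUBLIC}

[Install]
WantedBy=multi-user.target
```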
B: Next slide. On the bare metal side, we think of bare metal as an extension of cloud images.
B: So on the cloud you have an AMI ID or something, and you just launch the image you want; there's no installer. So on the bare metal side, there shouldn't really be an installer either. The way you get Fedora CoreOS onto disk is, there's a script, and it essentially downloads a monolithic disk image and dd's it to disk.
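Conceptually, that script boils down to something like the following (a destructive sketch with a placeholder URL and device; the real installer also verifies signatures and can embed an Ignition config):

```shell
# WARNING: illustrative only -- this wipes the target disk.
curl -L https://example.com/fedora-coreos-metal.raw.xz \
  | xz -dc \
  | sudo dd of=/dev/sdX bs=1M status=progress conv=fsync
```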
B: We will eventually support live PXE as well, so you can network-boot a Fedora CoreOS image and run it entirely from RAM. People do this on Container Linux today as a way of actually running container hosts in production. We don't support that yet in the current Fedora CoreOS preview release, but we will support it soon. Next slide.
B: So what are we actually shipping in Fedora CoreOS? It's Fedora-based components: kernel, systemd, Podman and Moby for container engines, and whatever software is necessary for basic support of your hardware. We are shipping basic administration tools; you can SSH into the node and run journalctl, things like that. A little bit further down the road, we are talking about the best way to provide access to things like the kubelet, and
B: So let's talk a little bit more about provisioning. We have a component called Ignition. It has been used in Container Linux for two or three years now, and it is the way that you get customizations into a Fedora CoreOS machine. So you write an Ignition... well, I'll get to that a bit more in a bit. You have an Ignition config, which is a JSON document that specifies how you want the resulting machine to look, and you provide the Ignition config to a node via user data.
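A minimal Ignition config might look like this (a sketch: the version shown is from the 3.x spec series Fedora CoreOS uses, and the SSH key is a placeholder):

```json
{
  "ignition": { "version": "3.0.0" },
  "passwd": {
    "users": [
      {
        "name": "core",
        "sshAuthorizedKeys": ["ssh-ed25519 AAAA...example-placeholder-key"]
      }
    ]
  }
}
```

On a cloud, this whole document is what you would pass as the instance's user data.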
B: So most clouds have a mechanism for passing small amounts of arbitrary data to an instance; in the cloud, you use that. On bare metal, you can put the Ignition config on a web server and pass a URL on the kernel command line. So on first boot, Ignition runs very early in the boot, in the initramfs actually, fetches the Ignition config, applies it, and then continues the boot.
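For the bare-metal case, the kernel command-line hand-off looks roughly like this (argument names as I understand Ignition's documentation; the URL is a placeholder):

```
ignition.config.url=http://192.0.2.1/config.ign ignition.firstboot ignition.platform.id=metal
```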
B: So that's nice from a reliability perspective, and it also means that if the Ignition config cannot be applied for some reason, perhaps we're trying to format a filesystem on a device that doesn't exist, we can just fail the entire boot. So what that means is, if your machine boots successfully, you know that the Ignition config has been correctly applied. Next slide. So, where do Ignition configs come from?
B: Okay, so once you get a machine provisioned, how is it updated? We don't think users should have to think about updates; the machine should update itself. That way you get bug fixes as soon as they're available, and, for critical bugs, you get security fixes as soon as they're available. But in order for that to work, in order for users to not turn updates off, those updates must be reliable. They cannot break existing nodes, either accidentally, via a regression, or intentionally, from time to time.
B: You have multiple release streams, so that there's a way to test changes in a smaller context before they roll out to the entire fleet. And as a last resort, if an update rolls out to a machine and it fails to boot successfully, the OS will automatically detect that case and roll back to the previous release. Included in that is the ability for the user to specify user-provided health checks. So perhaps you have a particular service, and if that service doesn't come up successfully, then that machine is useless to you.
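One plausible shape for such a health check, modeled on the greenboot tool Fedora has explored for boot-success checking (the talk doesn't name a mechanism, so treat this as an assumption; the service name is hypothetical): a script whose nonzero exit marks the boot as failed, triggering the rollback just described:

```shell
#!/bin/bash
# Sketch: dropped into /etc/greenboot/check/required.d/ per greenboot's layout.
# If the required service isn't up, fail the boot so the OS rolls back.
systemctl is-active --quiet myservice.service || exit 1
```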
B: There is a question in chat which I'll just answer real quick. The question was: is the cadence the same as Fedora? I'll talk about that a little bit more when we get to update streams, but we take the current Fedora release and then we essentially package it up every two weeks. Update management, for new installs...
B: So we might say: roll out this update over 24 hours, and some percentage of nodes would receive that update every hour. And the idea there is just that, again, if there's some critical breaking issue, we have an opportunity to stop the rollout before it hits the entire fleet. Next slide.
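That phased-rollout idea can be sketched as a toy model (my illustration, not the update client's actual algorithm): each node draws a random "wariness" value and takes the update once the rollout's elapsed fraction passes it, so the fleet updates gradually over the window:

```python
import random

def eligible(node_wariness: float, hours_elapsed: float, rollout_hours: float = 24.0) -> bool:
    """A node takes the update once the rollout has progressed past its wariness."""
    return hours_elapsed / rollout_hours >= node_wariness

# Simulate a 1000-node fleet over a 24-hour rollout window.
random.seed(42)
fleet = [random.random() for _ in range(1000)]

updated_at_6h = sum(eligible(w, 6) for w in fleet)    # roughly a quarter of the fleet
updated_at_24h = sum(eligible(w, 24) for w in fleet)  # the whole fleet
print(updated_at_6h, updated_at_24h)
```

If a breaking issue is found at hour 6, stopping the rollout leaves most of the fleet untouched, which is exactly the property Benjamin describes.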
Okay: release streams. So the stable stream, in the middle there, is the one that most nodes will probably want to run. But in order to get there, we start with something called the testing stream.
B: So the current plan is, every two weeks we snapshot current Fedora, so right now Fedora 30, plus updates, and we issue a release on the testing stream. That sits in the testing stream for two weeks, so that people have time to report problems with it. Then, at the end of the two-week period, we take those same bits, we fix any regressions or bugs or whatever, and roll out to the stable stream. In addition, we have a third stream called next.
B: The implementation there is a little bit complicated, but the general idea is that when Fedora 31 is pretty close to release, in the last couple of months before Fedora 31 comes out, we would be releasing it every two weeks on the next stream. Because the bump from 30 to 31 is a larger set of changes, we want to make sure that it has extra time to be deployed and to get experience out in the real world. And all of this is toward the goal of making sure we can get bug
B: ...reports, and have an amount of time to fix them, before the bugs promote to the stable release. So, key to this is that every deployment should run a little bit of testing and a little bit of next, perhaps a few percent of your nodes. Because Fedora CoreOS is intended primarily for clustered applications, that should be reasonably safe: if a testing node falls over, your pods will be rescheduled onto other nodes. And by doing this, you have the opportunity to catch workload-dependent, or perhaps hardware- or network-dependent, issues.
B: So with Zincati, the update client, there is a provision for the client to connect to some cluster-wide service, or just a setting in its config, and request permission to update, where permission really means permission to finalize the update and reboot. The cluster can give that permission using whatever criteria it wants. So if it wants to ensure that updates only happen during certain hours of the day, it can do that; if it wants to take a lock and make sure that only two nodes can update at once, it can do that. Next slide.
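That cluster-coordination model maps onto Zincati's configuration. A sketch of the lock-based strategy (key names as I understand Zincati's documentation; the server URL is a placeholder):

```toml
# Drop-in config, e.g. /etc/zincati/config.d/55-update-strategy.toml
[updates]
strategy = "fleet_lock"

[updates.fleet_lock]
# A FleetLock-protocol server that hands out a bounded number of reboot slots.
base_url = "http://fleetlock.example.com/"
```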
B: Our experience with Container Linux, and I think also the experience more broadly in the Fedora project, is that it's hard to know how to direct development efforts if we don't actually know how the OS is being used. At the same time, while automatic reporting of those sorts of details has become reasonably common in the industry, it's also controversial from a privacy perspective. So we want to make sure that we have some information with which to direct development, because that makes the OS better,
B: ...but it's also very important to us that we do that in a way that preserves privacy. So, by default, Fedora CoreOS will report some telemetry back to the Fedora project. This will consist, by default, of completely non-identifying information, to the extent possible, or, phrased a different way, information that's generic enough that it shouldn't fingerprint your machine. So, things like: I'm running on AWS, I'm running an m4.large instance type, my current OS version is 1.2.3, and the initial installed version of the OS was 1.2.0. Things like that. And that just helps us know, for example, which clouds to work on improving first. In addition, you'll be able to specify that additional metrics should be sent. So, for example, if you're on bare metal, we would be interested in what types of hardware you're running on,
B: ...but since there's a more diverse set of hardware out there, that might be more likely to identify you, and so we would need your permission to send that. And, of course, it's possible to completely opt out of metrics, at which point your node becomes invisible to us for the purpose of directing our development efforts. No unique identifiers will be sent.
B: So, where are we right now? The Fedora CoreOS preview was just released, and, the URLs are on a later slide, but you can go to the website and get an AMI ID, or download a qemu image or a bare-metal image, and try it out. Please don't run it in production yet; it's not quite ready for that. But play with it, run workloads on it that you might want to run, and see what you think.
B: Please report bugs to us, and missing features that you feel are important, and we can work to improve the preview before the stable release, which is roughly six months out. By the time we get to the stable release, once we declare Fedora CoreOS stable, that means we believe it's ready to run in production.
B: One other note: with a preview release, we are reserving the right, for the time being, to make incompatible changes. It is possible that some preview builds will not successfully update to other preview builds, and manual intervention will be required. It's possible that something like how you configure networking might change. We're obviously going to try to keep those sorts of changes to a minimum, but we want to be able to, again, make the best release that we can by the time we get to stable. Next slide.
B: So, what is coming up? Currently we only have the testing stream, plus a couple of sort of non-production streams for development purposes, so we're going to have next and stable reasonably soon. We do not yet have all of the cloud and virtualization platforms supported, so we're going to be working on providing those as well. We are interested in multiple-architecture support; for example, arm64 is one that's of some interest. I mentioned live PXE earlier, for running the system entirely out of RAM; that is not yet supported.
B: It's a popular option on Container Linux, and so we really want to get that in, but it's not there yet; same with providing a live CD. Improvements to network configuration: among other things, the initramfs is currently using dracut's networking scripts rather than NetworkManager right now, and some work is needed there. Over time, we're going to provide more nice configuration knobs in the Fedora CoreOS config transpiler, so that for common cases you don't have to manually specify, you know, the contents of a particular config file. Functioning telemetry:
B: Well, actually, as of the next preview release, we will have a telemetry client, which is just a stub. It parses the config file, so that you can go ahead and disable metrics, for example, if that's what you'd like to do; but aside from parsing the config file and complaining about errors, it doesn't actually do anything yet, so we're going to need to get that fixed. Much more documentation. And also design and integration work for running OKD on Fedora CoreOS. Next slide, and I believe then it's back to you.
C: Thank you, yeah. So this is a good topic to talk about first; it's definitely relevant to this audience. You guys know OKD is obviously kind of the community side of OpenShift, and with Fedora CoreOS being, you know, kind of the community flavor of RHEL CoreOS, it makes a lot of sense to pair these two up, effectively giving you a lot of the same benefits as OpenShift on RHEL CoreOS. As for where we are right now:
C: Obviously you've heard the state of where the Fedora CoreOS project is, and of course, on getting these two things working together, I would say we're in the early stages. We have some good ideas, but we haven't vetted them fully on the community side, and so that is ongoing right now.
C: We do know that there will be some changes needed in the installer, and then for the Machine Config Operator as well. But we haven't quite drawn the lines on whether we think this should work similar to how Tectonic and Container Linux had a relationship, where the OS just moved on its own cadence underneath the platform, or whether it makes more sense to do what we're doing with OpenShift 4 and RHEL CoreOS, where the cluster actually dictates the OS version down and you're always versioned and consistent across the cluster.
C: There's obviously pros and cons to each one, but I do expect us to get this fleshed out pretty quickly. I don't want to over-promise on dates, but obviously we really want to get OKD running on Fedora CoreOS sooner rather than later. So that's kind of the big thing: where does that control of the version happen? In the short term, to probably accelerate this, we're likely going to
C: ...look at some clever ways to change the payload, you know, delivering the kubelet and CRI-O to the OS node. There are kind of some options there that we absolutely want to make sure we explore and do the due diligence around. So, you know, kind of the key point here is that on that side it's a little bit early in the process, but we're going to be working through this, and in a public way. So if anybody here has opinions or comments, definitely please join the discussion around this.
C: And something worth running, right? Good, yes, excellent. Okay, so how do I get involved, where do you go? If you want to grab the preview release, it's getfedora.org/coreos, and you can pick the platform and grab the right version, whether you want to do the VM or bare metal or whatever; all of the different flavors are right there and easily accessible.
C: We do have an issue tracker. Please, please, please get involved; let us know the feedback, good, bad, or ugly. This is actually one of the things we really wanted to improve from past iterations: more active participation from the community, and just, like, visibility and transparency in the process of building this. So, you know, we definitely encourage everybody to not only file issues, but, you know, join us on the forums as well. It's a good setup
C: ...we have going here. Obviously we're nerds, so we hang out on IRC; so, you know, feel free, on freenode, and I'm sure many of you are already there. Oh yeah, and I didn't call out the devel list, but yeah, you can use that as well. I mentioned the next Commons briefing, the Ignition briefing, earlier; here's the link to it.
C: No problem, yeah. So that's on the 28th, and we'll do a deeper dive on this, and of course, by then the FCCT, the Fedora CoreOS Config Transpiler, will likely have some cool bells and whistles in it, you know, between now and next month. So that'll be a really cool talk as well. So I'm going to leave this up, and then we'll go ahead and open this up for Q&A.
A: I have a question myself, because on the previous slide you mentioned the OpenShift installer work, and I'm wondering if you can talk a little bit about, in terms of the OpenShift installer, what's involved in getting a version that works with FCOS. In my dream of dreams it's just a simple if-then-else statement, and I know that's not true, but could you talk a little bit about that?
B: There was some discussion around adding some sort of a command-line argument or something, so that it knows whether it's installing on Fedora CoreOS or RHEL CoreOS. The one bit that I do know is still sort of an unresolved issue is that there are some details around Ignition and how it's handled on Fedora CoreOS versus RHEL CoreOS.
B: There were some incompatible changes made to the Ignition config spec. The spec is versioned, and you declare in your Ignition config what version you're using. The spec version we were using on Container Linux before had some systemic design flaws that needed to be corrected, and the original goal was that those flaws would be corrected for both Fedora CoreOS and RHEL CoreOS.
B: But the timing didn't quite work out, and so the initial releases of OpenShift 4 are still using the older, incompatible version of Ignition, where Fedora CoreOS is using the newer Ignition config spec, and we are hoping to continue doing that with OKD. Which means that the installer and the MCO would need to learn about that newer Ignition version. The details of how that would work, or whether we can make it work, are still sort of up in the air, yeah.
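The incompatibility Benjamin describes is visible right in the config's declared version: a spec-2 document declares a 2.x version, while Fedora CoreOS expects the reworked 3.x series, and the two are not interchangeable. Illustrative fragments:

```
Spec 2 (Container Linux, initial OpenShift 4):
  { "ignition": { "version": "2.2.0" } }

Spec 3 (Fedora CoreOS, reworked to fix the design flaws):
  { "ignition": { "version": "3.0.0" } }
```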
C: But I will add this too: it's also really good and healthy for us to work through this on this side, because we definitely need to bring the newest spec into OpenShift as well. It gives us a lot of benefits around RHEL CoreOS, so I'm definitely excited that the work is happening here, and we'll be able to pull it into the product space when it's ready. But yeah, that's one of the big ones on the installer side. Does that help?
E: I'd also add, on some of the detail around this: the best place to do this is probably a forum where we can kind of get the technical proposals written down, because there's a ton of subtle details in the installer. Just like you mentioned with FCOS, having "here's the thing that we want to go do, and here's all the trade-offs" is bigger than we can probably describe in this meeting. Yeah.
A: To that end, look for an announcement shortly: we're going to have a kickoff for the OKD working group next week, and there's a survey that's been sent around; I'll post that with the video and the slides here too. So maybe some of that conversation could happen there, because for OKD to work for 4.0, we really need the OpenShift installer, the Ignition bits, and Fedora CoreOS.
A: And it's more than just OKD; there are other communities, arm and other applications. There was the one slide with Packet and a bunch of other things, where we want to make sure this runs everywhere, and it's not just OKD-specific. I think some of these issues are things that the OKD community people will have to step up and help resolve for OKD, but there will be other communities, like the arm one, that will have requests, yeah.
C: Well, yeah, you've got the slides and the recording, but in short, you know, you can think of it this way: Fedora has a massive collection of RPMs that all kind of move together, and this is a specific edition of Fedora. So it's a very opinionated deployment that's really just targeted toward container workloads. You can think of it as: I set
C: ...my OS in motion. I like to say "self-driving", though marketing tells me not to, because it scares people when AI crashes cars. But the idea is that the OS is kind of self-managing, self-maintaining; once you've set it in motion, you really just own deploying your app to it, however that makes sense. Which is really, really different from a traditional Linux box, where, you know, you update and actively manage it in a different way. So that's kind of how I would draw the line between this and Fedora.
B: And CoreOS, as we were saying earlier: RHEL CoreOS is a component of OpenShift; it is specifically for OpenShift. Fedora CoreOS is interested in OKD, it's interested in vanilla Kubernetes, it's interested in running containers directly from systemd units with Podman. Any way that people run containers, we're potentially interested in. We're trying to support automated operations underneath any orchestration system, or just manually running containers, as you might want.
D: Yeah, I think that'll be important messaging, just to sort of break that sort of mental attachment. You know, like, I'm a Fedora packager, and it's this really big community that thinks of things in terms of, you know, the latest Rawhide version. But yeah, I get what you're saying, thanks.
A: Let me see if there are any other questions in the chat. Surprisingly not, though I'm sure there will be more that arise soon. So, you've got the Get Involved slide there in front of you. There's another briefing coming up on the 28th of August, on Ignition, so there's another opportunity to connect and ask more questions. There will be multiple OKD working group notices coming out shortly; the first kickoff, as I mentioned, is going to be next week, on July 31st at 9 a.m.,
A: ...and we may change the date and time based on the survey results, to make sure we can fit everybody across time zones and everything. But we really did want to kick it off and get the conversation going, especially about the interdependencies of Fedora CoreOS and the OpenShift installer, and get some of the roadmap design questions more fully formed and discussed. So please, if you can, watch for that notice for next week's meeting, and do get involved. We really appreciate both Benjamin and Ben.
C: I mean, we already got systemd into those self-driving cars, so this is the next logical step. Actually, Benjamin, I've got a question for you, to put you on the spot. You mentioned future preview releases. Is there a cadence we should expect those to come out on, like weekly, bi-weekly, monthly? Any thoughts there?
B: Oh yeah. So the current target is to have each of those three streams that I mentioned release once every two weeks. Those are the scheduled releases, and then, over and above that, if there's, let's say, a critical security fix that needs to go out immediately, then we can have what we call out-of-cycle releases to push those out quickly.
B: That is an initial target; it's not contractual, if you will, so we may choose to change that cadence depending on how it actually works out. At the moment we are releasing a little bit more ad hoc, because we're still sort of tinkering with the CI infrastructure and that sort of thing, so I think for the next month or so releases will probably be a little bit irregular. But we plan to settle down to that two-week schedule cadence after that.

C: Perfect.
A: Then we are done; well, not quite at the end of the hour, and I don't see any other questions. I was also going to mention: the OKD working group will be hosting a Google Group. Not that you all need yet another mailing list, but we thought we would set aside a mailing list for the OKD working group, so that if we had things that we wanted to vote on or weigh in on...
A: This is great, and, as we all keep saying, the more feedback and the more issues you can post and give us on this, and the more insights into how you're using these projects, all of them, Fedora CoreOS, OKD, and a bazillion other things that are interrelated, the better we all will be in terms of getting good releases out. So thanks, guys.