From YouTube: Fedora CoreOS Ask Me Anything and Demo with Dusty Mabe, Colin Walters, Christian Glombek (Red Hat)
Description
Fedora CoreOS Update and Ask Me Anything (FCOS AMA)
Dusty Mabe
Colin Walters
Christian Glombek
hosted by Diane Mueller (Red Hat)
July 13, 2020
OpenShift Commons Briefing
A: All right, everybody, welcome back to another OpenShift Commons briefing. As we like to do on Mondays, we are going to have an ask-me-anything session with one of the most wonderful upstream projects: Fedora CoreOS. We have a number of the members of the team from Fedora CoreOS, as well as a few of the OKD working group folks, who are also leveraging Fedora CoreOS in a big way. Today we have Dusty Mabe, who's going to kick us off with an intro to Fedora CoreOS and a bit of a demo, and then after that we'll do live Q&A. We're streaming on Twitch, Facebook, and YouTube, so we'll aggregate all the questions and feed them to our lovely contestants today. So with that, Dusty, take it away.
B: Thanks for the intro, Diane. My name is Dusty Mabe, I work for Red Hat, and I'm here to talk about Fedora CoreOS today. Briefly, I'm going to talk about what Fedora CoreOS is, some of the features of Fedora CoreOS, how it relates to RHEL CoreOS and also how it relates to OKD, and then I'm going to give a short demo and hopefully dig into a lot of questions from people.
B: OK, so here I had planned to start the demo. The demo is actually an install of OKD on top of Fedora CoreOS, but I realized that the install would take longer than my talk, so I started it early today. We'll get to that in just a little bit. OK, Fedora CoreOS: what is it? It's an emerging Fedora edition. It came from the merging of two communities: one was CoreOS Inc.'s Container Linux community, and the other was Project Atomic's Atomic Host community.
B: It incorporates the Container Linux philosophy, or what we've been referring to as the Container Linux philosophy, the provisioning stack, and the cloud-native expertise of Container Linux, and it also incorporates Atomic Host's Fedora foundation, the update stack, and the enhanced security of SELinux.
B: So first I want to talk a little bit more about the Container Linux philosophy, because it has really driven what we've done with Fedora CoreOS. First off, Container Linux focused primarily on automatic updates, which means that by default the administrator has no interaction with the system in order to keep it up to date. The goal there is that staying up to date in a default, automated manner means that security fixes get applied automatically and your systems stay more secure: more secure by default. Container Linux also had all nodes start from approximately the same starting point, and it used Ignition to achieve this goal. You would use Ignition to provision a node wherever it started, whether on bare metal or in the cloud, and they all essentially start from the same starting point. Container Linux also focused on immutable infrastructure.
B: So, for example, if you need a change, you're encouraged to update your configuration and reprovision. This kind of guarantees that your changes make it back into your configuration, and it's tested, because, guess what, you tested that provisioning when you brought up a new node. Also, user software runs in containers, which means that applications don't depend on the host and host updates are more reliable. When you have automatic updates, you need them to be reliable.
B: So now we're going to talk about Fedora CoreOS, and you're going to hear a lot about those features, or the philosophy, I just mentioned, the first one being automatic updates. Fedora CoreOS features automatic updates by default, and if you have automatic updates, you need them to be reliable. How do we achieve this goal? We achieve it by having extensive tests in automated CI pipelines.
B: We also have several update streams, which I'll touch on in a minute, that allow users to preview what's coming. Users run the various streams so that they can know when changes are coming that they need to either address or report issues for. We also have managed upgrade rollouts, so upgrade windows happen over several days.
B: Automatically going back to the previous version you were on before a bad update is a future feature, not something we have just yet. OK, I mentioned multiple update streams. Right now we offer three update streams that have automatic updates. One is next: that stream is focused more on experimental features or Fedora major release rebases. For example, right now we're on cgroups v1 in Fedora CoreOS.
B: When we switch to cgroups v2, we will land that in the next stream first. It will have some soak time there; hopefully people report any issues, we get them fixed, and then eventually it goes into testing and stable. Also, for example, when we switch from Fedora 32 to Fedora 33, that will happen in next first. So it's an opportunity for us to, you know, put those breaking changes, or possibly breaking changes, in there and get them tested.
B: OK, so testing is basically a preview of what's coming to stable. It's a point-in-time snapshot of Fedora stable RPM content, and that is going to go directly into stable in two weeks if we don't find issues. The goals of having these update streams are to publish new releases into the update streams every two weeks, and also to find issues and get them fixed so they don't hit the stable stream.
B: The Fedora CoreOS release promotion: I touched on this briefly. We have a version number that basically incorporates the Fedora major version, the date at which content was snapshotted from the Fedora stable repos, and then also a number that indicates which release stream we're on. We have three release streams, testing, stable, and next, and that number corresponds to a release stream. Then we have a revision: if we do an ad hoc fix to a content set, we will bump that revision number and re-release.
B
If
this
represents
the
yum
repository
moving
in
in
time,
we
will
snapshot
that
on,
in
this
case
the
third
or
the
23rd
of
March.
That
becomes
the
testing
stream.
We
do
a
testing
stream
release
and
then
hopefully
we
don't
find
any
issues.
Two
weeks
later,
that
gets
promoted
to
stable
and
people
in
the
stable
stream
get
that
content.
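The version scheme Dusty describes can be sketched with a small shell snippet. The version string used below, 32.20200629.3.0, is the real stable release format shown later in the demo (Fedora major, snapshot date, stream number, revision); the 1/2/3 stream numbering follows Fedora CoreOS's next/testing/stable convention.

```shell
# Parse a Fedora CoreOS version string of the form
#   <fedora-major>.<snapshot-date>.<stream>.<revision>
# e.g. 32.20200629.3.0 is the stable release shown in the demo.
ver="32.20200629.3.0"

major=$(echo "$ver" | cut -d. -f1)     # Fedora major release
snapshot=$(echo "$ver" | cut -d. -f2)  # date the stable repos were snapshotted
stream=$(echo "$ver" | cut -d. -f3)    # 1 = next, 2 = testing, 3 = stable
revision=$(echo "$ver" | cut -d. -f4)  # bumped for ad hoc re-releases

case "$stream" in
  1) name=next ;;
  2) name=testing ;;
  3) name=stable ;;
esac

echo "Fedora $major, snapshot $snapshot, stream $name, revision $revision"
```

So a revision bump after an ad hoc fix would turn 32.20200629.3.0 into 32.20200629.3.1, while the same content set promoted through testing carries a 2 in the third field.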
B: So I took the configs that I used to bring up that server, and I just spun up a VM at home, and I was back up and running in ten minutes. That's an example of, you know, because everything is baked into these Ignition configs, it's really easy to get a new node up and running with the same profile as the one that you had. And then, because we're using Ignition, we have the same starting point whether we're on bare metal or cloud.
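As a sketch of what such a config looks like, here is a minimal Fedora CoreOS Config (FCC); the user and SSH key are placeholders. An FCC like this is run through fcct, the Fedora CoreOS Config Transpiler, to produce the Ignition JSON that the machine consumes on first boot.

```yaml
# Minimal example FCC; the SSH key below is a placeholder
variant: fcos
version: 1.0.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 AAAA...placeholder... user@example.com
```

Because the node's entire setup lives in a file like this, reprovisioning the same config on new hardware, or in a cloud, reproduces the same machine.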
B: The key point here is: if provisioning fails, the boot fails, so you don't end up with a half-provisioned system. Sometimes with cloud-init you could have part of it succeed and part of it not succeed, and then you've got an application up that's half running. With Ignition, it's good, because you know the machine didn't provision correctly, but it can also be bad, because it's hard to debug in the initramfs. So there's good and bad there, but we like it overall. In particular, Ignition is not very human friendly.
B: So the next feature I'd like to talk about is basically being cloud native and container focused. Software runs in containers, and users have two options: they have Podman or Moby Engine, which is also known as Docker, those two container runtimes. If you're kind of coming from Container Linux, you still have Docker.
B: If you want to use that, or if you want to try out Podman, it's there for you. It's ready for cluster deployments, so you can spin up 100 nodes and have them join a cluster, because the Ignition configs are used to automate the cluster join; it kind of takes care of everything for you. Then you can spin down nodes and spin them up again as you need. So that's more of the cloud-native piece of it.
B: Fedora CoreOS uses rpm-ostree technology, which I like to describe as like Git for your operating system. So we have, for example, a particular version of Fedora CoreOS, which is kind of like a tag: you have a version and also a Git commit, or sorry, an OSTree commit hash, and this single identifier tells you all the software that's in a particular release. It tells you all the RPM content and all the config default settings that are delivered, and that is important when you are trying to report issues or share information. So, as a user, you can report an issue and say: hey, I'm on this specific commit of Fedora CoreOS, I ran these steps, and I see this problem. That's really powerful, and it comes from using rpm-ostree. It also has read-only filesystem mounts, which prevent accidental OS corruption.
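On a Fedora CoreOS host, the rpm-ostree CLI exposes exactly this model; the following is an illustrative session (it assumes you are on an FCOS machine, and output is omitted):

```shell
# Show the booted and pending deployments, including the version
# and the OSTree commit hash that identifies the exact content set
rpm-ostree status

# Compare the package content of two deployments
rpm-ostree db diff

# Roll back to the previous deployment if an update causes trouble
sudo rpm-ostree rollback
```

The commit hash printed by `rpm-ostree status` is the single identifier Dusty mentions: quoting it in a bug report pins down the exact RPM set and defaults you were running.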
B
Ok,
so
what's
in
the
OS,
we
have
the
latest
Fedora
based
components
built
from
rpms.
We
have
hardware
support,
so
hopefully
anything
that
Fedora
supports.
We
can
support
with
Fedora
core
OS.
We
have
basic
administration
tools,
we
have
container
engines
and
not
much
else,
so
we
don't
have
Python.
The
goal
here
is
that
we
encourage
users
to
run
their
applications
in
containers
and
not
run
things
directly
on
the
host.
That
makes
our
updates
more
reliable.
We
don't
want
to
update
the
host
and
break
your
application.
B: OK, so coming soon: we have more cloud platforms that we're adding, we're also trying to get support for multi-arch, we've got more, you know, human-friendly helper functions that we want to add to the Fedora CoreOS config transpiler, we want to make our package layering more reliable in cases that might need it, we want to have more fleshed-out, improved documentation, and then also tighter integrations with OKD.
B: When you talk about Fedora CoreOS and RHEL CoreOS, you want to know: what's the difference? At a high level, the biggest difference is obviously being based on the RHEL package set versus being based on a Fedora package set, but probably the larger thing is that Red Hat CoreOS is only designed, or meant, to be used directly with OpenShift itself, and is not meant to be used standalone.
B: So the updates for Red Hat CoreOS are delivered and controlled by the cluster itself, and not independently of the cluster. With Fedora CoreOS, you can use it either standalone or as part of a cluster, for example with OKD. In the standalone case, you get the updates directly from Fedora CoreOS's release servers; in the case of OKD, you get the updates similar to how Red Hat CoreOS gets its updates, from containers in the registry.
B: So you can use Fedora CoreOS standalone, with other orchestration technologies, or with OKD. Specifically, OKD is installable with OKD's installer, the same one that OCP uses, openshift-install. The cluster controls the OS upgrades; as I mentioned a minute ago, upgrades are provided as machine-os-content containers, and then the cluster can manage and bring up new machines automatically, which I think is the coolest point there, right at the end. OK, so let me go into the demo. Diane, do we have any questions before I go into the demo?
B: Perfect. So I was going to start this at the beginning of the talk, but I realized that it takes a little while, 35 minutes, to bring up a cluster, so I started it a while back. In this case, what I've done is I've brought up an OKD cluster running on Fedora CoreOS, and I'll hop over here to another window and actually show that running. I'm using a terminal user interface called k9s, which is pretty awesome; I like it a lot. OK.
B: So this is my cluster. In this case I have one, two, three of our primary nodes, and then two worker nodes. One thing that's really nice is I can dig into each of our nodes, and I can actually see Fedora CoreOS there: the cluster knows what operating system is running underneath, which I think is pretty darn cool. So this is the Fedora CoreOS stable release that we just released on Friday; it's from a content snapshot from June 29th.
B: That's when we froze the contents; we did a testing release, and then we did a subsequent stable release two weeks later. OK, so this is the node, and I can dig into each node and see what's running on it. I think one of the coolest things you can do with OKD these days, and it represents the tight integration with the operating system, is if we go and look at the machine sets for this particular cluster. I only brought up two worker nodes, so there's a third machine set here which has no nodes currently in it. So if I go and edit that particular machine set and change the replicas to one, it will start to bring up a node, and if I want to, I can edit another one and bring up even more.
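The same scaling operation can be done from the command line with oc; this is an illustrative session against a standard machine-API-managed cluster, and the machine-set name is made up:

```shell
# List machine sets; one of them currently has 0 replicas
oc get machinesets -n openshift-machine-api

# Scale it to 1: the machine API boots a new Fedora CoreOS instance,
# and Ignition configures it with its worker role so it joins on its own
oc scale machineset demo-cluster-worker-c -n openshift-machine-api --replicas=1

# Watch the new machine go from Provisioning to Running
oc get machines -n openshift-machine-api -w
```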
B: You know, let's make this many. And now I can go look at the machines in the cluster, and we can see we have two that are provisioning right now. Regarding OKD itself, while those provision: typically, if you want to look at the health of the cluster, you look at the cluster operators, and these kind of give you a sense of the health of the cluster. In this case, it looks like we have one cluster operator that is in a degraded mode, which is not good.
B: I'll need to look into that one, but in general, as long as all of these typically say false, then you're good. Let's see if those have come up as nodes yet. They haven't, but yeah, so this is kind of the demo.
B: I can run oc debug node to get into any particular one of these machines if I want to, but this is just kind of a representation of the integration with the OS that we have with Fedora CoreOS and OKD that makes things like this possible. I can scale out a cluster, you know, depending on my workload, and because it uses Ignition to serve all of these new worker nodes essentially their role in the cluster, they come up and join the cluster automatically, and then, you know, that's the end of the story.
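For reference, the oc debug node workflow mentioned here looks like this (the node name is hypothetical); it starts a privileged debug pod on the node and gives you a host shell:

```shell
# Open a debug pod on a specific node
oc debug node/ip-10-0-142-25.ec2.internal

# Inside the debug pod, switch into the host filesystem;
# host tools such as rpm-ostree are then available
chroot /host
rpm-ostree status
```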
B: So here I just highlighted one Ignition config in the terminal, which is more or less the one that's used to bring up the bootstrap node, which is essentially what kicks off the entire install of the cluster. So this Ignition config has some user information, where it sets SSH keys, and it writes in different registry configurations.
C: One thing I would add to this: part of what came from CoreOS was Container Linux, but also Tectonic. A lot of OpenShift 4, and OKD now, is based on an evolution of the Tectonic design, and so for OpenShift 4 the Ignition configs are actually cluster objects themselves. Dusty, if you want, you can do oc get machineconfig. So, as an addition to the CoreOS model, the integrated machine config operator will support day-two changes.
C: So, as was kind of mentioned, the Ignition is provided at install time, and so OpenShift and OKD work the same way as any other workload on top of Fedora CoreOS. But in addition to that, the MCO kind of takes over day two, and so you can create a MachineConfig object with oc, and that change will roll out to the cluster.
B: We should have support for, you know, complex networking configurations; anything that NetworkManager supports should be just fine. As far as, what did you say, hardware RAID? So as far as hardware RAID goes, assuming that you don't need a special driver in order to bring it up, the hardware RAID just shows a disk to the OS, right? Whatever is configured in hardware RAID usually just shows up as a single device to the OS, so that should work.
A: Good to know, thanks. Someone asked a question, and I think it might have been answered in the chat: would it be possible to report back to Red Hat Satellite or Foreman for overall inventory and security standing?
C: I can take this one. You know, for all of OpenShift, certainly a huge design point was that OpenShift, and, you know, OKD, should be self-driving by default; it shouldn't require you to run external infrastructure to, say, spin up a cluster, whether on bare metal or public cloud. But we definitely want to integrate with external systems, so I think the short answer to this would be: you can always run a DaemonSet, or a systemd unit file that runs on each node.
B: So I guess it depends on how long ago this person tried that, because we did some work maybe a month ago or so to make sure that appending 'single' to the kernel command line should drop you appropriately to a, you know, admin shell or whatnot, so you should be able to set a password, or change an SSH key if they didn't set a password. Maybe there was an SELinux problem when they got into that shell, but I would say open a bug and we can try to work through that.
A: So this one is about the relationship between RHEL, CentOS, and Fedora, and you knew we would get one of these questions; I figured as much. The usual pitch for Fedora, RHEL, and CentOS, you know, is that Fedora is the upstream with the latest features, whereas RHEL is the stable enterprise version and CentOS is the stable free version, which leads many enterprises to appreciate CentOS for stability. How does Fedora CoreOS fit into this pattern?
B: Yeah, I'm trying to think of the best way to answer this question; obviously there are a lot of different ways to answer it. But in general, we have Fedora CoreOS, where we try to do everything upstream. RHEL CoreOS actually follows very closely the development that's going on in Fedora CoreOS, you know, with a slight package-set tweak that includes things that are specific to OpenShift, like CRI-O, the container runtime there.
B: So it's a little bit different than the normal Fedora-to-RHEL relationship. The package set is kind of like traditional Fedora-to-RHEL, except for some of the packages that we focus on more, like Ignition and things like that, which kind of get updated a little faster. As far as CentOS goes, right now we don't have any plans to make a CentOS CoreOS, mainly because, you know, it's taking all of our efforts just to work on Fedora CoreOS, but I do know...
A: To follow up on that: I know we've had a lot of conversations about this in the OKD working group, and we have Christian with us from that as well. A lot of it has to do with resourcing, you know, and there is, in the CentOS world, a PaaS SIG working group where that conversation has been taking place. So, Christian, if you want to chime in a little bit, if we can hear you. I can see you now, yeah.
E: That feedback loop: we actually complete a feedback loop by having changes trickle down naturally, because Fedora is the upstream, so everything below that will get those fixes eventually, while CentOS is the most downstream thing, and anything that gets fixed in CentOS isn't landing anywhere else. CentOS is essentially a repackaging of RHEL, so even in standard CentOS, if anything is fixed in CentOS first, it doesn't really land in RHEL naturally.
E: For us, that was kind of broken in the OKD 3.x releases, because those were also just a repackaging of OpenShift. We really wanted to make sure that we get a feedback cycle working where any bugs we hit upstream will also be fixed downstream. So I think, from a Red Hat perspective...
E: It may make sense to also create a CentOS Stream based CoreOS that would run OKD. I'm not sure we will invest any resources, at least not ourselves, into creating a CentOS CoreOS OKD, which would again be downstream, and none of the other products would benefit from it. Yeah.
C: So in this whole conversation we've been talking about the host, and for some people that's important, but you can run the RHEL UBI container on Fedora CoreOS, right? And not only can you, you should, right? That's a totally normal and expected thing to do. So for application developers, you still get the benefit of RHEL, and the CentOS stability, for your app stream, while, you know, newer hardware enablement and all that stuff comes in in Fedora CoreOS, and that only impacts...
A: I think one of the very early slides that Dusty put up was about where the Container Linux and Project Atomic projects came to live, which was in the Fedora world, so collaborating with them made sense for the OKD working group, in terms of stability and in resourcing the effort. So that's that.
B: Another thing on the resourcing front: with Atomic Host, you know, it was kind of a lot more of a passive relationship with Fedora, where we would build the same package set as in Fedora, and we essentially only had one stream. Compared to what we're doing with Fedora CoreOS, that was a lot more passive and required a lot fewer resources. Now we have three streams.
B: We kind of need to keep up with features that are in next versus the other two; we need to, you know, triage bugs that are against one versus the others. When we do automatic update rollouts, we need to focus on: did this update break anything? Does it need to be rolled back? So that's why it was a lot easier to do a CentOS Atomic Host and a Fedora Atomic Host, because it was a lot less resource intensive. But because we wanted to focus on automatic updates...
B: ...we needed to put some of this other stuff in place, and that's why, when people talk about doing a CentOS CoreOS, it's like: oh man, that's a lot of work. But, you know, we want to enable people to do that; if people want to step up and do that, we would love for it to happen. It's just a lot of work.
A: It's a lot of work, but I think the pattern has been established and a lot of the difficult hoops have been figured out. It really, truly is a matter of resourcing, I think, and if the community wants to step up, you know, we're happy to help drive that as well. But I think it really comes down to that, and, as Colin so aptly put it, containers changed everything. So that would be that.
E: So others may know more about this, but I think we've been able to build it, and there is some preview, or a proof of concept, that it works. But we haven't been able to actually change the project to build on top of OKD by default, as opposed to OCP. So there's still some discussion about whether we want that: whether we want to keep it OCP, OpenShift Container Platform, based, or switch over to OKD, or maybe do both, which is, again, work.
A: So this is an AMA around Fedora CoreOS, and we had one the week before on the OKD working group. If you want to join the OKD working group, there's a whole thread about the single-node cluster: a whole bunch of people who've done some home-lab stuff, and some really great stuff, but it has not filtered down into an official CodeReady Containers build yet. You should be unmuted now, if you have anything to add to that.
A: There's nothing that keeps us from creating something similar on the community side, other than, I'm always going to say the word, resources, and maintaining it once you build it. We've all been there before, creating VMs for the early Origin releases that we had to maintain with every release as a community thing; it was fun, and keeping it up to date is a thing. Also, we've been having a conversation in the OKD working group about ARM64 architectures, and Ryan's asking if there's any news about...
B: ...bringing Fedora CoreOS and Fedora IoT together. I think there are some slightly different goals there, one of them being that Fedora IoT obviously plans to run on some 32-bit ARM hardware, and that is not really a goal that we had from the outset, because we're focused a little more on, you know, servers; aarch64 just seemed like a good line in the sand to draw. But, yeah, I don't think we have any official plans for kind of merging the efforts. That is something to, you know, talk about and explore, though.
A: And that would probably be Peter Robinson, so we'll have an AMA on Fedora IoT some time soon and make him come and talk. I think there's also a little confusion out there in the marketplace, or in the community space, around that, but, as others have said, ARM is becoming ubiquitous out there in IoT land, and it comes up often. There's one there from YouTube, back to that hardware...
A: ...question from Vivian. On the hardware question again: not only installing onto a pre-configured RAID volume, but also configuring RAID levels from the Ignition configuration. At the moment it's possible to do that, I'm going to say this acronym wrong, for mdraid only, and not for hardware RAID.
B: Yeah, that might be something where we need a specific issue to dig into, but basically, I guess, they're asking for support for Ignition to be able to configure hardware RAID controllers. It seems slightly out of scope, I guess; there's typically a tool that is shipped to talk to them, or maybe there's a kernel interface that already exists. But, yeah, I think a specific issue, I think her name was Vivian, would be good; we can kind of hash out some details.
A: If you could share your screen and go there, that would be great, because that person is not in the BlueJeans. OK, perfect. Where the issue tracker is, let's...
B: Let me go back to the presentation; I have a slide for getting involved. So if you want to go grab Fedora CoreOS or view our releases, we have the top-level getfedora.org/coreos. For any issues or kind of design-discussion-related stuff, we usually open tickets in our Fedora CoreOS tracker, which is github.com/coreos/fedora-coreos-tracker; you can open an issue there, and that's where we can kind of dig into the details of the hardware RAID controller support. We also have a forum, if you have more of a user-related question, like how do you set a password, or things like that; the forum is a good place for that. We have a mailing list, and we also have #fedora-coreos on Freenode. Should I go to the issue tracker, Diane, or...
B: Yeah, sorry, I popped open a new window when I clicked on that link, and that was not shared, so, yeah. So this is the issue tracker, and we kind of have everything in here, from actual bugs to discussion topics. This particular one is where we're chatting about how we want the OKD release schedule and the Fedora CoreOS release schedule to relate to one another: should they be related, should they not be related, how do we want to go about it? So, a little bit of everything in there.
A: And one of the guests is asking, and I think you had a slide on it already: is Fedora CoreOS already usable in Azure, AWS, GCP, and elsewhere?
B: If you go to our download page: there are two clouds in which we have automatically uploaded images right now, AWS and GCP, and we're trying to work on other ones. However, we also have images created for a lot of different clouds, so we have Alibaba Cloud, AWS, Azure, DigitalOcean, Exoscale, GCP, OpenStack, and a few others.
B: We also have VMware images. In general, if you go to our docs site and you click on the provisioning-machines breakout, you can see how to actually provision a machine on a lot of the different cloud providers that we support. So, for example, you asked about Azure: because we're not currently uploading directly to Azure, or at least we're not able to share an image that others can use, we show you how to download it and then upload it before you start to launch an instance.
A: And hopefully, when we get to GA, we'll have a lot more feedback on some of this. I believe we've gotten everybody's questions answered. Can you talk a little bit about Ignition spec 3 support versus Ignition spec 2 support? I know that's been something that we've had a conversation about in the working group. Where's Ignition at right now in terms of Fedora CoreOS?
E: So, yeah, in Fedora CoreOS we started with Ignition v3 from the get-go, so it's always been v3 in Fedora CoreOS. In Red Hat CoreOS and OpenShift Container Platform, we started out with Ignition config spec v2, and now, in the upcoming 4.6 release, we will switch OCP to Ignition spec v3 as well, so it'll be the same. If you're a user of OKD, just use spec v3.1 and you'll be good, and OCP will switch to 3.1 as well in the future.
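For reference, a minimal Ignition config at spec v3.1, the version recommended here for OKD users, looks like the following; the SSH key is a placeholder:

```json
{
  "ignition": { "version": "3.1.0" },
  "passwd": {
    "users": [
      {
        "name": "core",
        "sshAuthorizedKeys": ["ssh-ed25519 AAAA...placeholder... user@example.com"]
      }
    ]
  }
}
```

In practice this JSON is generated from a higher-level config (an FCC, or a MachineConfig in a cluster) rather than written by hand.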
E: In the MCO we added dual support, so we can provide either one, and it'll be translated to spec 3.1 for OKD and 2.2 for OCP, just to facilitate that migration for OCP to move from spec 2 to spec 3. But that's really an implementation detail, and OKD users should always be using spec 3, or spec 3.1, and that should be fine.
C: I'll just add on top of this: building on top of a whole lot of infrastructure that we spent the last two years or so building, there's a whole lot of new features coming in Ignition, around, as Dusty may have briefly mentioned, managing the root filesystem, and encryption, and a whole bunch of other stuff that I think a lot of people will really appreciate.
E: So I think it's a secondary goal for us to obviously build out that Fedora container ecosystem, so in the future the OKD working group, which already talked about this last week, will meet up with the Fedora Container SIG, and, yeah, obviously we're interested in delivering all of our operators on Fedora containers as well. It also works the way it is now, with UBI-based containers or CentOS-based containers, but obviously, yeah, we do want to encourage people to build Fedora containers and use Fedora containers for everything.
A: Someone asked me earlier today in Slack about OCP's operators, the operators that aren't in OperatorHub yet. There are a few operators that are specific to the OperatorHub that comes with OpenShift Container Platform, and they asked whether the working group was going to rebuild all of those on Fedora CoreOS.
A: I think that's a bit of a significant effort, and I think there are also other questions around that as well, like which ones. I think, specifically, they were talking about Service Mesh: making sure that there was one-to-one parity with what is available in OperatorHub.
B: Yeah, as far as rebuilding things on Fedora images, I think it would be great to have. But, you know, a question to think about is: is there a trade-off for doing that work? Is there other work that we, you know, should do instead? If, for example, that was going to push the OKD 4 GA out to January of next year, would that be something we'd want to do?
B: And the other thing, too, is, you know, if things are able to use a UBI base and we have only one version of them, it's a lot easier to maintain. So there are a lot of trade-offs to think about when you start to go down that road, which is the exact same thing we were discussing earlier with Fedora CoreOS and CentOS CoreOS, right?
B
C
I think maybe the issue is this concept of host-dependent containers, because, I think, it's about programming iptables and things like that, and that is a tricky topic to handle. We have some discussions around how you best have a container that, you know, partially executes on or manages the host, but hopefully we can get to the point where even these host-dependent containers still work in both cases when rebuilding.
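One common pattern for this kind of host-dependent container on Fedora CoreOS is to run it privileged in the host's network namespace from a systemd unit. Here is a minimal sketch, with the caveat that the unit name, image reference, and command below are illustrative assumptions, not something from the talk:

```ini
# /etc/systemd/system/iptables-manager.service (hypothetical name)
[Unit]
Description=Example host-dependent container that programs host iptables
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
# --privileged and --net=host let the containerized tool see and modify
# the host's firewall rules; the image reference is made up.
ExecStart=/usr/bin/podman run --rm --privileged --net=host \
    quay.io/example/iptables-manager:latest
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Whether a container like this keeps working unmodified on both Fedora CoreOS and RHEL CoreOS is exactly the portability question being discussed here.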
D
E
I think the first step we're going to do here is just building those containers that aren't currently publicly available without a subscription, the ones that are RHEL-based, and rebuilding them in Fedora just to have feature parity, but with a community version. I don't think we have to rebuild all the UBI 8 containers, because there's just no benefit to it, really. Yeah.
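As a hedged sketch of what rebasing one of those images on a community base might look like, a Containerfile along these lines could swap the subscription-gated base for Fedora; the package and binary names here are made up for illustration:

```dockerfile
# Illustrative Containerfile: community rebuild of a RHEL/UBI-based
# image on a Fedora base (all names below are assumptions).
FROM registry.fedoraproject.org/fedora:32

# Install the payload from Fedora's own repositories instead of
# subscription-only RHEL content.
RUN dnf install -y example-operator && \
    dnf clean all

ENTRYPOINT ["/usr/bin/example-operator"]
```

The point of a rebuild like this is feature parity with the RHEL-based image while staying fully buildable without a subscription.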
A
And that might be a good way of doing it. Getting parity is what we want, and then, you know, if third parties want to go and build something out there and host it on operatorhub.io, god bless them, or if the working group, either the OKD one or the Fedora CoreOS one or some other working group, the Fedora container working group, decides to take that on.
A
Then listen, I think what we're really just trying to do today, and I think Dusty's got another talk on community central this week and we're going to be doing a lot of talking around OKD 4, is really getting the word out about how to engage with these communities, how to test the work that we're doing, and to give us your feedback on Fedora CoreOS; to make sure that the cadence of release cycles and stable releases is working for everybody and that we all stay in sync and connected on these things.
A
B
So I would say we've got a few issues that kind of, you know, periodically come up that we're trying to work on next. I mentioned one earlier, which was multi-arch. So we have, you know, some very motivated people in the community that kind of maintain a secondary pipeline for different architectures, and we want to make that, like, part of our official pipeline, right? We want to try to release stable for, you know, all architectures at the same time; we want to have the artifacts show up on our download page.
B
We want to have them signed by Fedora release engineering like our current 64-bit Intel artifacts are. That's something we're going to try to work on next. We mentioned complex root device stuff; that's in the works right now. We're always doing something around networking, it feels like. So we're trying to enhance it so that, for example... right now we default to DHCP when you first bring up a node, which is a sane default, but in some cases it can be problematic.
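For context, in this era of Fedora CoreOS, opting out of DHCP for a node typically means shipping a NetworkManager keyfile through Ignition. A rough sketch in Fedora CoreOS Config (FCC) form follows; the interface name and addresses are placeholders, not anything from the talk:

```yaml
variant: fcos
version: 1.1.0
storage:
  files:
    # Static IPv4 configuration for an (assumed) interface ens2,
    # delivered as a NetworkManager keyfile instead of relying on DHCP.
    - path: /etc/NetworkManager/system-connections/ens2.nmconnection
      mode: 0600
      contents:
        inline: |
          [connection]
          id=ens2
          type=ethernet
          interface-name=ens2

          [ipv4]
          method=manual
          addresses=192.0.2.10/24
          gateway=192.0.2.1
          dns=192.0.2.1
```

A config like this would be transpiled to Ignition JSON and passed to the machine on first boot.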
B
So we're going to try to change it so that if you don't ask for DHCP on that first boot, we're just not going to bring it up, right. Let's see, what else? Oh! So even though we discourage package layering in Fedora CoreOS, sometimes it is advantageous to do that for maybe some small feature that's like a host-level type thing that is just not very easy to containerize, or is a very big maintenance burden to containerize, so we're going to try to make package layering more reliable.
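Package layering itself is done with rpm-ostree on a Fedora CoreOS host; a minimal command sketch, where the package chosen is just an example of a small host-level addition:

```shell
# Layer an extra RPM on top of the immutable base image; the new
# deployment takes effect on the next reboot.
sudo rpm-ostree install tmux
sudo systemctl reboot

# Inspect deployments and layered packages, and remove one later.
rpm-ostree status
sudo rpm-ostree uninstall tmux
```

These commands only apply on an rpm-ostree-based host like Fedora CoreOS, which is part of why layering is discouraged for anything that could instead live in a container.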
B
Obviously, OKD is just now getting to GA, but we want the relationship with OKD to be tighter, right? So we've been doing a lot of foundational work in Fedora CoreOS and haven't had a lot of time; you know, Christian and Vadim have done an amazing job without a lot of help from us, right, and we want to make sure that that integration is better. We also want to look at other Kubernetes distros, like Typhoon, who are also using Fedora CoreOS underneath, right? How can we be a better platform for them?
A
Perfect. Well, I think it's a great relationship, and I think we're just going to have to foster more of it in the coming months and weeks and days and move it forward. If you want to throw back up your final slide with how to get hold of everybody, that might be a great way to end, and we'll definitely have you guys back on and continue to collaborate with you across lots of different communities. So thank you all for joining, and we look forward to many more releases in tandem with OKD and others.