From YouTube: DevConfUSOKD4FCOS
Diane: Well, hello, everybody! Welcome to DevConf.US, and welcome to the "OKD4: OpenShift's Kubernetes and Fedora CoreOS" session. Today I'm happy to have with me Christian and Antonio to help fill out the details on this topic. We're really glad you came; we're really proud of this latest release of OKD, and I'm going to tell you a little bit more about what we're going to talk about today. So, the next slide.
Diane: On today's agenda, first we're going to cover what OKD is. Then we'll jump into an overview of operators and the Operator Framework, and dive a little deeper into one of the operators that's very important to OKD, the Machine Config Operator. Then we're going to take you a little further down the stack to Fedora CoreOS, and we may have a demo or two slid in there. We'll leave some time at the end for questions; we know that's probably going to be through the chat or Slack or whatever facility they give us for DevConf.US.
Diane: So what is OKD? We're going to talk a little bit about it from a historical point of view. First, you probably remember OKD being called Origin back in the day, when it was a Ruby on Rails and MongoDB platform-as-a-service offering. Then, about four or maybe five years ago now, we shifted and rebased on Kubernetes, and OpenShift went through a significant evolution going from OpenShift 3 to 4, rebasing and leveraging operators.
Diane: You'll hear us talk a lot more about that as well. We basically take the OCP (OpenShift Container Platform) codebase and combine it with Fedora CoreOS, and that's an interesting distinction.
Diane: It gives us a pure open source play all the way down. The codebase is all open source for OpenShift, naturally, but there are some things about OpenShift the product, such as some of the images, that are based on RHEL CoreOS. We at Red Hat are committed to having a pure open source offering of each of our products, so we have collaborated with the Fedora community and Fedora CoreOS and come out with a distribution.
Diane: You'll hear us talk about it a lot as OKD4, and it allows us to distribute everything as open source, so we remain true to that commitment. Now, this shift really went from being a platform as a service built on Kubernetes to something more of a self-contained ecosystem.
Diane: It's something that everybody can use freely, as open source, but it also has all the functionality that comes with OCP, just running on Fedora CoreOS. A bit of an example of the difference between OCP and OKD: OKD delivers releases on a much faster cadence. Fedora CoreOS comes out on a faster cadence than RHEL CoreOS, so you get to try out all the new features, and sometimes the bugs, a little sooner than everybody else. And if you're really brave, you can even build and update your clusters from our nightly stream.
Diane: That could be a lot of fun. Really, it's basically a new, highly opinionated (as we sometimes are at Red Hat) but highly flexible Kubernetes-based ecosystem, and it's built around this concept of operators; we'll get more into that. Through the Operator Framework, OKD manages your entire platform, automating the installation, patching, updates, and maintenance of the entire environment, and, very significantly, even updates of the operating system itself.
Diane: This is kind of key, and we'll go into more detail about Fedora CoreOS in a bit, but the full lifecycle is managed by OKD through a specific set of operators, and it's deployable on many, many different infrastructures.
Diane: You can deploy a high-availability three-node cluster, where your worker nodes and your control plane are sharing roles, or you can deploy a full enterprise-grade cluster with dedicated control plane, infrastructure, and worker nodes. If you want to know about these specifically, you can go look at the installer project, which you can find on GitHub under github.com/openshift/installer.
Diane: There you'll see a list of all the available deployment platforms and configurations. At the end of this talk there'll be a slide with more links and more resources, so don't panic. And that's really it at a very high level. All right, well then, I think we're handing it over to Antonio now.
Antonio: Yes. The next point in the agenda is talking about operators. As Diane mentioned, OpenShift made a huge shift from 3.11 to 4, where in 4 we introduced the concept of operators at the cluster level itself. So before diving into the actual OpenShift 4 architecture, and later into the MCO, we need to talk about what an operator is, what it does, and the operator pattern as well.
Antonio: So, next slide, I'll do it myself. Operators are a way of packaging, deploying, and managing a Kubernetes application. To put that into a real-world example, think about a MySQL database: we can think of a MySQL operator as something that is responsible for packaging, installing, and managing the MySQL database itself on a cluster.
Antonio: As for configuring the operators, that part is handled, if you're familiar with Kubernetes, by a custom resource definition. You can think of custom resources as just configuration files, but in this world they are effectively Kubernetes objects stored in etcd. OpenShift 4 has been built leveraging operators, and we can see that they can manage not only applications like MySQL, but also things that are key to the cluster itself.
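As a sketch of that pattern, a hypothetical MySQL operator could be driven by a custom resource like the one below; the API group, kind, and fields here are invented for illustration, not a real operator's API:

```yaml
# Hypothetical custom resource consumed by a MySQL operator.
# The operator watches objects of this kind and reconciles the
# cluster toward the desired state declared in spec.
apiVersion: example.com/v1alpha1
kind: MySQLCluster
metadata:
  name: my-database
spec:
  version: "8.0"        # desired MySQL version
  replicas: 3           # desired number of database instances
  storage:
    size: 10Gi          # persistent volume size per instance
```

An administrator would create this object with the usual Kubernetes tooling, and the operator would do the rest.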
Antonio: What we did in OpenShift 4 is introduce the operator pattern for the components that make up a Kubernetes cluster. The analogy would be having the cluster on autopilot, because many of the things an administrator would usually do, like scaling up a node, take a lot of manual steps to bring up the node and configure it, and that's now done automatically by an operator. We'll look at that specific operator in the next few slides.
Antonio: In this slide we're going to have a look at the key components, in the form of operators, that make up OpenShift 4. The very first operator, the main one responsible for the overall health of the cluster, is the Cluster Version Operator. OpenShift 4 basically has operators for anything that really makes up the cluster; you can see them just below the Cluster Version Operator.
C
There
is
the
cube
api
server,
the
cube
controller
manager,
the
scheduler
lcd,
those
are
all
operators
and
what
the
cluster
version
operator
does
is
making
sure
that
those
components
which
are
still
operators
within
the
cluster
are
at
the
right
version,
and
you
know
this
is
this-
is
really
open
up
the
door
for
for
things
like
automatic
cluster
upgrades,
you
would
just
hit
the
button
in
sync
to
the
new
version
of
the
you
know,
whatever
latest
openshift
or
okd
release
you
have
there
are
you
know
many
others,
operators
which
are
core
to
the
platform
like
the
network,
one,
as
you
can
see,
just
make
sure
that
the
cni
plugins
and
are
there
and
the
the
sdn
is
installed.
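Each core component surfaces its health through a ClusterOperator object that the Cluster Version Operator watches. A trimmed example for the network operator might look roughly like this (fields abbreviated and the version number illustrative):

```yaml
apiVersion: config.openshift.io/v1
kind: ClusterOperator
metadata:
  name: network
status:
  conditions:
  - type: Available      # the operand is running
    status: "True"
  - type: Progressing    # a change is being rolled out
    status: "False"
  - type: Degraded       # something needs admin attention
    status: "False"
  versions:
  - name: operator
    version: 4.5.0       # illustrative version number
```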
Antonio: We now have an operator that takes care of all of this. One takes care of the image registry from the very beginning: it sets up the registry, the route, initial storage, and things like that. So you can see that all the manual steps we used to do before are now handled by the operator itself.
Antonio: Other examples of operators that we have in the 4 release are the monitoring one, which, as the name suggests, is responsible for collecting metrics and displaying them on the console or in any aggregator that you can also install; the ingress operator, which ensures the router is set up; and the storage one, which makes sure that the CSI plugins are installed and the storage classes exist.
Antonio: All these operators are the core of the platform, and, as I said before, what we did in OpenShift 4 was leverage the operator pattern and use it at the core of the cluster. It's something like the cluster managing itself, because all these components, in the form of operators, can handle their own lifecycle in an ordered way, and you'll always have the latest version, automatically synced.
Antonio: The concept itself is really powerful, and OpenShift 4 does a great job of leveraging it.
Antonio: Then we have this other thing, which is still operator related: the Operator Hub. The operators I've talked about so far are core to the cluster itself; they manage the cluster lifecycle. But at some point there will be somebody using the cluster, so OpenShift has this concept of the Operator Hub, which is a community-sourced index of optional operators. You can see some of them there, like Grafana or Argo CD.
Antonio: If you want to install them, the Operator Hub is integrated with the OpenShift console, so any admin can just go there and install additional operators. Those are usually application focused, unlike the ones I talked about before, which are core to the platform. And guess what: there is an operator, which we call the Operator Lifecycle Manager, that takes care of the lifecycle of those additional operators. It handles things like the operator's scope, whether it's cluster-wide or namespace-only; it ensures operators can be updated, manually if you want; it manages permissions; and so on and so forth.
Antonio: You can think of almost anything lifecycle related for an application like Argo or Grafana. All of this brings us to the MCO, which is one of the core components that OpenShift 4 uses, and it relates to the nodes that you have in a cluster. I mentioned earlier that in the early days, before OpenShift 4, in order to onboard a new node you would need many manual steps.
Antonio: You'd need to bring up the actual instance, then configure it, then get the kubelet up and running, join the fleet, stuff like that. All of that isn't necessary anymore, thanks to the operator pattern, and specifically thanks to the Machine Config Operator, or MCO for short. The Machine Config Operator is again a core operator.
Antonio: That means it's managed by the Cluster Version Operator, which I mentioned earlier: the Cluster Version Operator ensures that the Machine Config Operator is always at the latest version. The MCO (I'm just going to say MCO from now on) is the operator that manages the machine configuration, and it does just these two things: it manages the machine configuration and it applies OS updates on the nodes.
Antonio: If you want to change, I don't know, the time zone setting on the whole fleet of nodes that you have in your cluster, you would use the MCO to actually ship that config to the nodes in your cluster. And again, the other super important thing that the MCO does is make sure that your host is always updated.
Antonio: The way the MCO works is really easy if you're coming from the Kubernetes world: we leverage custom resources, and we live by the concept of current versus desired, or spec versus status. What the MCO does is basically compute a diff between what it has and what the admin wants, and after it computes the diff, it just applies it. So it continuously reconciles itself toward the latest spec that the administrator wants. It's like a finite state machine, at the end of the day: there is a continuous loop, like any Kubernetes controller, and it just watches for any changes. In our case, again, those would be customizations, or OS updates coming from wherever the actual OS update is coming from.
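The current-versus-desired loop described here can be sketched in a few lines of Python. This is a toy illustration of the reconcile pattern, not MCO code, and the Config fields are invented for the example:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Config:
    """Toy stand-in for a node's machine configuration."""
    timezone: str
    os_image: str


def reconcile(current: Config, desired: Config) -> tuple[Config, bool]:
    """One pass of a controller loop: compare the current state with
    the desired spec and apply the diff if they differ.

    The real MCO would act on the node here (write files, stage an
    OS update via rpm-ostree, coordinate a reboot).
    """
    if current == desired:
        return current, False  # already reconciled, nothing to do
    return desired, True       # "apply" the diff by adopting the spec


# A controller runs reconcile() in a loop, reacting to changes:
current = Config(timezone="UTC", os_image="v1")
desired = Config(timezone="America/New_York", os_image="v2")
current, changed = reconcile(current, desired)
print(changed, current.timezone)  # prints: True America/New_York
```

Once the states match, subsequent passes report no change, which is exactly the steady state Antonio describes.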
Antonio: So this is the machine config in a nutshell; hopefully that clarifies what it does. The Machine Config Operator leverages mainly one custom resource definition. There are many, but the most important one is the one in this slide. It's a super common Kubernetes object: it has the type, the object metadata, and a spec where an administrator can just go and tweak all the fields.
Antonio: The most important thing about this custom resource is probably the config field, and I'm going to explain the others as well. The config field, which you can see is just a runtime raw extension nowadays, contains the Ignition config, as we leverage Ignition to bring up new machines and install the cluster. So the MCO still leverages the Ignition config to customize the nodes, in a way which is familiar to most cluster administrators.
With Ignition you can, of course, do the usual things that you would otherwise do manually, like creating a systemd unit or timer, disabling a service, changing configuration files; you can do all of this with Ignition. So the config field is probably the most important one in the MachineConfig CR, as it allows the administrator full control over the node. The other notable thing that you'll find on a MachineConfig is the osImageURL. That's another important field, as it covers the second point from the previous slide, where I said the MCO does configuration and OS updates.
Antonio: The osImageURL is nothing more than a pullable container image which contains the actual diff of the OS update, but I think Christian is going to talk more about that later on. The rest of the fields that you can see all relate to the customization side of the MCO; to some extent they're still related to the config, but we split those out so that we can control them more.
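To make the config field concrete, here is a small MachineConfig of the kind described, shipping one file to all worker nodes. The file path and contents are illustrative, and the Ignition spec version inside `spec.config` depends on the cluster release:

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-custom-file
  labels:
    # selects which pool of nodes (master/worker) receives this config
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.1.0          # spec version varies by release
    storage:
      files:
      - path: /etc/example-hello   # illustrative file
        mode: 0644
        contents:
          source: data:,hello%20world
```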
Antonio: And I guess I'll finish with a sub-component of the MCO itself, which is the one responsible for moving the state from current to desired. So say you want to ship a new file to every host in the fleet, masters and workers: you would create a MachineConfig, do all the things that you would do with that MachineConfig, and create it on the cluster.
Antonio: Once the cluster has it and the MCO notices it, the MCO will go and render its vision of the host, and then there is the component that actually takes care of applying that diff: the Machine Config Daemon. The Machine Config Daemon is just a DaemonSet that runs on every node in the cluster, and again, what it does is watch for the changes that the administrator requested and apply them. And, as I said before, the Machine Config Daemon understands the config field of the MachineConfig.
Antonio: For OS updates, it pulls the OSTree commit from that container image, which we call machine-os-content, and then uses rpm-ostree to actually update the system and trigger a reboot.
Christian: Thanks, Antonio. So yeah, next we'll dive deeper into Fedora CoreOS. What is Fedora CoreOS? Next slide, please. In one sentence: Fedora CoreOS is an automatically updating, minimal, monolithic, container-focused operating system, designed for clusters but also operable standalone, optimized for Kubernetes but also great without it.
Christian: A few streams have flown together here. It was two communities, the Container Linux community and the Project Atomic community, which you may know for delivering Red Hat Enterprise Linux Atomic Host, CoreOS, Fedora Atomic Host, and CentOS Atomic Host in the past. These two communities have merged, and we've kind of taken the best of both worlds: most importantly, the Container Linux philosophy (they really pioneered the container-focused operating system), their provisioning stack, and the cloud-native expertise there, and from Atomic Host...
Christian: The OS versioning and security is a major part of the goal of Fedora CoreOS: providing a secure platform for containerized workloads. Fedora CoreOS uses rpm-ostree to create images that are composed out of RPMs, and OSTree is like a Git repository for your operating system. You may know rpm-ostree, or OSTree in general, from a few other projects; for example, Flatpak uses it. It really just commits a file system to a repository and writes a hash.
B
So
it's
very
easy
to
to
follow
back
through
the
stack
and
see
what
came
from
where
and
if
we
compose
a
new
image.
We
have
a
very
clear
delta
of
all
the
files
within
the
file
system
that
have
changed
from
one
commit
to
the
next,
which
also
allows
for
functionality
like
rolling
back
a
commit
or
rebasing
to
a
totally
different
post
operating
system.
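On a running Fedora CoreOS host, this Git-like model is visible through the rpm-ostree CLI. The two subcommands below are real; their output is omitted here because it depends on the host:

```
$ rpm-ostree status     # show the booted and staged deployments with their commit IDs
$ rpm-ostree rollback   # make the previous deployment the default for the next boot
```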
Christian: In the case of OKD, we use the machine-os-content container, which Antonio mentioned, to deliver an OSTree commit encapsulated in a container; we unpack that, write it to disk, and reboot.
Christian: So you get a single identifier for each version of the entire operating system, which makes it very monolithic and very secure. And a very important feature: most of the file system is mounted read-only, so you can only write files in specific places that are enabled for it. That general protection on almost all directories in the file system prevents accidental OS corruption and also other kinds of attacks. Additionally, SELinux is enforcing by default to prevent compromised apps from breaking out of the sandbox. Next slide, please.
Christian: All right, automated provisioning is a big feature. Fedora CoreOS uses Ignition to automate provisioning. You've already heard a little bit about Ignition before, because the Machine Config Operator and the MachineConfig resource actually encapsulate and manage Ignition configuration.
Christian: The Machine Config Operator only supports a subset (I think Antonio mentioned that as well), which is files and systemd units, but within the Ignition config specification there are actually many more features, which the Ignition binary applies at the very first boot of the machine. So you can reformat your drives, repartition...
Christian: So it's really idempotent and supposed to give you no headache whatsoever when doing the configuration. When a Fedora CoreOS machine first runs, it notices where it's running and applies defaults for that platform. So we have just one release artifact, or a few release artifacts, but fewer than there are clouds, because we don't need a release artifact for each cloud.
Christian: Ignition runs only once, as I just mentioned, at the very beginning of your provisioning, when the first boot happens. And Ignition actually doesn't run on the root file system that you later have; it runs within the initramfs. So it sets up the new file system in RAM, writes it to disk, and then reboots into that file system, configured the way you wanted for that machine.
Christian: For slightly more complex tasks, like generating the systemd units for those jobs, you can have a look at the Fedora CoreOS Config spec; it's very similar, and you can transpile a Fedora CoreOS Config into an Ignition configuration. Next slide, please.
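A minimal Fedora CoreOS Config of that era might look like this; fcct (the Fedora CoreOS Config Transpiler, later renamed Butane) turns it into the JSON Ignition config that the machine consumes. The hostname and SSH key below are placeholders:

```yaml
variant: fcos
version: 1.0.0
passwd:
  users:
  - name: core
    ssh_authorized_keys:
    - ssh-rsa AAAA...        # your public key here
storage:
  files:
  - path: /etc/hostname
    mode: 0644
    contents:
      inline: my-fcos-node   # placeholder hostname
```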
Christian: These are the features in use in OpenShift and OKD. We have automated provisioning: openshift-install generates Ignition configs, and when each node is started, that Ignition config is applied.
Christian: No human interaction necessary. A single bootstrap node configuration is about 300 kilobytes, so that's a lot of data conveyed in that Ignition configuration.
Christian: If you want to join the Fedora CoreOS working group, you can find us in any of these places: on IRC, it's the #fedora-coreos channel on Freenode; we have an issue tracker on GitHub; we have a discussion forum on the Fedora Project forum; we have a mailing list; and we also have weekly meetings on IRC, which you can find on the Fedora meeting calendar. Next slide, please.
Christian: This is where we are today. If you want to join the OKD working group and help participate in releasing and improving OpenShift and OKD4, then please talk to us on Slack: we're on the #openshift-dev channel on the Kubernetes Slack, and we're on the OpenShift Commons Slack, if you're a member there; you can find us on any of the channels, definitely on the general channel.
Christian: We have bi-weekly video conference meetings, which you can find on the OKD Fedora calendar linked here, and we have two repositories on GitHub where most of what we do is happening and documented: those are the community and okd repositories in the openshift organization on GitHub. Next slide, please.
Christian: More links: have a look at okd.io, our main home page; you can find everything somewhere in there. The documentation is at docs.okd.io, and then there's the okd repository again, which we use as an issue tracker for technical things, and the community repository, which we use as a tracker for meetings, group tasks, and related things.
Christian: And with that, if we have time for questions after this, I'll be in the chat.
Diane: We'll all be in the chat with you, and we'll try to answer your questions. Please do come to the OKD working group meetings, especially if you're interested in deploying on any interesting configurations; we're always listening and looking for feedback, and happy to help answer any questions too. So look for us all (Antonio, Christian, myself, and others from the working group) in the Slack channels here and at other DevConf sessions.
Diane: Thanks, Christian and Antonio, for taking the time today to record this. Hopefully we gave you enough depth to get you started and interested in participating in this collaboration between the OKD and Fedora CoreOS communities, and in keeping the open source pursuit of happiness alive.