Description
The State of OKD4: OpenShift Kubernetes on Fedora CoreOS - CodeReady Edition
Christian Glombek and Charro Gruver (Red Hat)
OpenShift Commons Gathering KubeCon NA
November 17, 2020
Red Hat's Christian Glombek and Charro Gruver give a quick overview of the current state of the OKD 4 Kubernetes Distribution and where it is going next. Starting with a reminder of where OKD sits within the Kubernetes ecosystem, they follow up with a quick update on where OKD is going with the introduction of version 4.6. They'll close out with a demonstration of the CodeReady Containers developer tool, built from the OKD 4.5 code base for developing locally with ease.
Christian Glombek: Welcome, everybody, to The State of OKD4: OpenShift Kubernetes on Fedora CoreOS, the CodeReady Edition. I'm Christian Glombek, a software engineer and co-chair of the OKD Working Group at Red Hat.
So what is OKD 4? OKD 4 is the community distribution of Kubernetes, specifically the community distribution of OpenShift: it's the OpenShift code base plus Fedora CoreOS. The website is okd.io, where you will find all the information as well. Let's go into a little more detail on what "a community distribution of Kubernetes, plus plus" means.
That "plus plus" is really the important thing here. OpenShift and OKD 4 bring a lot of goodies with them: automated installation, patching, and updates from the OS up. That last part is very important: the operating system has become an implementation detail of the entire cluster. We update the machines' underlying operating system through the cluster and bind their life cycles together, so when you update the cluster, you also update the operating system.
This is a standard feature, and it makes the system so much easier to reason about. Let's look at the graphic here. We have the underlying platform, which can be anything: cloud or bare metal. On top of that we have Linux hosts, and we use Fedora CoreOS as our base operating system in OKD. Fedora CoreOS uses technologies that really enable this use case, like Ignition for first-boot configuration. We don't use cloud-init or anything like that; we have Ignition, a declarative configuration that, on first boot, configures the machine specifically for the environment you're running it in. You can configure almost anything with it: you can write files, format partitions, and write and enable systemd services, among many other things.
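Ignition configs are usually authored in the human-friendly Fedora CoreOS Config (FCC) YAML and then transpiled to Ignition JSON. A minimal sketch of the two capabilities mentioned above, writing a file and enabling a systemd unit; the file path, contents, and unit name are invented for illustration:

```shell
# Write a minimal Fedora CoreOS Config (FCC) that creates a file and
# enables a systemd unit on first boot. Everything in it is illustrative.
cat > example.fcc <<'EOF'
variant: fcos
version: 1.1.0
storage:
  files:
    - path: /etc/motd
      mode: 0644
      contents:
        inline: Provisioned by Ignition on first boot
systemd:
  units:
    - name: hello.service
      enabled: true
      contents: |
        [Unit]
        Description=Run once after first boot
        [Service]
        Type=oneshot
        ExecStart=/usr/bin/echo hello
        [Install]
        WantedBy=multi-user.target
EOF

# The FCC is then transpiled to the Ignition JSON the machine consumes,
# using the Fedora CoreOS Config Transpiler (fcct, since renamed Butane):
#   podman run -i --rm quay.io/coreos/fcct:release --strict \
#     < example.fcc > example.ign
```

The transpile step is left commented out because it needs the container image; the YAML itself shows the declarative shape Ignition works from.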
So it's an image-based system: rpm-ostree takes Fedora RPMs, composes them into an image, and you get the Fedora CoreOS image, which is essentially immutable, and that enhances security. rpm-ostree is often described as "git for operating systems": you really get a commit hash that tells you exactly what is inside the image, and you can go from one commit to the next and update securely that way. Testing and things like that become much easier on our side too, because we can always say that in this commit of the operating system we have these packages and these files, and if anything changes, there's a very clear differential between two commits. Going up from that, we have Kubernetes; obviously, our flavor, OpenShift, is an enhanced Kubernetes.
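That commit-based model is visible directly in the rpm-ostree CLI on a Fedora CoreOS node. A few representative commands; these require an OSTree-based host, so treat them as a sketch rather than something to run here:

```shell
# Representative rpm-ostree commands on a Fedora CoreOS node.
rpm-ostree status           # show booted and pending deployments,
                            # each identified by an ostree commit hash
rpm-ostree upgrade --check  # check whether a newer commit is available
rpm-ostree db diff          # package-level diff between two deployments
rpm-ostree rollback         # boot back into the previous deployment
```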
So we have security, pipelines, and a lot of things that make maintenance of the system, and also development on the system, very easy. On top of that sit the applications, which are what the user actually wants: their workloads. Both the Kubernetes and OpenShift system and the workloads are highly "operatorized": there's this operator pattern, which we use heavily to essentially drive the cluster on autopilot.
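The operator pattern mentioned here is, at its core, a reconcile loop: observe the desired state declared in a resource, compare it with the actual state, and act until they converge. A toy sketch of that idea, where the replica arithmetic stands in for real Kubernetes API calls:

```shell
# Toy reconcile loop: converge actual state toward desired state.
desired=3   # e.g. replicas requested in a custom resource
actual=1    # e.g. replicas currently running

while [ "$actual" -lt "$desired" ]; do
  echo "reconcile: actual=$actual desired=$desired, creating one replica"
  actual=$((actual + 1))   # stand-in for creating a missing replica
done

echo "converged: $actual replicas"
```

A real operator does the same thing continuously, reacting to watch events instead of looping once.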
All right, let's go to the next slide and have a quick look at the platforms we support. It's essentially all of them: we run on bare metal, on virtualized platforms like oVirt, on OpenStack, AWS, Azure, GCP, and vSphere; there's a whole lot of platforms we support, and we're adding to that list all the time.
Today and tomorrow: currently we're at the stable release, 4.5. That was a big milestone for us, getting everything out, ready, and stable, and that's great. We still have a lot to do with regard to getting more operators ready to be installed on top of OKD.
The main mission of the OKD Working Group is actually to facilitate community contributions. That is really what we want to do, and we want to see more of it: we want to enable the community to work with us and to contribute here. Specifically, the collaboration with the OperatorHub and Fedora communities is already very close, and we're constantly working with them.
What we want to do is get more bespoke operators released for OKD. We're working on guidelines for that, for example, to enable the community to actually release their own operators for OKD as well.
Another part of OKD's mission is to enable early adoption of upcoming technologies, so in the future we will provide guidelines and ways to use upcoming technologies like cgroups v2, and we're going to enable Cilium as the networking stack, which I'm looking forward to seeing. This is not going to be the default, because we stay stable with OKD.
Charro Gruver: I'm going to talk to you about something that we're very excited about. We have been collaborating with the Red Hat CodeReady team to create a version of CodeReady Containers that is built off of OKD 4.5 and Fedora CoreOS, and we're very pleased to say that, after a lot of work, we finally have a release ready for you to try out.
This is built off of the same code base as the CodeReady Containers that you can download from Red Hat. But unlike the CodeReady Containers built for OCP, the product that shares its code base with OKD, this is a community-driven version that sits on top of Fedora CoreOS as its operating system. It gives you all the goodness that we provide with a full OKD 4 cluster, but allows you to run it on your laptop or workstation.
Okay, you should now be seeing my screen: an empty terminal and a browser displaying a running instance of CodeReady Containers for OKD on my poor little MacBook Pro. This is a 13-inch i5 MacBook Pro, so it has two physical cores, four vCPUs, and 16 gigabytes of RAM, which unfortunately really is about the minimum of what you need to have a useful experience with CodeReady Containers on your local workstation. Unlike Minishift, which was the precursor, or Minikube, this really is a full instance of OpenShift.
The first step is to go to our home page at okd.io.
One thing you do need to take note of is this pull secret right here. The supported version of CodeReady Containers requires a pull secret that you get by registering with Red Hat. For the community edition we wanted to remove that step, so we're providing what is effectively a fake pull secret that will work for this OKD distribution of CodeReady Containers.
The last thing to note is this link right here to the Getting Started guide. It will take you to the CodeReady Containers documentation, which will tell you a whole lot more about how to configure it, how to run it, and things you can do with it. Once you download the binary, I will note that it is fairly large, so don't be alarmed: it contains a full QCOW instance of Fedora CoreOS. This is a compressed disk image that, when uncompressed, will be about nine gigabytes in size.
Like I said, this is a full-blown OpenShift distribution you're going to be running. You'll execute crc setup to get it started, and you'll have the opportunity to do some configuration, which you can read more about in the documentation. Here you can see I set my memory to 12 gigabytes for this image and the number of virtual CPUs to four; like I said, this is a Core i5 with hyper-threading, which gives me four virtual CPUs.
When I start it, it's going to prompt me for that pull secret; this is where you paste the pull secret you copied. After a few minutes, you'll have a running instance of CodeReady Containers. Take note of a couple of things: there are instructions here for you to log in to your new instance from the command line, or executing crc console will launch your browser and give you a view.
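The whole local workflow described above fits in a handful of crc commands. A sketch, with the memory value in MiB matching the 12 gigabytes mentioned in the demo; these need the crc binary and a pull secret, so they are not run here:

```shell
# Sketch of the CodeReady Containers workflow from the demo.
crc setup                    # one-time host preparation
crc config set memory 12288  # give the VM 12 GiB of RAM
crc config set cpus 4        # four vCPUs, as in the demo
crc start                    # prompts for the pull secret on first run
crc console                  # open the OpenShift web console in a browser
```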
Like the one I have right here: I'm already logged in to my cluster, and I will say that your experience on a machine like mine is going to be better if you rely more on the command line and less on the console; the console will be a little slow to refresh from time to time. If you're blessed with a larger machine with 32 gigabytes of RAM, you'll have no problem whatsoever, but for those of us still living in the 16-gigabyte world, this is what we deal with.
The account that was created works for most things, but if you want to expose the internal registry that comes as part of this, so that you can push custom images, say you're using Podman or Docker to create your own images that you need for something, you'll need to log in as a regular user, because the temporary account won't work for externally accessing that registry. That's mainly why I do this. I'm not going to use it for anything right now; I just wanted to show you a few activities.
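Once the registry is exposed and you're logged in as a regular user, pushing a local image looks roughly like this. The developer account and the registry route hostname shown here are CodeReady Containers defaults of the era, and the image and project names are invented; verify all of them against your own instance:

```shell
# Sketch: push a locally built image to the cluster's internal registry.
# Account, route, image, and project names are assumptions.
oc login -u developer https://api.crc.testing:6443
podman login -u developer -p "$(oc whoami -t)" \
  default-route-openshift-image-registry.apps-crc.testing
podman tag localhost/my-image:latest \
  default-route-openshift-image-registry.apps-crc.testing/my-project/my-image:latest
podman push \
  default-route-openshift-image-registry.apps-crc.testing/my-project/my-image:latest
```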
So I'm going to create the custom resource definition first, and you'll see that our custom resource definition for Tekton was just created. Then I'm going to apply some custom roles and a custom role binding.
Like I said, the console will be a little bit slow from time to time as you're working with this. So if you have a machine that's constrained on resources like mine is, especially when you're running recording software and other things, there will be times when it takes a little while to do its thing. But if we scroll through here to our openshift-operators namespace, we should see it.
That will create our instance of OpenShift Pipelines, and you'll see now we've got some activity happening: our pipelines controller and our webhook are spinning up. You'll notice also that we have a brand-new option over here on the left for Pipelines. When this operator finishes deploying, we will then have the ability to create and deploy Tekton pipelines that are ready to receive our code.
While that's happening, I'm going to do just one other thing here, because there's another operator that I'm a huge fan of: the Namespace Configuration Operator. That's something maintained by the Red Hat Communities of Practice team, and it allows you to create resources that get synchronized across namespaces.
Instead, I create them with the namespace configuration, a new custom resource that gets created for us, and I use those to configure the common resources. That way, when a new project needs to be created for a team that's going to be doing some development, all I have to do is label their namespace with the appropriate label that I've given in the namespace configuration, and all of those resources are not only created but also maintained: if I update something in one place, it gets synchronized across all of those projects.
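The day-two workflow that falls out of this is just labeling. A sketch, where the project name and the label key/value are assumptions standing in for whatever selector the namespace configuration actually uses:

```shell
# Sketch: create a project for a new team and opt it in to the shared
# resources by applying the label the namespace configuration selects on.
# Project name and label are illustrative assumptions.
oc new-project team-a-dev
oc label namespace team-a-dev pipeline-resources=enabled
```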
We should have seen that we had some pipelines created. We may be waiting a moment for the Namespace Configuration Operator to do its thing; let me switch over there real quick and show you this.
Yeah, we're still waiting for the namespace operator to finish deploying. When this operator finishes deploying, it will pick up the namespace configuration that we created.
It will identify any namespaces that are appropriately labeled, and it will synchronize the objects we create from a namespace configuration across any of the projects that have those labels. As soon as that's deployed, your applications are ready to run and deploy.
Give it your GitHub account, give it the branch you want to build from, and click go. I'm going to show you a couple of projects real quick before I finish up here, where you can get this information.
This is github.com/cgruver, the tekton-pipeline-okd4 project. This is where I pulled what I'm demonstrating for you here. The documentation is still a work in progress, but at this point it is usable. So if you want to deploy these pipelines into your CodeReady Containers space, go to this link right here, github.com/cgruver, the tekton-pipeline-okd4 project, and you will be able to replicate what I'm showing you here.
Now we're going to switch topics real quick and talk to you about the working groups that we're all part of. We'd love to invite you to join us in the OKD Working Group; on your screen now you should see several links. This is a very active community of folks from across the globe, and this is not just a Red Hat thing.
B
We've
got
a
very,
very
broad
and
diverse
group
of
people
all
with
a
common
interest
in
the
okd
distribution
of
kubernetes
that
are
working
together
to
to
make
this
project
much
better
and
much
broader.
We're
experimenting
with
all
kinds
of
new
things,
and
if
this
is
something
you're
interested
in
we'd
love
for
you
to
join
us,
and
our
partners
in
this
venture
are
those
who
are
working
on
the
underlying
operating
system.
Fedora
core
os.
Christian Glombek: Right, and we also have a Fedora CoreOS Working Group. We have regular meetings on IRC, there's an issue tracker on GitHub, we have a dedicated discussion forum on the Fedora Project discussion board, there's a mailing list, and you can find the weekly meetings on the Fedora calendar, just like OKD's. This is where we discuss the underlying operating system. Fedora CoreOS is geared toward containerized workloads, so it's very well suited for the OKD use case, but there are other use cases too.
Most discussions nowadays revolve around the package set that is actually to be included in the compose. So if you have any wishes there, please do join the working group, or about anything else, really; it's not just about packages but also the operating system itself and how we build it. If you want to learn about that or have an interest, please do join. And with that, I will leave you some links for resources: at okd.io you'll find essentially everything.
That's the main site. We have the docs site, docs.okd.io. We now have two repositories on GitHub in the openshift organization. The okd repository is our technical issue tracker, so if you run into any problems, please open an issue there. We will route that out to the respective repositories where we think the issue really lies, or maybe it's really just an OKD issue.
We will essentially triage those issues there and send them off to the right spot. Then we have the community repository, which we use to plan our OKD Working Group meetings and essentially anything that revolves around the working group.
And thirdly, there's the code-ready organization on GitHub, where you'll find resources for CodeReady Containers, what Charro just showed us, so check that out too. And with that, I'd like to thank everybody for listening in.