From YouTube: AMA Session with Red Hat Engineers & Project Leads Red Hat OpenShift Commons 2022 Detroit
Description
AMA Session with Red Hat Engineers & Project Leads
Red Hat OpenShift Commons 2022 @ Kubecon/NA
Detroit, Michigan
October 25, 2022
Moderator: Joe Fernandes (Red Hat)
https://commons.openshift.org/gatherings/kubecon-22-oct-25/
B
Thanks, everybody. My name is Joe Fernandes; I lead the OpenShift business here at Red Hat, and my background is on the product side. Ten years ago this fall I was working on the first release of OpenShift, OpenShift 1.0, which we released at the first AWS re:Invent, so this fall will be a ten-year anniversary for both OpenShift and the re:Invent conference. Six years ago…
B
Thank you so much. I know we've had a lot of sessions, so to put it in context: people always ask me, what are you doing from a product roadmap perspective? What are you doing from a strategy perspective? And it's like, how much time do you have, right? But if you think about it, there are really three things, and you've heard about all of them today. The first thing we're trying to do is make a product that can run consistently across a hybrid infrastructure. You're all managing a hybrid infrastructure.

Your developers are consuming that infrastructure, and you heard today about the work we're doing around the data center, around cloud, around edge; x86, Arm, mainframe, embedded systems; fully managed, self-managed; full clusters, single-node clusters, hosted control planes, embedded devices, and so forth. So a big aspect of our roadmap is: how do we run consistently across this increasingly hybrid infrastructure that you're all managing? The second is: how do we help you bring workloads onto that infrastructure? Everybody knows that OpenShift, that Kubernetes, is a great platform for cloud native applications, and in fact the de facto one, but we want to be more than that. We're trying to be a great platform for stateful services, as our earlier speaker Satish described. We want to be a great platform for data science, for AI, for machine learning, and you saw many examples of that today. And we also want to help you modernize the, you know, millions of applications that you already have in-house, which in many cases have been around for decades; hence things like the Konveyor community, around application modernization and migration to a cloud native environment. And then third, we're trying to help you better build, deploy, and manage those applications across that infrastructure.

So: things like all the work we're doing around automation, around pipelines, around GitOps, around the developer experience and IDPs, around managing across this hybrid environment, across all these clusters, and certainly around securing all of that, not only the clusters and the applications but the entire software supply chain. If you think about those three pillars: a hybrid infrastructure, a hybrid set of applications that run across that infrastructure, and all the work we're doing to help you build, deploy, and manage those, those are really the three things that drive our strategy.
D
This is Vinesh; I'm an OpenShift administrator. My question is: you were talking about the disaster recovery operator. How does it deal with image replication and the container registry? How does it deal with replication of the images? Because when you take an application from one cluster to another cluster, the second cluster, where the application has moved to, doesn't have any connection to the first cluster's image registry. So how does the OpenShift disaster recovery operator deal with that?
E
Well, if the cluster is down, then what you're doing is restoring to a cluster that works. So, like I said, the hub cluster in this case is considered to not go down; it's the hub cluster that's actually connecting and restoring the application to the other managed cluster.
F
We have a feature called oc-mirror where you can also mirror your OpenShift content locally; you don't need internet access beyond that. So if your cluster is destroyed, you can still have local copies of all the operators and all the container images that you're using; you can mirror it all locally as well.
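For reference, the local mirroring described above is driven by the oc-mirror plugin's ImageSetConfiguration file. A minimal sketch, assuming a 4.11-era catalog; the registry host, channel, and operator package named here are illustrative, not from the session:

```yaml
# ImageSetConfiguration for the oc-mirror plugin; the registry host,
# release channel, and operator package are placeholders.
kind: ImageSetConfiguration
apiVersion: mirror.openshift.io/v1alpha2
storageConfig:
  registry:
    imageURL: registry.example.com/mirror/metadata  # where oc-mirror keeps its state
mirror:
  platform:
    channels:
    - name: stable-4.11                             # OpenShift release images to mirror
  operators:
  - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.11
    packages:
    - name: compliance-operator                     # mirror only what you need
```

Running `oc mirror --config imageset-config.yaml docker://registry.example.com` would then copy the release and operator images into the local registry, so a rebuilt or disconnected cluster can pull everything from there.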
G
So the Compliance Operator will stay part of the OCP platform. The only thing we're doing is that we've integrated it with ACS, so we'll get more capabilities in ACS, but we keep adding features to the Compliance Operator too. It is, and will remain, our major compliance tool for OpenShift, and we're expanding it to other platforms as well.
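For readers unfamiliar with the Compliance Operator, its day-to-day use looks roughly like the following binding; this is a generic illustration using the operator's shipped CIS profile and default scan schedule, not a configuration shown at the session:

```yaml
# Bind the CIS OpenShift platform profile to the operator's default
# scan schedule; results then show up as ComplianceCheckResult objects.
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSettingBinding
metadata:
  name: cis-scan
  namespace: openshift-compliance
profiles:
- name: ocp4-cis                 # CIS benchmark checks for the platform
  kind: Profile
  apiGroup: compliance.openshift.io/v1alpha1
settingsRef:
  name: default                  # periodic scan schedule shipped with the operator
  kind: ScanSetting
  apiGroup: compliance.openshift.io/v1alpha1
```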
K
Hello, this is David. Is there any plan to add visibility on the operators and which operators need to be updated? Now, with this many clusters being deployed, it's about keeping track of which clusters have certain versions of a certain operator, and that all of them follow; like, our dev and QA clusters have a certain version of, I don't know, the service mesh or something like that.
K
Yeah; like, you go to the clusters view and you see all the versions of OpenShift, right? So is there any plan to integrate something similar to see the operators?
B
Yeah, I mean, I think part of how we architected OpenShift itself, leveraging the operator model and the Operator Lifecycle Manager, is to facilitate that, because all of these technologies are iterating on their own schedule, on their own release cycle, right? We want to enable that while also bringing it all together. And then a big part of it is the work we're doing around cluster management, as well as the work we're doing around telemetry.
B
So cluster management, ACM, and some of the new things that we're working on, like kcp and so forth, are about how you manage across the multi-cluster environment and manage all the components of that multi-cluster environment. And then telemetry, for folks who have enabled telemetry, is about how we get data back to help us be more proactive in identifying issues before they impact you, or to give you insights into what we see happening across the fleet. But I don't know if you have anything else.
K
So it's more like, let's say you have OpenShift Logging, and before OpenShift Logging you need Elasticsearch, right? Many of those times you want to do that upgrade manually, because you need to upgrade Elasticsearch first and then Logging. That's why I have manual install approvals. Or because, for example, last week Advanced Cluster Security just broke something on the vulnerability side, I keep all those operators on manual subscription approval.
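The manual-approval setup described here corresponds to OLM's Subscription resource; a minimal sketch, where the namespace and pinned version are chosen purely for illustration:

```yaml
# With Manual approval, OLM creates an InstallPlan for each new version
# but waits for an admin to approve it, so the operator never upgrades
# unattended. The startingCSV value is illustrative.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: rhacs-operator
  namespace: rhacs-operator
spec:
  name: rhacs-operator                 # package name in the catalog
  channel: stable                      # update channel to follow
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  installPlanApproval: Manual          # hold upgrades for review
  startingCSV: rhacs-operator.v3.72.0  # pin the initial version
```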
L
So, are you asking, if I understood, about this across multiple clusters, like some clusters fall in one category? Yes, I think ACM is exactly the right answer there. The policy and governance features there are designed for that sort of thing: whether you're going to enforce that this group of clusters has a subscription for an operator from a particular channel, maybe a production-grade channel, while some other collection is using a more forward-looking channel.
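In ACM's policy framework, that enforcement looks roughly like the sketch below; the operator, channel, and namespace names are made up for illustration, and the PlacementRule/PlacementBinding that select which group of clusters the policy targets are omitted:

```yaml
# ACM Policy enforcing that the selected clusters subscribe to an
# operator from a specific (e.g., production-grade) channel.
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: enforce-servicemesh-channel
  namespace: policies
spec:
  disabled: false
  remediationAction: enforce
  policy-templates:
  - objectDefinition:
      apiVersion: policy.open-cluster-management.io/v1
      kind: ConfigurationPolicy
      metadata:
        name: servicemesh-subscription
      spec:
        remediationAction: enforce
        severity: medium
        object-templates:
        - complianceType: musthave
          objectDefinition:
            apiVersion: operators.coreos.com/v1alpha1
            kind: Subscription
            metadata:
              name: servicemeshoperator
              namespace: openshift-operators
            spec:
              name: servicemeshoperator
              channel: stable          # the channel this cluster group must use
              source: redhat-operators
              sourceNamespace: openshift-marketplace
```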
K
Right, no, no, I know that; we got that, we thought of that real quick, and support was amazing in giving us a solution.
N
So I also have a question about the OperatorHub and all the operators. Will there be a better way to do lifecycle management? For example, if you have DTAP and you first want to ensure that an operator works correctly, can you pin down the version in the channel for specific environments?
N
No, I mean, first we want to test it in dev, test, and acceptance before we go to production, but there might already be a newer version available in the channel, and when we sync the channel on production it might be a newer release than what we have tested. So would there be a better…
B
Check with Andy afterwards, but yeah, for customers that are running disconnected this is sort of a fact of life: once you disconnect, you're controlling the content that you're feeding into the cluster, and so "latest" means, as Andy said, the latest that you've given the cluster access to locally.
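One way to implement that control in a disconnected setup is to bound the operator versions that oc-mirror copies, so production can only ever see what was already tested; a sketch continuing the earlier oc-mirror example, with placeholder version numbers:

```yaml
# Mirror only the validated version range of an operator; the local
# catalog then cannot offer production anything newer.
kind: ImageSetConfiguration
apiVersion: mirror.openshift.io/v1alpha2
storageConfig:
  local:
    path: ./metadata                   # local state for oc-mirror
mirror:
  operators:
  - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.11
    packages:
    - name: servicemeshoperator
      channels:
      - name: stable
        minVersion: 2.2.3              # oldest version to keep available
        maxVersion: 2.2.3              # newest version production may install
```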
B
So we continue to shrink the footprint, down from three-node clusters to single-node distributed clusters to then single-node OpenShift, but you're still dealing with edge servers there, right? At the edge there are devices, not just servers, so MicroShift is really getting into that edge device space, and that's why the productized version of MicroShift is Red Hat Device Edge, and so forth. I'll be honest: it was inspired by some of the work we saw in the community, including k3s. Frank can talk a little bit more about it.
J
So I could make my life easy and just say, you know, they're all different distributions of Kubernetes for the small footprint, right? You know, k3s, which Rancher uses, and MicroK8s, which is Canonical's.
J
We have MicroShift, but I would like to point out that there's also a different focus, because if you look at some of the other distributions, what you will find is that their focus is actually on the small footprint and, in some cases, on running on as many platforms as possible: running on this Linux distribution or that Linux distribution, and being able to install very simply on those.
J
If you think about it, that's a developer use case, right? And one difference is that from the very beginning we said: for us, we want to solve the production use case on those edge devices, and as a consequence we made a lot of different design choices, because we want to preserve the qualities that I mentioned during the talk. We want to integrate very well with the operating system that is designed for that use case. So that was one of our design goals.
J
We wanted to preserve the qualities of OpenShift: so you deploy in the cloud and then push your workloads to that edge device, and we want to keep that security posture, right? You want the SELinux part of Kubernetes. So our design decisions were very different, because we are targeting that edge device production use case and not a developer use case.
B
Yeah, and what's amazing, too, about all these footprints: we don't know what all people are going to be running. When you think about the data center, public cloud, edge servers, edge devices, we don't know all the different use cases and workloads you're going to be running there, but we do know one thing: you're probably going to be running Linux there.
B
If you're running Linux there, it's probably going to be packaged as Linux containers these days, and then, increasingly, we're seeing Kubernetes show up in all these environments as well, and that's why we as a company need to be there. But we need to be there all the way through, with the production requirements that you need to run that at scale in mission-critical environments.
M
Real quick on MicroShift: I did a MicroShift demo at a Red Hat user group, which I recommend you should all attend if you can, and the easiest thing about it was that it was installed with a couple of RPMs. I had a YAML manifest where I created a namespace, a deployment, a service, and an ingress, and the DNS entry was already created, and I curled it and hit it from my laptop: hello, OpenShift. I didn't have to worry about learning Traefik.
M
I didn't really have to worry about installing another CNI; I didn't have to do anything. It was an RPM and a YAML manifest, and I was up and running in seconds. I literally uninstalled everything, rebooted the machine, and we did it together, and it took about five minutes, and the entire user group was in shock at the fact that it was that easy. You can't do that with k3s.
M
You can't do that with, you know, k0s; you just can't do it. And the best part is we're going to sell it and we're going to support it, you know, so that's another thing you have to take into consideration.
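A sketch of the kind of manifest the demo describes, applied after installing the MicroShift RPMs (for example, `sudo dnf install microshift` on a prepared RHEL machine; package and repository names vary by release). One file creates a namespace, a deployment, a service, and an ingress; the image and hostname are placeholders, not the speaker's actual demo:

```yaml
# hello.yaml: apply with `oc apply -f hello.yaml`, then curl the
# ingress host from another machine. Image and hostname are placeholders.
apiVersion: v1
kind: Namespace
metadata:
  name: hello
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-openshift
  namespace: hello
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-openshift
  template:
    metadata:
      labels:
        app: hello-openshift
    spec:
      containers:
      - name: hello-openshift
        image: quay.io/openshift/origin-hello-openshift  # sample web app
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello-openshift
  namespace: hello
spec:
  selector:
    app: hello-openshift
  ports:
  - port: 8080
    targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-openshift
  namespace: hello
spec:
  rules:
  - host: hello.microshift.example.com                   # placeholder DNS entry
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello-openshift
            port:
              number: 8080
```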
O
So it's been referred to as a sister distribution in a lot of ways, because the components are the OCP components, right; they get pulled down, but it runs on Fedora CoreOS as opposed to Red Hat CoreOS, some of the operators are not there, and obviously the support model and whatnot differ. Because it was on Fedora CoreOS, there wasn't really a clear path for things happening in OKD to get into OCP. Quite frankly, there also wasn't as much community involvement.
Q
Now OKD is kind of consuming the downstream from OCP, right? But I would love to see a world, even as an OpenShift engineer, where we're pushing more of this knowledge out into the community, like we do with CentOS. And so, you know, one of the reasons I love participating in that community is because there is a desire to understand how to contribute back to OCP, and right now the community doesn't understand: if I find a problem in OCP, and maybe I fix it, how would I contribute that back, right?
R
And hi; you didn't see me here today, but I'm Karsten Wade, the community architect for Operate First, and what I wanted to do was go off what you mentioned, Mike: Fedora and CentOS. That's not just a casual reference; there's history here, and I think people jokingly refer to me as the keeper of open source lore.
R
So that's what this is. Back in the day, the Fedora build system was internal to Red Hat, and for the first six releases, through Fedora Core 6, you couldn't touch the build system. There was nothing you could do but create some packages outside, and that was Fedora Extras, and that's what people did. So when the build system called Koji was created in the community, and the tools that went along with it were a better developer experience than what people had inside of Red Hat, that was the draw for the developers, and then the draw for the business, to bring it all together in the community space. And that's why it got renamed at the Fedora 7 release: because there was no longer a "Core" that was inside the company; the build system was outside.

So, effectively, where we are today with OKD is right at that first step of having a build system that's external, in the community, and the analogy matches up 100% from there, in terms of going from this kind of sister, cross-stream thing to actually being able to build an upstream in a way that makes sense for all the people who are invested in it and care about it, who are standing here on the stage and sitting here in the audience.
S
So this is about MicroShift; a bit of it was already clarified earlier, but in particular I wanted to ask: can MicroShift nodes be clustered, similarly to, like, k3s or k0s nodes? And then, a more general question: is there a list, or possibly a published document, saying exactly what the design goals were, as opposed to some of these other systems?
B
I mean, I'd say from a business perspective it was really targeted at that edge device, that IoT gateway, that thing. We couldn't shrink the OpenShift footprint small enough, with everything that comes with it, OLM, operators, and so forth, to get down beyond a certain size; it would sort of fit an edge server model, but not an edge device.
J
To add to what was said earlier: it's super important for the MicroShift team, but also, I would argue, for the other teams at Red Hat, that we just don't build anything out in the open air; we need to anchor it, right? That's why we work with partners like, for example, Lockheed Martin, to actually understand their requirements. And the reason why we're single-node at the moment is because that's where the demand is; we're basically going to prioritize exactly based on customer demand.
B
A couple of things. One is, we don't take major releases lightly; we know they're very disruptive to you as customers and certainly to us as a business. The things that have driven our major releases have been major re-architectures of the underlying platform and the infrastructure that surrounds it, where we had no other choice. So, obviously, two to three was obvious: when we did that in 2015 with OpenShift 3, we pivoted everything. OpenShift 2 was actually built in Ruby, based on Linux containers, but it was our own implementation of containers.
B
There was no container standard. With OpenShift 3 we now had Kubernetes, we had standard containers with Docker, which became OCI, and so forth, so that was sort of a clear pivot. Some people even thought maybe we should have renamed it something else, because it was completely different from the ground up. Three to four is basically the pivot around going immutable at the infrastructure, at the cluster level, right: RHEL to RHEL CoreOS, bringing in the operator model.
B
The cool thing is the workloads didn't change, right; Kubernetes itself was still the basis, and so the next version of Kubernetes just became the basis for OpenShift 4, and the containers that you were running in v3 came forward. So we didn't change the model for the application, but the model for how we build, deploy, and manage clusters changed. Kubernetes itself is still on 1.x, right?
B
That would be one thing that would potentially drive a major release: if Kubernetes itself evolved in a major way. From the platform side, and any of the PMs can tell me if I'm wrong, right now we don't have any plans for a 5.x; we're essentially iterating on the current platform, both in the Kubernetes community and the surrounding communities. Moreover, most of the innovation now, the rate of change, is much faster in projects above and around Kubernetes in the CNCF ecosystem.
B
There has to be a reason, and right now we don't see a reason for that. Even new footprints like MicroShift we developed as sort of a new distribution, right, around the edge device model, versus a major upgrade, and I think that's a relief to a lot of customers. We're still migrating customers from three to four, and we will be, probably, for the next year.
T
…change at all?

B
No. So I'll give you my take on it, and these guys can add. When you look at OKD, or what was previously called Origin in the 3.x days and prior: we built OpenShift, or OKD, on top of CentOS, and that was good; CentOS gave us a stable community distribution of Linux, and then we innovated on top of that. There's one problem, though: a lot of the innovation is in the container runtime itself. So the fact that CentOS was downstream from RHEL meant that we weren't getting the community benefit of innovating ahead of RHEL. You go to version 4, and now we have RHEL CoreOS; there was no CentOS CoreOS for us to build on. We did build Fedora CoreOS, but that creates a problem: now we're upstream, but we're very far upstream, right? OKD is building on the latest version of Fedora, which won't be part of RHEL until RHEL 9, because RHEL doesn't continuously rebase onto Fedora.
B
…you know, with the entire platform from the operating system and runtime on up, but also doing that in a way that feeds directly into our commercial products and our productization process.
O
And just to clarify: there still will be OKD on Fedora CoreOS available. So it's not actually switching from one to the other; it's an option that's going to be provided to the community, yeah.
B
A great question, right. So, like, two of the big things around edge that we had to solve for: one was the constrained infrastructure footprint that we had to run on, and two was how to manage a lot of these instances at scale. So the first thing is, we needed a multi-cluster management solution, not just for edge but even for traditional deployments in the data center and across the public clouds. The next thing we needed to do is, you know, scale it.
B
So we've done a lot of work, both at Red Hat and, we've actually gotten help from IBM Research on this, around scaling ACM to be able to manage, you know, thousands of clusters, and we really need to keep driving that into the tens of thousands. So that's a big, big focus. And then, when you look at related technologies, you know, kcp, HyperShift, and so forth, hub clusters and so forth, that's just continuing to drive on that scale.
B
So yeah, absolutely, we're working on it and we'll continue to work on it, because we need to have a strong multi-cluster management solution to manage the growing number of clusters out there, particularly at the edge, where they scale really, really fast. All right! Well, thank you all; thanks, all, for coming, and thanks to our presenters. That was fantastic.