From YouTube: Panel: The Future of Kubernetes is Control Planes - Red Hat OpenShift Commons 2022 Detroit
Panel: The Future of Kubernetes is Control Planes
Red Hat OpenShift Commons 2022 @ Kubecon/NA
Detroit, Michigan
October 25, 2022
Panelists:
Dr. Stefan Schimanski (Red Hat)
Andy Goldstein (Red Hat)
Tushar Katarki (Red Hat)
Alberto García Lamela (Red Hat)
https://commons.openshift.org/gatherings/kubecon-22-oct-25/
Stefan Schimanski: Yeah, hello, I'm Stefan. I've been involved in Kube since near the beginning, and very much in custom resource definitions; if you use CRDs, lots of the things there are from myself. I've been the OpenShift API server lead for many years, and I moved to kcp. I'm leading that with Andy to build the future of control planes.
Tushar Katarki: Hi everyone, can you hear me? Yeah, I'm Tushar Katarki. I've been on the product team for OpenShift for many years now, and some of you have probably seen me here on stage. Among other things, I have product responsibility for HyperShift as well as kcp, so I'm happy to be here and engaged with you all.
Andy Goldstein: All right, thanks everybody. So I'm going to start with, hopefully, an easy question: what is a control plane? Stefan, do you want to take that one?
Stefan: Yeah. Basically, everybody knows OpenShift: there are nodes which run workloads, and there are three nodes which are special. We used to call them the master nodes; they're the control plane nodes. Basically, they host the source of truth of the cluster, the desired state you define, and every component in the cluster. So there are many, many agents: every kubelet is an agent, every controller is an agent.

They all have to agree about the desired state and the current state of the cluster, and all of those controllers, those agents, plus the state, which is consistent in a way, form the control plane. And everybody knows those three nodes are important for availability: you can lose one and everything is fine; you cannot lose two, because you lose the majority. So high availability, majority, that's all super important for consistency. Basically, that's a control plane: a consistent state that everybody agrees on, plus agents which use that state to establish your desired state in the cluster.
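Stefan's definition, a declared desired state plus agents that reconcile the current state toward it, is visible on any ordinary Kubernetes object. As a minimal illustration (the object name and image are just examples), a Deployment's `spec` is the desired state stored in etcd, and its `status` is the current state the control plane's agents report back:

```yaml
# Desired state: the spec you write and the API server stores in etcd.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3              # you declare "3 replicas"
  selector:
    matchLabels: {app: web}
  template:
    metadata:
      labels: {app: web}
    spec:
      containers:
      - name: web
        image: nginx
# Current state: filled in by controllers and kubelets, e.g.
# status:
#   readyReplicas: 3
#   availableReplicas: 3
```

The controllers' whole job is to drive the gap between those two sections to zero.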
Tushar: Great question. In fact, the previous presentation was talking about what's next for them, and they were talking about the move from pets to cattle. One thing we're focused on in this talk is this: consider the number of Kubernetes clusters and namespaces that have grown over the years. You can go look at the CNCF survey, or, I mean, we know how many customers we have and how their OpenShift clusters and namespaces are growing, etc. Based on that, we know that's happening. The pets and cattle analogy makes sense: clusters cannot be thought about as pets anymore, but more as cattle, if you remember that familiar analogy from the early cloud days. So I think one of the main things is that those things are proliferating, and there is a real need for that. But now, how do I manage my applications and workloads as well?

As you know, there's day-two management of all these clusters, and compliance and security; I think we touched upon that even when we did the roadmap presentation. So that's really where customers and users are going next: the transition from clusters as pets to cattle, with even namespaces included in that.
Alberto García Lamela: Thank you. Okay, yeah, so to put some context on that, let me get back to your previous question about what the control plane is. People need to run workloads, right? And for that you need compute, and the more workloads you run, the more compute you need. So as that complexity increases, you start to look into clusters.
Alberto: That's right. So with this topology, with hosted control planes, you basically have a quicker way to get from nothing to a cluster up and running. Going from nothing to value by design is also cheaper, because you don't need dedicated infrastructure for running your Kubernetes control plane, as with traditional standalone OpenShift.
Andy: Thanks. So, Tushar, are there any other reasons you've seen, from a product perspective, why we're interested in working on HyperShift for the community?
Tushar: Yeah, I think the other one I'd add, and it's related, is: how do you scale? We spoke about the hub-and-spoke model earlier, especially when you think about edge now. I mean, when we started, maybe there were a few clusters, tens of clusters; now there are hundreds of clusters.

Now, as you go towards the edge, there are thousands of clusters, and at the far edge we are potentially talking about tens of thousands of clusters. So the question is: how do you scale? We talked about hub and spoke, and really HyperShift, or hosted control planes, which is officially what we call it now (HyperShift is the upstream community project), allows you to scale in that model.

How do you scale with a hub cluster and spoke clusters, or a hub of hubs, which is what we talk about; you must have heard about that before. So that's one of the big things in my mind: scaling cluster deployments to thousands and tens of thousands of clusters is one thing I'll definitely add. Are there other benefits?

You can run mixed clusters: you can run the control plane on Arm and take advantage of cost savings, whereas your tenant clusters can be x86, for example, or could be heavy on GPUs. You can do all those things much better. I'm not saying you cannot do it with standalone clusters; it's just much better, much easier, much faster with HyperShift.
Andy: And is this something that we have today? Can people go play around with it? What's the current status?
Tushar: I mean, Alberto will tell you about it from an upstream project perspective. From a product perspective, it is Tech Preview right now for AWS and bare metal, and we're trying to get that to GA, and you'll see us adding more platforms next year as we go along. But upstream is probably farther along, in that sense.
Andy: Thanks. So, Stefan, you're here with me to represent kcp, so I'm curious. We've been talking about trying to improve the efficiency of control planes for clusters, where we're trying to move the control planes into pods. kcp is a little bit of a different thing. Can you tell us a little bit about that?
Stefan: Yeah. So we realized Kubernetes has been built around containers for years, right? If you read the Kubernetes website, it's a production-grade container orchestrator, basically. So it orchestrates and deploys workloads. But there are more and more, not workloads, but services on top of Kubernetes which are actually not about containers.

There are controllers which make cloud resources from AWS available, like the ACK project, or there's Crossplane, which also builds something like controllers, in a sense, and APIs, and it doesn't care that there are actually pods running or anything like that. So there's lots of value in the Kubernetes project which is actually not connected to container orchestration.

So what we thought about is: what if we take Kubernetes but remove everything about pods? Everything like deployments, pods, everything is gone; forget about that. And at the same time, we're thinking about big clusters. Everybody knows the problem of big clusters, big OpenShift clusters: you have to share them with users. Users are usually not administrators in the clusters; they cannot install operators, or are limited in some sense; they cannot touch CRDs.

So we built something that is basically what you know as a namespace, but with increased isolation around the namespace, so it feels like its own cluster. It shares etcd, so it's still basically on the same hardware, running in the same control plane, but every user is again empowered to do whatever they want in this small cluster, which we call a logical cluster, or a workspace. And so we don't have pods anymore. We'll come to that later; I think pods come back. But for the moment, think about an empty, generic API server.
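As a sketch of the model Stefan describes, kcp exposes workspaces themselves through a Kubernetes-style API, so a tenant's "small cluster" is just another declared object. The exact group, version, and kind have shifted between kcp releases, so treat the manifest below as illustrative rather than authoritative:

```yaml
# Illustrative only: a kcp workspace declared like any other Kubernetes object.
# The API group/version/kind varied across kcp releases; this reflects the
# tenancy.kcp.dev group from around the time of this talk.
apiVersion: tenancy.kcp.dev/v1alpha1
kind: ClusterWorkspace
metadata:
  name: team-a
spec:
  type:
    name: universal   # a generic, pod-less API server scoped to this tenant
```

Inside `team-a`, the user is an admin again: they can create CRDs and run controllers against them without touching any other tenant's workspace.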
Stefan: Many, many workspaces, and every user is again an admin. And what can you do on such a platform? What can you offer a user inside such a cluster, to make use of that, to bring value into this model? So services are a big topic; software as a service, I think we'll come to that. So we have removed something, we've changed something at the core of Kubernetes, and we ask ourselves: does this have value? We have use cases for that, which we will talk about, and the big question is: can this change how we think about control planes?
Stefan: There's one more dimension: scale. How many namespaces do we have in OpenShift? An OpenShift cluster usually lives in some region at some cloud provider, for example, or in your data center; basically, this one OpenShift is just living there in this location, and you have many of them. Imagine Kubernetes, or kcp, spread worldwide. So you have workspaces and you don't care about where they're really running, on which hardware; they are just there.
Stefan: The answer is no. So what we're exploring is how to use this model, this new model of planet-scale workspaces. And imagine you have many workload clusters, so clusters are still running the workloads, but at a certain size, and Tushar was mentioning this already, if you have many clusters, suddenly at some point you will have the desire: I want a single source of truth for everything I'm deploying. And kcp can be that.
Andy: Thank you. So, Tushar, put your product hat on: what excites you about the possibilities of kcp and where we can take it, and maybe HyperShift as well? Can we put them all together and come up with something that's really powerful?
Tushar: I mean, there are lots of exciting things there, for sure. The problem spaces I think we're going after, as I said, are multiple clusters and multiple namespaces, and when I say multiple, I mean lots of them. How do you manage application and workload deployment across all of these, in an agnostic fashion? If there are 1, 10, 15, you can name them.

If there are thousands of them, then you need some logic, some piece of software, some system, to be able to understand what to do: where should I place this workload? And then there's also the concept, as Stefan was talking about, that there are lots of services.

The world is a mix of applications that I write, and applications or services that I consume as a developer. For those applications that I consume, I need a way to publish them and consume them, and to do it at scale, in a Kubernetes-based fashion, if that's your ecosystem, in a declarative model. That's definitely one of them. I think those are just some of the things that come to mind.

Then there is also the notion that this isn't just multiple clusters: what if these clusters are across multiple regions? What if this is across multiple clouds? What if it goes all the way to the edge? I think that's yet another dimension of the problem space that we think we can address with kcp and HyperShift.

So in some ways, to me, at least simplistically: HyperShift has allowed me to create lots of physical clusters on demand, quickly, at scale, and cheaply, and now I need a way to deploy and consume applications and services on those physical clusters in an easy, scalable way. I think that's the TL;DR for me.
Andy: Right, thanks. So I'm going to try to weave HyperShift and kcp together a little bit more here. So, Alberto: if I wanted to create a new control plane using HyperShift, how do I do that? What sort of interface or APIs does it offer?
Alberto: Yeah. So what HyperShift gives you is a kind of generic API to spin up new clusters in a given management cluster, where you end up with that decoupling between the hosted control plane and the data plane. We have a consumer-facing API called HostedCluster, where you can specify the different parameters you decide for your cluster, like infrastructure, the type of cloud where you want to run it, that kind of thing. And then you have NodePools, where you can specify the properties for your data plane.
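The two APIs Alberto names, HostedCluster and NodePool, are CRDs in the HyperShift project. A trimmed sketch of the pair (fields abbreviated, names and values hypothetical; the API version has evolved across releases) looks roughly like:

```yaml
# Sketch of HyperShift's consumer-facing APIs; fields trimmed, values hypothetical.
apiVersion: hypershift.openshift.io/v1beta1
kind: HostedCluster          # the control plane you are requesting
metadata:
  name: my-cluster
  namespace: clusters
spec:
  release:
    image: quay.io/openshift-release-dev/ocp-release:4.12.0-x86_64
  platform:
    type: AWS                # the infrastructure/cloud choice Alberto mentions
---
apiVersion: hypershift.openshift.io/v1beta1
kind: NodePool               # the data plane: worker nodes for that cluster
metadata:
  name: my-cluster-workers
  namespace: clusters
spec:
  clusterName: my-cluster
  replicas: 3                # data-plane properties live here, not in HostedCluster
  platform:
    type: AWS
```

The split is the point: the HostedCluster's control plane runs as pods on the management cluster, while NodePools describe the machines that join it as workers.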
Stefan: That's a detail for your IT team, where the management clusters are running. So you need a central place where you can create the HostedCluster object, and the IT team will make sure that this is somehow instantiated in one of the management clusters. Eventually you will get an OpenShift cluster underneath, and the address in the kubeconfig is sent back to you as a user. But the user wants this one single place: basically an API endpoint, api.<your company name>, and there is everything for instantiating resources.
B
Yeah,
so
let's,
let's
dig
into
that
a
little
bit
deeper,
so
you
and
I
have
been
talking
before
about
trying
to
see
this
as
a
new
way
to
enable
software
as
a
service
in
openshift
and
kubernetes.
So
what
sort
of
possibilities
can
we
potentially
get
where
we
have
kcp,
and
we
want
to
offer
database
as
a
service
or
anything
as
a
service
like?
How
can
we
do
that
with
kcp,
and
how
does
it
fit
into
something
like
openshift
or
kubernetes?.
Stefan: So I've been involved in CRDs from the beginning and implemented lots of the things there, so yeah, these are cool. Everybody knows them and uses them everywhere. We at Red Hat invented operators, and with operators, you know, there's OperatorHub, or the operator store in the console, where you can find databases, any database you want, basically. But still, the model is that you install something into your cluster. It's not really software as a service.
Stefan: When you consume a database from a cloud provider, there is some API, maybe some console, some UI, and you click to create a Postgres database. You get it, but you don't see how it's executed; you don't see pods. So the difference with software as a service is that actually nothing is running in your cluster; it's completely provided by a service provider. And this model of service consumption does not exist in Kubernetes or OpenShift; we only have the operator model.
Stefan: Imagine an IT department wants to become a service provider for other clusters. So you have hundreds of clusters, hundreds of users of your database service, and you want to build these things. You have to run an operator somewhere, in your data center, in your database clusters, and the users, your developers, for example, shouldn't see how you actually run the databases. The user shouldn't see that there are actually pods running Postgres.
Stefan: Of course they exist, but we want to hide that. Imagine there's a persona of a service provider in your company, a database team, something like that, and there are developers who want to consume databases, and they just care about the API. So the contract between the two teams should be just an API, a CRD-based API, but just that. They shouldn't have to install an operator and maintain it and see when it doesn't work and fix it and so on.
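A minimal sketch of such a CRD-based contract (all names here are hypothetical): the database team publishes only this API, and the fact that an operator in the provider's own clusters serves it by running Postgres pods stays invisible to the consumer, who only ever creates `PostgresInstance` objects:

```yaml
# Hypothetical CRD: the entire contract between the database team and its consumers.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: postgresinstances.db.example.com
spec:
  group: db.example.com
  scope: Namespaced
  names:
    plural: postgresinstances
    singular: postgresinstance
    kind: PostgresInstance
  versions:
  - name: v1alpha1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              version:   {type: string}    # e.g. "14"
              storageGB: {type: integer}   # requested storage, in GB
```

In the kcp model, the provider exports this API and consumers bind it into their own workspaces, so no operator installation or maintenance falls on the consuming team.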