From YouTube: OCB: Emerging Multi-cluster Patterns: HyperShift and Kubernetes Control Planes - Adel Zaalouk
Description
Let's transcend! In OpenShift / Kubernetes, a pod is the most basic resource that all other orchestration primitives are built upon; pods represent workloads. What if we took the same concepts we created to lifecycle, orchestrate, and schedule pods, and applied them to whole clusters? But what is a whole cluster? Does it need to be whole? Can we apply dualism to clusters, and is it worth it?
We bring more questions than answers. Join us if you are interested in musing about potential futures of virtual, dualistic, logically-centralized, physically-distributed multi-clusters (I know!)
A: Well, hello everybody, and welcome to another OpenShift Commons briefing. As we like to do on Mondays, we'd like to talk about upstream projects, new ideas, and new technologies, and today we're really happy to have Adel Zaalouk with us, who is a product manager in the OpenShift group at Red Hat, and he's going to talk about some emerging multi-cluster patterns. You've probably heard about Kubernetes control planes and other things along that line that have been talked about and discussed in different community groups, but today is Adel's take on it, and I'm really looking forward to it. So Adel, introduce yourself and your background a little bit, and then take us down this path.
B: And I'm a product manager for OpenShift. My experience is a mixture of networking, consulting, development, research, and recently product management. Today I'm going to be talking about multi-cluster patterns and, as you see here, the subtitle is a bit confusing, but I'm going to explain it along the way: there's a path to virtual, dualistic, logically-centralized, physically-distributed clusters, and I would like to start by having us look at this.
B: I made this figure because, given my background — I come from networking — the first thing I thought of doing was mapping the stack that we have with OpenShift onto something like an OSI model, and that turned out to be the "OpenShift interconnection model." That does not exist anywhere; it is something I came up with, so I'm sorry to the OSI folks. That's basically the reason I did it.
B
That
is
because
I
oversized
a
representation
of
layers
of
what
they
do,
and
then
there
are
protocols
that
are
basically
interconnecting
with
one
another
at
different
layers
of
the
stack
and
there
are
even
specialized
stacks,
like
the
tcp
model.
That
kinds
of
like
you,
for
example,
could
have
an
application
that
would
run
on
udp
or
an
application
that
would
run
on
tcp
and
then
there's
ip
on
the
layer,
3
layer
and
then
there's
a
physical
layer
that
everything
runs
on
top.
I
think
of
openshift,
or
the
stack
that
we
provide
is
basically
similar.
B
Red
hat
has
been
historically
known
for
red
hat
linux.
That's
the
basis
where
we
build
on
top
everything,
and
then
openshift
is
just
an
addition
that
brings
all
the
goodies
of
upstream
kubernetes
to
to
to
to
to
customers
and
and
and
to
you-
and
this
is
con.
This
is
basically
comes
in
different
shapes
forms.
B: If we think about Kubernetes, you'll find a lot of cluster interfaces being defined upstream. Initially Kubernetes didn't have that, but as time goes on, more and more standards get defined: Cluster API, which deals with how machines get created at the cluster level on any infrastructure provider; CNI, which deals with the networking of the cluster; the Container Runtime Interface, which deals with what you use as a runtime to run your workloads, whether that is a normal or sandboxed runtime or any other type that you choose; or the CSI layer, which basically consists of storage plugins, and so on.
B: I don't have time to talk about each of these layers in detail, but I can tell you that each layer could span an entire session on its own. With OpenShift, we basically bring all of that with support, and add on top a lot of layers that help the usability of these things.
B: You have an external control plane — that's basically what we're going to explain a bit today, which is more an architectural pattern, but still bringing OpenShift — and that choice brings you more and more use cases. Then we go a layer up, and you have multi-cluster management and orchestration. There's a lot happening here, and I'm sure I forgot to add many things — I was lazy at some point and didn't add things like image streams, image registries, GitOps pipelines.
B: All these things fit together as blocks, and the nice thing is that we provide them as choices. We could say "it depends on the use case" — we have the luxury to say "it depends" because we have these blocks that can interconnect with one another in any way, shape, or form. In my opinion, that's where we have value: we provide these building blocks, and you come and look at them and say, "Oh, this makes sense. I have this use case, and I would like to know this."
B: I would like to apply, for example, policy, or run multi-cluster with a disconnected cluster, and run it with an externalized control plane. So you can use it in the same way the OSI model is built for networking protocols that match and run on top of one another. In this session today, I'm going to focus on only three blocks spanning two layers.
B: So let's start. As I said, the term that was a bit complicated was "virtual, logically-centralized, dualistic, physically-distributed." I'm going to try to explain what I mean by each of these, so bear with me.
B: The first layer — I'm going to start from the top, taking part of the multi-cluster management story, a very small part of it, not all the blocks — and I'm going to talk about kcp. To me, kcp represents these two blocks at the top. It could represent more, but I'm only going to talk about the two blocks today. If you look at the GitHub repo for kcp, you're going to find it is defined as a minimal Kubernetes API server.
B: It exposes just enough resources, and it extends the API server, or makes it pluggable enough, so that you can also define resources — or get rid of the resources you don't need. In addition, if you look at the documentation, you're going to find three major use cases that kcp tries to address. The first one is that minimalistic API server: if you look at the Kubernetes API set, you're going to find a ton of resources.
B: You don't necessarily need all of those, so kcp strips out everything except the things that are needed, and provides you with an interface that you can interact with, without all the overhead of Kubernetes components like kube-controller-manager or anything that deals with pods or deployments and all those things.
B: But the question that gets asked is: what if we take that a layer up and make each logical cluster represent a tenant, for example, and then orchestrate multi-tenancy using cluster-level primitives instead of namespaces? That presents a stronger bubble, which might be appealing to some. And the third one is transparent multi-cluster, and the argument here is: whatever you want to apply to a cluster should also work with kcp — kcp presents one layer on top.
B: You could attach multiple child clusters to kcp, so whatever gets deployed at the kcp layer should not be problematic — it should not be a problem to propagate that resource you just deployed down to the child clusters. That's basically transparency. I also like to call it lossless multi-cluster, because whatever you apply at the top layer doesn't lose value or gain entropy when it gets translated into these underlying clusters. And there is also a Slack channel for kcp.
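The "lossless" propagation idea can be sketched in a few lines of Python. This is a toy model under my own assumptions, not kcp's actual implementation — the cluster names and resource shape here are made up for illustration:

```python
# Toy model of transparent ("lossless") multi-cluster propagation:
# whatever is applied at the top (virtual) layer is replicated
# verbatim to every attached child cluster, with no information loss.

def apply_at_top(resource, child_clusters):
    """Apply a resource at the virtual layer and fan it out unchanged."""
    for store in child_clusters.values():
        # Each child receives an identical copy -- nothing is dropped
        # or rewritten on the way down.
        store[resource["name"]] = dict(resource)
    return resource

children = {"child-1": {}, "child-2": {}}
deployment = {"name": "web", "kind": "Deployment", "replicas": 3}
apply_at_top(deployment, children)

assert children["child-1"]["web"] == deployment  # lossless on child-1
assert children["child-2"]["web"] == deployment  # and on child-2
```

The point of the sketch is only the invariant: the copy on every child equals the resource applied at the top.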
B: If you want to get more details — I think Clayton, Jason, and David have talked a lot about the topics I just mentioned here, and they go into the details of how things would look in the future. So if you're interested, go join that Slack channel on the Kubernetes Slack and join the discussions; they also have a community meeting. So please do.
B: The second part that I would like to talk about today — I'm not going to get into the weeds of HyperShift, but I want to present it because I'm going to present a use case that is actually more generic and can be used by other projects; it's a pattern I'm presenting more than a product. HyperShift deploys OpenShift, but it deploys it with a slightly different architectural pattern. That's the dualism.
B: And the dualistic part is — you know, in philosophy, dualism is basically that you could have the mind and the body residing in different places yet still functioning. So could we do that? Could we take the mind of OpenShift out, put it somewhere else, and still have a functioning cluster? That's basically the question we ask in HyperShift — and yes, it is possible. You can take the control plane, the logic of that cluster, and deploy it somewhere else.
B: Not only that, but you could also centralize it, so you could have multiple minds of different people working together under the same body — we call that a management cluster. You can have one management cluster that hosts the logic of all these clusters — the control planes — in the same place. And the physical distribution is basically that you could have your nodes physically distributed across regions, across zones, across cloud providers.
B: With normal OpenShift, you have your nodes, and you require a certain minimum number of nodes, which in most cases needs to be collocated — the control plane with the workers. In HyperShift, we're removing that requirement of multiple nodes, and potentially we could host more than one logic, more than one mind, more than one control plane on the same node, scale that up or down, and use all the Kubernetes primitives to do so. The HyperShift repo is upstream and it's open source; you can go to GitHub, raise issues, and try it out.
B: It's still very early, in the early phases, so it's a good time to ask questions and challenge stuff. Why would we want a dualistic or externalized control plane? I included here a set of advantages or features. One thing is that we're not getting rid of the collocated requirement — we're complementing it. Some customers, some users want that collocation; they don't want dualism. They say, "No, I want my mind and body in the same place; I don't want to be a freak," while others see the benefits of having the mind separated. So we're open to the two flavors.
B: With HyperShift, you get immediate clusters, because you don't have to comply with a minimum-number-of-nodes requirement to get a cluster. You're running on an existing cluster and hosting just the control plane as pods, so you're getting kind of immediate clusters, and the control planes are cheaper, because you can now host multiple of these control planes. Instead of one control plane per three nodes, you could host multiple control planes.
B: Potentially on one node. We're using Kubernetes to host the Kubernetes control plane, so you get the goodies of Kubernetes: I could scale the pods up and so on. And by the way, the pattern of dualism — control plane and workers separated — is not new. It has been used; you'll find cloud providers and so on using it. But this here is bringing it to OpenShift: it applies that pattern to OpenShift and brings you all the benefits of that entire stack I showed you — the choice — in that architectural pattern.
B: You also get lifecycle decoupling, because you could upgrade, for example, the management cluster without affecting the workload clusters or the physical nodes. They could even be on different versions; they could even be on different architectures.
B: You could also have your SREs focusing — instead of having to keep kubeconfigs for hundreds of clusters, they just have one kubeconfig, and then they can log in or have access.
B: If you allow them to debug the control plane of your cluster — or if you are the one providing the clusters — then your SREs get the benefit of surfing across control planes and easily detecting a problem, because observability becomes easier, logging becomes easier, and all these things become easier. Not to say that it's not possible otherwise — it's definitely possible with multi-cluster management, the other blocks that I didn't talk about; you could also have multi-cluster centralized logging and centralized monitoring. It's just that the footprint here might be a bit lower. Cool.
B: Now comes the part about the use case. I'm sure it's not a one-to-one relationship — I'm sure it's not only HyperShift that loves kcp; I'm sure kcp has a lot of other use cases. As I said, HyperShift is just the pattern I'm presenting today. Lots of other controllers could reuse that pattern, or build upon it, or do completely different patterns. But it's kind of like, you know: kcp could love HyperShift, HyperShift could love kcp, other things could love kcp!
B: If you remember the figure on the right here — I know the text is not visible — the block I'm highlighting was "Kubernetes-native clusters," and by that I mean we want to apply all the Kubernetes concepts, but at the cluster layer. So, scheduling: I have a pod, I have a scheduler, and kube-scheduler schedules the pod. Can I apply that to a cluster? Can I apply auto-scaling to a cluster?
B: Can I apply leader election — all these primitives that exist in Kubernetes? We want to take them a notch up and do that with multi-cluster, with a virtual pane of glass, which is kcp. So kcp would act as the virtual interface to multiple management clusters, and as I said, in HyperShift the management cluster is the place where you host the control planes of your clusters. So in that case I have multiple management clusters.
B: The CRD puller, for example: whenever I define a CRD in one of the child clusters, it gets pulled up to the virtual cluster, so it builds awareness of that CRD. And the syncer is basically an agent that lives on the child cluster to replicate components that I create. I will go into more detail on each, and today I'm going to extend the syncer and the cluster controller a bit, to match the use case that I have in mind. All right — so, as I said, that's another view of it.
B: Lightweight it is, actually: the API server plus etcd is now a single binary, and it's very easy to run, as I will show later in the demo.
B: So the first thing that we want to do: HyperShift defines a cluster by a resource named HostedCluster. So if I'm a user and I create a HostedCluster, I would like, for example, kcp — or the splitter pattern — to schedule the whole cluster and place it on the management cluster that has more resources. In that case, a user creates a cluster; the splitter watches and says: let's see, management cluster one doesn't have any resources, management cluster two has them.
B: On the other hand, you could have pull mode, and that is useful in cases where, let's say, I don't have full connectivity between the control center and the management cluster; I just want awareness to be one-way. In that case, the syncer could watch resources getting created on the virtual cluster and replicate them locally. So the hosted cluster becomes virtual.
B: That's why it's called a virtual hosted cluster here. That resource then gets replicated locally for the HyperShift operator, which is an operator that takes a HostedCluster and starts creating clusters and namespaces for each cluster. That's pull mode, because it's pulling the resource instead of it being pushed from above. Now, can I use pull mode to schedule resources across more than one cluster? That's the question I asked myself, and the answer is "obviously" — well.
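The pull mode just described can be sketched as a reconciliation loop on the management-cluster side. This is a hypothetical toy model — the real syncer watches the kcp API server with informers, not a dict:

```python
# Sketch of "pull mode": an agent (syncer) on the management cluster
# watches the virtual cluster and replicates resources locally.
# Awareness is one-way: nothing pushes down from the virtual layer.

def pull_sync(virtual_cluster, local_store):
    """One reconciliation pass: copy anything not yet present locally."""
    created = []
    for name, resource in virtual_cluster.items():
        if name not in local_store:
            local_store[name] = dict(resource)  # replicate locally
            created.append(name)
    return created

virtual = {"hostedcluster-a": {"kind": "HostedCluster"}}
local = {}
assert pull_sync(virtual, local) == ["hostedcluster-a"]
assert pull_sync(virtual, local) == []  # second pass: nothing new
```

Note the problem the speaker raises next falls out of this sketch directly: if two management clusters each run this loop against the same virtual cluster, both will replicate the resource.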
B: It was not so obvious for me, because I haven't coded for a long time, but when I was coding this I realized: oh, but then you have a resource, it gets deployed on the control center, or the virtual cluster, and it is being watched by two management clusters at the same time. So the queue for each has that resource, and it will be created no matter what — you don't have a chance to decide or schedule anything; the syncers are going to watch and create at the same time.
B: So it's not useful for scheduling to one management cluster versus the other. It is, however, useful if you're thinking about HA, where basically you want to make sure you have the same resource in more than one place. That's a good pattern if you want high availability in any use case.
B: And then there's a mixture of these two approaches, and that's basically the way I used it. I use the syncer as a kind of informant: it watches the resources on the management cluster it is deployed on, and it tells the controller manager — it talks to kcp — and says, "Hey, the budget for this cluster is, for example, seven." kcp has its own Cluster resource, so the update actually happens there. And on the other side, management cluster two uses the same pattern and says, "I have this budget; I have eight namespaces" — or seven. Then the cluster controller looks at the budget of each, locally, because it has access to the local resources.
B: In that case, you see the loop happening here, and the decision will say: okay, seven is less than eight; management cluster one has fewer namespaces, meaning it has fewer clusters — because in HyperShift, a cluster gets a namespace — and so it decides to assign management cluster one, for example, as the owner of the cluster resource, or the virtual HostedCluster resource that gets deployed. So the syncer watches again and finds that its name got assigned to that resource.
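The budget-based decision described here can be sketched as follows. This is a toy version with made-up names — the real controller manager works against kcp's resources, and the "budget" is the namespace count each syncer reports:

```python
# Sketch of budget-based scheduling: each syncer reports its
# management cluster's namespace count ("budget"); the controller
# assigns a new HostedCluster to the cluster with the fewest
# namespaces -- i.e. the fewest hosted clusters, one namespace each.

def assign_owner(budgets):
    """Pick the management cluster with the smallest budget."""
    return min(budgets, key=budgets.get)

def reconcile(hosted_cluster, budgets):
    owner = assign_owner(budgets)
    hosted_cluster["annotations"] = {"owner": owner}
    budgets[owner] += 1  # the new cluster will add one namespace
    return hosted_cluster

budgets = {"management-1": 7, "management-2": 8}
hc = reconcile({"name": "w1"}, budgets)
assert hc["annotations"]["owner"] == "management-1"  # 7 < 8
assert budgets["management-1"] == 8
```

Each syncer then only replicates resources whose owner annotation matches its own cluster name, which is what turns the dumb pull loop into a scheduling decision.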
B: The same way the scheduler does it with a pod: when a pod gets deployed, the node tells the scheduler it could host it — that it could take care of running that pod on its resources — and it's the same way here. The syncer tells the controller manager, "I have more resources to schedule that resource"; the controller manager decides to assign management cluster one as the owner of that resource; and then the syncer simply does its job and replicates that resource locally, and from there on, normal operations happen.
B: Basically, the HyperShift operator would then take that HostedCluster resource that got deployed in that cluster, and then do its job to create namespaces. So in that sense, it's more or less similar to how Kubernetes does its things. You could even take that approach and shard it a bit, so you could have one syncer that covers a region, for example, and then the resources get deployed there. Now, I haven't tried that, and I think it's worthy of a lot of discussion, but this is just an example of one pattern that is enabled. There are a lot of other patterns, because kcp gives you that ability: with the controller manager plus the minimalistic API server, it pulls CRDs and it is lightweight, so more and more of these patterns get enabled. So this is the scheduling part, and this is what the topic of my demo will be.
B: So can I actually auto-scale actual clusters? And the answer is yes. So, for example, in that case, let's go through the HostedCluster path again: a user creates a HostedCluster, but there's only one management cluster and it doesn't have resources, so it informs kcp, or the controller manager, which then asks to create — using HyperShift patterns — a hosted cluster. The difference here is that a HyperShift operator would then be deployed on the upper layer — the management layer, or the control center — and it basically takes care of auto-scaling and creating a new management cluster, which then brings us back to the original use case.
B: Because now we have two management clusters: I create a HostedCluster resource, these two management clusters will report their budgets, the controller manager acts as the scheduler and assigns one of them, and then one of them picks the resource and deploys it. So I know there are many things that might be unclear at the moment; hopefully the demo will clear things up a bit. But yeah, that's basically the conceptual part. Then we could —
A: We can see it, and I think the font is pretty good — I can read it pretty clearly here — so you're good to go. Cool.
B: Okay, so I have tried to label the panes according to the architecture that we just talked about. You have the kcp server, then you have the controller manager, and then you have the control center — that's the virtual cluster that we're going to be talking through — and then we have management cluster one and management cluster two. And management cluster one and management cluster two are simply kind clusters.
B: So you see here management cluster one and management cluster two, and I am currently pointed at the kcp cluster — this is the API server and the controller manager. So if I get API resources, this is something unique that you'll not find anywhere else: you're going to find only this set of resources.
B: This tells you — here in the kcp-pointed cluster, for example — there are no Endpoints, there are no EndpointSlices; many things are not there, only what is really needed as a base, plus CRDs. And from the CRDs, it is important to see hypershift here: that's a CRD that represents the HostedCluster resource we talked about. And additionally, there is one namespace, also called hypershift, where the HyperShift resources get created.
B: So that's one thing. The other thing here is the controller manager, and the controller manager is watching for stuff; as you can see here, it's reporting budgets for the two clusters. I'm going to explain that now, but before that, let me do something that I have not done yet, to make sure we're starting clean — so let's delete that now.
B: And by the way, "k" is just kubectl, in case I'm lazy. All right. So in management cluster one, we're going to find that there was a new syncer deployed, and management cluster two also has a new syncer. All right. And now one thing that we need to clear up is the clusters. So what I have — oh, you'll not see it from there, okay, because of what I'm sharing.
B: So these are the resources that I will create. The first one is called Cluster — this is a kcp-specific resource — so that will get deployed on the control center. It is already deployed, by the way, but I'm just showing it. And the secrets here are fine, because these are kind clusters; I'm going to delete them after the demo.
B: So you see kind-management-2 and kind-management-1; these are the two clusters, and what I'm doing here is basically telling kcp: hey, ingest — I'm defining these two clusters, so be aware of them. That's what this means. Cool. And later on, I'll be creating a HostedCluster, and these are just demos — for example, if I look at the HostedCluster resource —
B: The syncer here is telling me that it is aware of the HostedCluster and Cluster resources, and it is setting up informers — basically the controller pattern — on HyperShift clusters and Clusters, for both the guest (or virtual) cluster and the local cluster, and then it said "updated budget."
B: What does this mean? It means that it told the virtual control center how many namespaces it has — that's the definition of a budget here. So here I have, let's see, nine — and if I remove the headers, eight: I have eight namespaces in that cluster. And if I repeat the same command —
B: I also have eight namespaces here, so the budget should be eight. So the controller manager here: okay, management one has eight; the budget is eight for management one — eight namespaces, meaning theoretically I have eight clusters. Let's think of it that way. And management two also has a cluster, so both of them now have equal resources, so when I create a HostedCluster resource, I could basically choose either one.
B: So when I create a HostedCluster resource, what I need is a controller on the management clusters — an actual controller — because at the virtual cluster layer I don't have a controller; I literally don't have any pods. So no actual control happens on the virtual cluster; in this case, it is more of a proxy. So the actual controller will be in the hypershift namespace; that's where the HyperShift operator basically lives and acts on resources.
B: So, for example, I create hosted cluster w1 on the virtual cluster — that got created. Let's see what happened. Okay, so we see the budget: the controller manager was aware that the budget was nine. That means one of these two clusters scheduled the HyperShift resource. So in that case, let's get that name — let's first look at the logs of the syncer. What did it say? The syncer said "has owner annotation."
B: It does not have the owner annotation, but it has a cluster ID, and it updated itself because it saw its name on the cluster resource. That's something I forgot to show, sorry. Let's see — so the cluster controller manager will update the HostedCluster resource with the owner. First it is aware — it's picking up the budgets — then it sees which one has the smaller budget (and in this case both were equal), and it updates the actual resource that is to be deployed.
B: In kcp, the owner got updated to be management-2, not management-1, so I should not find the namespace here; I should find it on management-2. See — there was no cluster namespace here, only the hypershift namespace.
B: But here, if I look at the namespaces, I should expect — yes, there was an additional namespace that got created, and that should represent the cluster. And if I look at the syncer logs, it says, "Oh, looks like I'm the cluster guardian, provisioning in a second." So it recognized that it is the owner of this by watching the cluster resource — the HostedCluster — and started replicating that to the local cluster for the HyperShift operator to act upon.
B: The namespace here is empty because I literally didn't define anything in the resource, so nothing got scheduled, but it provisioned the namespace, which represents a cluster. Usually, if we were demoing HyperShift proper, that cluster — the namespace — would contain the control plane components. So yeah. Now I create a new resource — it chose management-2 because management one and two had equal budgets, they both have eight namespaces, so it chose randomly — now I create another one: hosted cluster dummy.
B: Nothing yet — ah, this is all from three minutes ago. I could check again: 18 seconds — so the cluster got scheduled to the management cluster that had fewer namespaces, and thus more resources. So that basically shows that with minimal effort, I was able to apply scheduling mechanisms and scheduling primitives at the cluster layer, and I could do a lot more: I could do auto-scaling; I could do almost anything.
B: Basically anything. And as I said, the relationship between kcp and HyperShift is not one-to-one, and the reason I haven't shown anything related to HyperShift itself is because this is more of a pattern. Any controller could literally use the same thing that I did here. You could apply it to, I don't know, an etcd resource that follows the same pattern and gets scheduled to the cluster that has the controllers on the back end — so, scheduling at the cluster layer. And yeah, that's basically the demo.
A: All right. Well, we have one question from Michael, who is asking if we can leverage kcp to write a parser to split a Kubernetes app/svc/deployment across two distinct Kubernetes clusters.
B: So, as I said — of course. kcp is just acting as a proxy, and something like the splitter pattern here — so let me — yeah, the splitter pattern: if we look at the repo, the splitter looks at the deployment, for example, looks at the replicas, and it has awareness of the clusters that it ingested, so it could split, for example, a service or a deployment across two different clusters. So that is also possible.
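A replica-splitting decision like the one described can be sketched like this. It's a toy version under my own assumptions — the actual splitter in the kcp repo rewrites Deployment objects per target cluster:

```python
# Sketch of the splitter pattern: divide a Deployment's replica
# count across the clusters kcp has ingested, as evenly as possible.

def split_replicas(total, clusters):
    """Return {cluster: replica_count} whose values sum to `total`."""
    base, extra = divmod(total, len(clusters))
    # The first `extra` clusters each take one additional replica.
    return {c: base + (1 if i < extra else 0)
            for i, c in enumerate(clusters)}

shares = split_replicas(5, ["cluster-a", "cluster-b"])
assert shares == {"cluster-a": 3, "cluster-b": 2}
assert sum(shares.values()) == 5
```

The invariant that matters is that the per-cluster counts always sum back to the replicas requested at the virtual layer — the "lossless" property again, just for a spread-out resource.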
A: All clear here — okay, it's all clear, and all the other ones too. I think what's really interesting to me about this whole use case you're describing is the applicability to so many other use cases. And you know, we're Red Hat and we're all OpenShifters, so HyperShift is in our bailiwick, but it really bodes well, I think, for the concepts behind kcp and applying them across the board, regardless of what the use case is.
A: So — the slide that you had earlier, with how to get in touch with the kcp community: I think that's probably where you want people to go to continue the conversation, the kcp-prototype one — or is there another place where you would like people to reach out to you and talk to you about this topic?
B: Yeah, so there are two things that I briefly talked about here, right? If you look at that layered architecture, the first thing is the kcp bit, which is the multi-cluster bit, and for that you can go and talk to folks like Clayton, David, and Jason about the use cases they're discussing every day — or every week.
B: Sorry. And there's another place, which is also very interesting, which is HyperShift — which is basically this pattern of decoupling the control plane and the workers, or the management and the workers, and deploying OpenShift in a more logically-centralized, cheaper, faster way. But again, as I said, it is complementary to the existing pattern that we have today.
B: It just gives users the chance to have that externalized control plane pattern, to save costs and to do all these things. But there's a GitHub repo there, and your contributions are very welcome. We don't have a Slack channel, unfortunately, but that's another place that I would point people to.
A: So if you have questions about HyperShift itself: also in the Kubernetes Slack, there is an openshift-dev and an openshift-users channel that you can pop into and ask questions as well. So I think this is really — have you seen this pattern at all in production, or is it still a theoretical, POC kind of thing?
B: So kcp itself — if I talk about kcp and HyperShift separately — kcp itself is very unique in certain aspects, in what it tries to do: you have a minimalistic API server — as I said, if I look at the cluster, I immediately recognize it's kcp; I don't see this anywhere else — and you have that transparent multi-cluster use case and the stronger multi-tenancy, where you can deploy resources and they get translated.
B: There are efforts like federation and so on that tackle that, but not from the same angle; kcp has the stronger focus on multi-cluster and transparency, or lossless multi-cluster. On the HyperShift side, as I said, that pattern is not new — there are many providers that give you that separated control plane and workers — and with HyperShift we're bringing that pattern, and all the goodies and benefits of that pattern, to OpenShift.
B: So you could have OpenShift clusters following that pattern. So I would say it's not new, but it's new with OpenShift, because you get the bonus of features, and it covers more use cases, and you can then mix and match — like protocols in the OSI layers, but with the OpenShift interconnection model instead. You have all these layers and stacks, and whatever use case you have, we have the luxury to say: you can pick.
B: Yeah, okay — you could follow me on Twitter, or you could reach me on Slack, but yeah, I could add it to the slide deck. Perfect.
A: All right. Well, first of all, thank you very much for taking the time to do this today. I know you're all really busy with the 4.8 release and everything else that's going out the door in the next few months. It really helps to set the playing field here for where these use cases fit and how the different pieces and parts of this stack work together.
A: So thank you very much for taking the time today. I don't see any other questions in the chat, so I'm going to give people a few seconds here before I close it out. And Michael, thank you for your question — if you have other ones, just reach out and ask us in the Slack, and we'll be hanging out there, or on Twitter, where we also hang out too, but it's much better to have a threaded conversation in the Slack channel.
A
I
think
these
days,
so
I'm
not
seeing
any
other
questions
coming
in
so
adele,
I'm
going
to
give
you
a
huge
shout
out
on
the
internets
later
today
and
we'll
upload
this
video
and
thanks
to
chris
short
for
making
the
production
happen
today
and
we'll
call
it
a
wrap
and
we'll
have
you
back
with
each
new
release.
A: I think we'll get you back to tell us how this goes, and I'll share this with the kcp-prototype channel once it's up too, because I think that'll be a good, interesting place for people to give you feedback.
B: Thank you — thank you for hosting. It was really fun, and I will be back shortly with another topic soon.