From YouTube: Kubernetes SIG Cluster Lifecycle 20181212 - Cluster API
Description
In Person HA Enablement Meeting:
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit?usp=sharing
B: So basically, up until now we've talked at a super high level about how we do HA with Cluster API, and I think we've finally got to the point where we can start hashing it out. A lot of the groundwork we've already done has been wrapped up, and now, at least for the AWS provider, we're struggling with implementing HA. The current way we would have to do it would be to basically build a new custom workflow outside that replicates a lot of the cluster workflow but then just continues to add additional control-plane instances, which is very error-prone and very tedious. So we wanted to start the discussion about how we, like we discussed in the past, start making the control plane part of the actual Cluster object itself, instead of having it be created externally. I know we've discussed it in the past; there have been a few different ideas and concepts for how this can work.

Now, I don't expect that we'll all fully agree by the end of today on one single path forward, but I'm hoping we can get through enough of hashing out these designs to at least be able to walk away from here and start putting together some concrete proposals, and then we can iterate from there. So, to start with, would you like to go into the Gardener model?
C: Yeah, I mean, we currently run the control plane as a single instance out of the box, and we don't currently support HA. Even if etcd goes down — if it goes down for an acceptable amount of time, by the time it's powered back up, yeah, it is some downtime, but it comes back up quickly.

C: And it's scheduled in a more stable way than on a plain virtual machine which, if the virtual machine crashes or runs out of memory or so on — it was better than trying to run it directly on the VM or on bare metal.
A
To
make
it
stop
yesterday,
but
if
I
didn't
worth
talk
about
what
we
buy,
a
Jakes
I
think
you're
gonna
have
a
small
pastor,
which
is
different
than
necessarily
the
same
is
highly
available.
We
want
to
explicitly
don't
master
it,
that's
fine.
If
we
want
to
try
and
say
20
BHA,
we
should
think
what
burner
does
like.
They
would
probably
call
highly
available.
C
I
mean
even
if
you're
careful,
even
if
you're
smart
and
you
ruin
your
control,
major
right
just
you
could
actually
just
directly
talk
to
get,
and
you
know
that
your
control
manager
is
down.
You
can
or
less.
You
know
in
what
case
directly
talk
for
the
API
server
and
then
remove
like
this
complete
map,
which
is
small
as
you
use
called
the
locking,
and
then
you
can
book
your
confirm,
Ranger
faster.
So.
B: That actually brings up a point about the pivoting control plane — I can't remember if it was the other day — where we could provide, basically, at least on AWS or some of the other cloud providers, the concept of a persistent machine that will recover from failure.
B: The downside is that for a lot of people who are coming to Kubernetes new and are looking to adopt it, HA is sort of a checkbox that they're looking for. So it's almost a challenge of: do we lose adoption by providing an HA capability without also providing multiple control-plane instances?

A: I agree that there's an obvious side to this. I wanted to ask Quentin specifically about that — we talked about it around the time it was released. I was curious: as more and more applications adopt the operator model, does the resiliency of the API in general become more important, and do we need to have shorter and shorter outage windows? I mean, is it possible we reach a point where even being out for five minutes is not acceptable for applications?
A
A
A
Whatever
the
number
was
the
time
like,
why
should
you
care
right?
Why
do
you
care
whether
or
not
it's
replicated,
because
it
really
is
a
replication
I,
think
that
was
something
we
try
to
spend
some
time
at
use
it
on
some
users
actually
have
requirements
non-acidic
for
availability,
but
for
resiliency
in
terms
of
your
writing.
A
So
that's
where
you're
like
a
regional
cluster
yesterday
regional
clusters
are
replicated
across
for
a
lot
of
customers,
that's
actually
more
important
than
right,
and
so
a
lot
of
you
wants
a
replication
so
that
a
zone
can
go
down
supporting,
not
because
they
need
to
be
able
to
make
right.
So
that's
that's
a
sort
of
another
reason:
the
girl
interface.
You
might
be
able
to
get
that
by
just
quickly.
F: But one case for etcd: if you lose a single instance — even though you can bring your node back — I mean, if it's a single instance of etcd, I think it's worth caring about whether we have any cases where the single instance goes down and then you get into a situation where you can't bring back etcd, or something like that.

F: That's a case where it might be difficult. Even though you can schedule a pod manually — or start it manually, for that matter — you might not be able to recover your entire control plane. So that's probably one case where I can see you might want to have at least three replicas of etcd. You can probably get away with just one API server, but from the etcd perspective I think you probably need at least a couple of copies of that, just to be safe.
B: And I think there's a big differentiation depending on how large the clusters you're managing with this are, and also what types of workloads you're running on them. If you're managing workloads that can be spread out across multiple clusters and things like that, there's much less concern; whereas if you're running larger workloads that are accessing more persistent data as well, that's where having that additional multi-control-plane availability on the child cluster makes a lot of sense.
J: All to say, I think the question is: is it clear to everyone who is building operators that the master can go down for a certain time, and whether their operator then immediately goes crazy, or whether it knows, okay, I must handle these situations? I think this should also somehow be addressed in the guides for building operators — they must keep in mind that it's not a given that the master is highly available, and you cannot assume the API is there and working at all times.
B: Well, that opens up some interesting opportunities as well, because right now we have to specially handle the control-plane instances, with etcd also co-located; whereas if we separated etcd out, that gives us the ability to treat those control-plane instances much more like we currently treat the worker nodes, where a rolling upgrade could be: just tear down the control plane and bring up new control-plane instances.
B: It makes sense because, when you start looking at how different users and different providers can implement things, you can have multiple permutations of roles — you can even have different sub-roles of worker nodes and things like that — and trying to hard-code all of that up into the common objects just doesn't work.
B: If you have the cluster-manager cluster, and you're looking at potentially hosting, you could potentially use that to host it. But if you're looking at multi-provider, or multi-region, or any management like that, you're then separating your control plane across a large geographical region from your workers especially, and I think sort of —
C
Of
does
this,
you
know
so,
let's
have
like
for
AWS
from
Europe.
We
have
one
custom,
for
example
in
Europe
Eastern
or
your
West,
and
then
political
I
mean
for
everything
from
providers
we
have
like
cardinality
me
like
provider
and
then
the
zone.
So
if
I
can
do
this
thing,
so
you
would
have
like
a
different.
What
does
it
see
my
cluster
yeah
yeah,
so
on
one
axis,
you
can
providers
and.
H: Okay, so basically, we started off with manually created Kubernetes clusters, and from there on we have written — it's now an API server extension — Gardener, and a Gardener controller manager that basically listens for custom resources, the cluster definition. So we have our own spec where we describe: okay, this should use cloud provider AWS or GCP or whatnot, networking, DNS, all these things. The controller manager then basically knows of the seed clusters it has created.
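(For illustration: the kind of custom resource H describes might look roughly like the following. This is a simplified sketch from the discussion, not the exact Gardener "Shoot" schema — the API group, field names, and values are illustrative only.)

```yaml
# Hypothetical, simplified sketch of a Gardener-style cluster ("Shoot") resource.
# Field names and the API group are illustrative, not the exact Gardener API.
apiVersion: garden.example.io/v1beta1
kind: Shoot
metadata:
  name: my-cluster
  namespace: garden-myteam
spec:
  cloud:
    profile: aws            # which cloud profile (infrastructure) to use
    region: eu-west-1
  kubernetes:
    version: "1.13.1"       # desired control-plane/kubelet version
  networking:
    podCIDR: 100.96.0.0/11
    serviceCIDR: 100.64.0.0/13
  dns:
    domain: my-cluster.example.com
```

A controller manager watching such resources would, as described next, pick a seed cluster with capacity in the requested region and reconcile the shoot's control plane there.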
H: So basically, Gardener has created all the seed clusters automatically, and then the scheduling part decides: wait a minute, you want a cluster, say, to run on GCP in this region — well, then it looks for the seed clusters that are there. Maybe there are multiple ones; then it takes the one with the most capacity and schedules the control plane there. Actually, more things happen underneath: basically, it first starts to create all the networking and infrastructure it needs.
H: It then works against the shoot API server and waits again for these components to be ready, for the nodes to be able to join, and it deploys the rest into kube-system. And then you have your cluster. Now, the Achilles heel of all of that was, in the beginning, the seed clusters: somebody had to set them up manually, as a separate project. So in the beginning it was relatively easy.
H: Okay — the seed clusters: why not let Gardener create them the same way? So seeds are basically shoot clusters, and that's the current state. What is left is still this root cluster, and for that we have basically done multiple POCs and we know how to do it; we now need to codify it. I don't know if we should talk about that one — basically, we had a longer exchange, Robert and we, about the whole setup. But in the end, we have set that up multiple times by hand: it's basically a ring of shoot clusters.
H
You
bootstrap,
with
whatever
cost
that
you
have
can
be
a
mini
cube,
doesn't
matter
you
deploy
their
government,
you
create
three
more
clusters,
then
you
migrate,
the
control
plane,
so
pathetically
one
cluster
has
the
control
the
other
inner
ring
you
place
garden
on
top
of
it
so
easily
set
it's
it's
harder
done,
but
when
you
have
it
there,
basically,
if
the
cluster
face
doesn't
matter,
there's
still
two
clusters
there.
That
means
the
HCP
that
is
fittest.
There's
a
net
city
clusters
that
runs
on
all
these
free
clusters.
H
These
clusters
can
be
different
infrastructure,
doesn't
matter,
it's
a
latency
issue,
of
course,
in
the
end,
but
gamma
is
isn't
doing
much
gardna
is
basically
pushing
most
of
its
work
into
the
seed
clusters
so
garden.
I
itself
is
basically
only
the
watchdog
that
basically
pushes
information
into
the
sea
classes.
The
stuff
happens
there,
it
reconciles
information
back
so
that
it
knows.
Okay.
This
is
this.
This
is
the
status
of
the
whole
thing.
This
is
what
I've,
created
and
yeah,
so
this
this
ring
there
for
Perks.
H
If
one
part
gets
cut
off
cut
off,
everything
will
be
eventually,
then,
by
gardener
Rica
reconstructed
I
mean
there
are
multiple
layers
in
between
usually
cognizant
even
needed,
since
they
schedule
and
kuba
neat
is
that
that's
the
nice
thing
edge
before
most
problems,
kubernetes
tags
already
care,
and
only
if
something
really
breaks
networking
something.
Sometimes
we
destroy
something.
So
we
have
seen
that
people
sitting
around
with
security
groups
or
whatnot
in
the
next
reconciliation
face
of
God
might
will
be
repaired
again
and
that's
what
we
actually
want.
H
We
do
not
want
to
use
different
tools
and
different
kubernetes
services
to
get
to
this
root
cluster.
We
have
done
that
already
for
the
seat
clusters.
These
are
short
classes.
If
we
want
to
do
therefore
the
root
cluster
as
well,
that's
basically
the
idea
I
would
have
all
the
slides.
But
this
is
not
I
am
a
bit
surprised.
I
mean
we
are
not
trying
to
sell
your
seems.
They
was
also
topic,
but
we
are
very
much
interested
in
the
cluster
API
specification.
H: So, okay — these are the cloud profiles, and there you can describe which infrastructure; basically, only the infrastructures that are currently supported in-tree can be configured — so AWS, GCP, Azure, and OpenStack; Alibaba is coming soon; Packet has started adopting it — so sure, if anybody's here from those. And then you basically create the cloud profile.
H: You specify the machine types, the regions you want to have, the zones you want — so you're expressing it there — and then in the configuration of Gardener itself you say... this is only the configuration of what shall be supported, also advertised in the dashboard, and you can narrow it down further with the concrete configuration. Basically, the cloud profiles are the top level. So we very much like to use everything we can get from Kubernetes, every single bit; we don't want to develop that stuff ourselves.
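(For illustration: a cloud profile of the kind H describes — declaring the machine types, regions, zones, and versions an infrastructure offers — might be sketched like this. Field names are a simplified assumption, not the exact Gardener schema.)

```yaml
# Hypothetical sketch of a cloud profile: what an infrastructure offers.
# Field names and the API group are illustrative, not the exact schema.
apiVersion: garden.example.io/v1beta1
kind: CloudProfile
metadata:
  name: aws
spec:
  aws:
    constraints:
      machineTypes:
        - name: m4.large
          cpu: "2"
          memory: 8Gi
      zones:
        - region: eu-west-1
          names: [eu-west-1a, eu-west-1b, eu-west-1c]
      kubernetes:
        versions: ["1.12.3", "1.13.1"]
```

Individual shoot specs then pick from what the profile advertises, narrowing the profile down to a concrete configuration.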
H
That
was
the
design
principle
all
the
time
use.
What's
there
therefore
I
had
this
heretic
question,
because
we
know
there
are
certain
limitations,
but
it
was
our
strategy
to
say
we
don't
want
to
solve
these
problems
differently.
I
think
kubernetes
will
develop
and
that's
the
reason
why
being
completely
bet
on
kubernetes
everything
runs
there.
So
we
do
not
use
another
technology
before
that.
There
are
two
years
of
experience,
also
with
Cloud
Foundry
managed
by
portion.
That
was
always
a
pain
to
to
basically
use
one
tool.
I
mean
bush
is
a
very
good
tool.
H
Don't
get
me
wrong,
but
still
using
one
tool
to
manage
some
completely
other
technology
and
with
kubernetes.
We
wanted
to
get
rid
of
this
split.
So
we
are.
We
are
now
also
knowing
very
much
what
we
provision
to
our
users
and
that
feels
good,
so
eat
your
own
dog
food
on
that
level.
It's
really
perfect.
B
The
biggest
question
for
me,
then,
is
you
know
if
you
have
you
know
the
two
models
aren't
very
dissimilar
and
really
it's
just
a
matter
of
where
the
control
plane
is
hosted,
and
you
know
you
know
how
many
kind
of
kubernetes
clusters
are
involved.
You
know
to
completely
reconcile.
You
know
these
objects,
you
know
in
the
gardener
model,
you
know
that
you
have
the
root
cluster,
which
is
feeding
down
to
the
chute
cluster
and
the
information
is
feeding
through
there
before
you
actually
start
provisioning.
B
B
The
one
benefit
that
we've
seen
so
far
to
that
is
it.
You
know
the
bootstrapping
process
seems
to
you
know,
give
you
a
way
to
get
that
control
plane
without
having
an
existing
control
plane
so
you're
using
the
same
process
to
who
trap
the
initial
cluster.
You
know
even
the
bootstrap
cluster
versus
you
know
a
follow-on
cluster.
H
We
don't
really
need
to
distinguish
it,
but
we
need
a
place
to
deploy
gardener
to.
So
the
only
purpose
of
that
cluster
is
to
host
gardener
in
the
gardener
rink.
We
basically
have
kind
of
an
odorless
cluster.
We
we
have
multiple.
So
if
you
imagine
you
have
this
ring
and
then
they
are
basically
running
at
cities
on
all
of
the
all
of
the
three
clusters.
Api
servers.
There
are
no
notes,
that's
kind
of
a
virtual
note
list
cluster.
H
We
need
it
only
basically
because
godness
an
API
server
extension,
that's
also
meet
this
place
and
how
we
come
to
it
and
where
we
run
it,
we
don't
care,
but
we
prefer-
and
therefore
we
have
the
ring
to,
of
course,
use
use
our
own
clusters.
We
can
because
then
we
can
also
spread
them
even
across
infrastructures,
a
set
garden
as
a
as
a
model
that
doesn't
do
much
too
much
intensity.
So
it
is
not
I
mean
we
don't
know
we
have.
H: To be exact: you also have one cluster that you can use kind of as a seed underneath the garden cluster, once we have that set up as well. We also have smaller infrastructures, smaller data centers, and isolated ones — these run completely isolated from the rest, and therefore they need another Gardener there. That one cluster is manually set up right now with our Terraform tooling. You don't want that — you want the ring everywhere.
H
You
want
basically
to
to
codify
that
that
we
can
press
click
a
button
and
then
we
from
a
mini-cooper,
whatever
be
bootstrapped
I
drink,
and
then
it
can
run
also
in
isolated
environments,
and
then
you
can
use
the
same
cluster
also
SSE
cluster
for
all
the
other
control
plants.
Of
course,
if
you
scale,
then
you
will
need
eventually
than
more
seed
clusters,
but
they
are
all
managed
by
garden.
H
It's
basically
only
configuration
you
tell
garden,
ok,
I
want
to
be
in
these
regions
or
whatnot,
and
then
it
creates
the
c-class
SS
fruit
clusters
and
it's
it's
basically
very
simple
deployment.
The
only
thing
that
so
we
have
two
things
that
we
are
working
or
three
things
that
they're
working
on
very
extensively.
H
One
is
right
now,
it's
all
in
sauce
tree,
so
we
started
basically
send
via
scope
and
it
just
we
knew
it
was
basically
to
deliver
value
first,
we
did
that
and
we
knew
we
had
to
change
that
so
right
now
we
are
doing
out
of
tree
so
that
we
that
we
can
get
this,
because
this
is
now
picking
up
so
more
infrastructures.
We
don't
want
to
have
all
the
code
in
our
code
base.
H
We
need
to
get
it
out,
so
this
is
what
happens
currently
then
the
then
the
ring
this
is
about
to
be
codified,
so
we
need
specific,
additional
components.
We
have
written
already,
most
of
them,
and
now
we
will
start
with
the
actual
bootstrapping
logic
to
codify
that
and
the
third
one
is
something
can
relate
the
tool
to
maybe
this
discussion,
it's
about
auto
scaling
of
the
control
and
there's
a
half
problem
so
or
no
ten
notes.
100
nodes
is
okay
thousand
nodes.
We
cannot
do.
We
need
to
get
a
lot
better
in
that
area.
H: Backups go to a blob store, basically — beside etcd there's a sidecar, as was mentioned, specifically a sidecar for etcd — and you can pick the same infrastructure, which we usually do, in configuration. So we use OpenStack Swift, or S3, or whatever object store the infrastructure offers — on the different infrastructures we use the different blob stores — and we run full snapshots and then incremental snapshots. And should we decide to migrate a control plane, the data is basically already there. You can basically go to such a seed cluster — I could even show it to you.
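(For illustration: the pattern H describes — a single-member etcd per shoot control plane in the seed, with a backup sidecar streaming full and incremental snapshots to an object store — might be sketched roughly like this. Image names, flags, and the namespace convention are illustrative assumptions, not the exact Gardener manifests.)

```yaml
# Hypothetical sketch of the etcd-with-backup-sidecar pattern described above.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: etcd-main
  namespace: shoot--myteam--my-cluster    # one namespace per shoot control plane
spec:
  serviceName: etcd-main
  replicas: 1                             # single member, recovered from backup
  selector:
    matchLabels: {app: etcd-main}
  template:
    metadata:
      labels: {app: etcd-main}
    spec:
      containers:
        - name: etcd
          image: quay.io/coreos/etcd:v3.3.12
          command: ["etcd", "--data-dir=/var/etcd/data"]
          volumeMounts:
            - {name: data, mountPath: /var/etcd/data}
        - name: backup-restore            # sidecar: snapshots to the blob store,
          image: example/etcd-backup-restore:latest   # restores on data loss
          args:
            - --storage-provider=S3       # or Swift/GCS/ABS per infrastructure
            - --schedule=0 */1 * * *      # periodic full snapshots
            - --delta-snapshot-period-seconds=300     # incremental snapshots
          volumeMounts:
            - {name: data, mountPath: /var/etcd/data}
  volumeClaimTemplates:
    - metadata: {name: data}
      spec:
        accessModes: [ReadWriteOnce]
        resources: {requests: {storage: 10Gi}}
```

On restart with an empty or corrupt data directory, the sidecar restores the latest snapshot before etcd serves traffic — which is the fully automatic recovery H demonstrates next.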
H: You can delete the PD — doesn't matter. etcd recovers immediately; it sees: wait a minute, there is no etcd data anymore — is there a backup? There's a backup; it gets restored, it recovers fully automatically. I would not even touch anything, and this is actually our standard demo. The easy case is destroying the API server — there we have nothing to do; Kubernetes recovers it, that's already easy. And the etcd one is also nice: you just delete it, or you just break it — you corrupt it — and the recovery kicks in.
H
We
only
cover
basically,
so
what
what
is,
what
is
kids,
it
city,
persistence
and
the
seed
cluster
state?
They
run
only
control
plus
control
planes
and
the
control
trends
of
these
control
plants
from
the
same
paradigm.
That
means
they
have
the
same
sidecar.
So,
even
though
we
are
just
running
a
backup
for
the
control
plane,
since
we
know
in
our
seed
clusters
run
only
control
points
of
truth
clusters
and
they
have
already
a
backup
doesn't
matter.
We
can
lose
everything
and
basically
the
seed
cluster
gets
recreated.
H
Today
there
is
in
the
Bob
store,
so
it
comes
back.
Then
the
information
is
there
that
that
what
is
scheduled
there,
the
control
points
of
the
other
clusters,
s
Potts,
regular
kubernetes
resources
and
they
get
then
recreated
and
then,
of
course,
the
H
citty
jumps
in
again
and
says:
oh
wait,
a
minute,
where's
my
PV,
and
then
it
goes
to
its
own
Lobster
to
fetch
its
backup.
So
everything
can
be
recreated.
We
have
never
lost
a
single
cluster
I.
H: Exactly, therefore — yeah, an interesting mechanism from very early on; we may not lose any data. We have only one etcd member, because we really wanted to stick to Kubernetes principles. We really wanted to avoid multi-member etcd clusters, because there's so much trouble there; we didn't want to have that additional replication of data. So that was a bet.
B
H
Knows
it
both
in
the
beginning,
so
we
had
used
calico
and
we
found
out.
Oh,
it
wants
to
basically
talk
to
a
CD
which
is
a
no-no
in
our
use
case,
but
luckily
there
was
a
configuration
switch
to
theirs.
If
we
talk
to
the
API
server
and
that's
what
we
are
using.
Yes,
it
limits
us
a
calico
is
fine,
but
we
are
very
happy
actually
with
a
choice
right
now
and
we
oppose
a
network
policies
and
overall,
it
seems
quite
a
good,
balanced.
B
Solution
well,
I'm,
talking
more
generally
about
Sdn
solutions
that
are
doing
like
the
X
line
between
the
nodes
and
I
know
in
the
past,
I
had
seen
if
the
instance
in
the
control
plane
isn't
part
of
that
index.
Lan
then
certain
features
like
you
know
exactly
and
2/3
exacting
into
a
pod,
or
you
know,
proxying
those
things
failed.
H
Yeah,
it
is
a
limiting
factor
in
the
beginning,
for
you
be
quit.
For
example,
knots
power,
I'll,
spend
chute
cluster,
let's
say
control
plane
on
naval
8
over
years
and
the
shoe
cluster,
never
as
an
example
that
didn't
didn't
work
out
because
of
limitations
and
implementation.
But
now
this
all
works,
but
still
we
have
the
limitation.
So
we
have
also
the
the
problem
wise
versa,
so
API
server
needs
to
talk
to
to
the
to
the
to
the
bots
and
services.
So
we
have
a
VPN
connection
from
the
control
plane
to
the
to
the
shoot
cluster.
H
So
as
a
sidecar
to
the
API
server
as
a
sidecar
to
Prometheus,
that's
run
in
the
in
the
namespace
for
the
truth,
cluster
in
the
C
cluster.
You
have
this
VPN
connection
and
talks
to
again
standard
primitives.
Basically
a
VPN
load
balancer
in
the
ship
cluster
would
there
be
need
to
I
mean
it
depends.
If
it's
only
about
connectivity,
we
can
certainly
find
a
way.
We
don't
have
one
out
of
the
box,
because
right
now
it's
only
got
a
code
that
is
supporting
it
to
talk
to
the
API
server,
but
you're
totally
right.
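(For illustration: the seed-to-shoot VPN pattern H describes might be sketched like this — a VPN sidecar next to the shoot's API server in the seed, dialing an endpoint exposed via a LoadBalancer service in the shoot. All names, images, and ports here are illustrative assumptions, not the actual Gardener manifests.)

```yaml
# Hypothetical sketch of the seed-to-shoot VPN pattern described above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kube-apiserver
  namespace: shoot--myteam--my-cluster   # shoot control plane in the seed
spec:
  replicas: 1
  selector:
    matchLabels: {app: kube-apiserver}
  template:
    metadata:
      labels: {app: kube-apiserver}
    spec:
      containers:
        - name: kube-apiserver
          image: k8s.gcr.io/kube-apiserver:v1.13.1
        - name: vpn-seed                  # tunnels apiserver -> pod/service
          image: example/vpn-seed:latest  # traffic into the shoot's network
          env:
            - name: ENDPOINT              # LB hostname of the vpn-shoot service
              value: vpn.my-cluster.example.com
---
# In the shoot itself, the matching endpoint exposed via a load balancer:
apiVersion: v1
kind: Service
metadata:
  name: vpn-shoot
  namespace: kube-system
spec:
  type: LoadBalancer
  selector: {app: vpn-shoot}
  ports:
    - port: 1194                          # OpenVPN
      targetPort: 1194
```

This is what lets features that assume the API server is on the node network — exec, port-forward, proxying, Prometheus scraping — keep working even though the control plane lives in a different cluster.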
B: The VPN — are you doing something custom within Gardener, or are you leveraging OpenVPN?

H: We create, on the shoot side, an OpenVPN endpoint, and sidecars in the API server pod. We use it for kubectl exec, kubectl port-forward, the port-forward proxy — and Prometheus wants to scrape components in the shoots; we want to have monitoring information from them. We also use it to deploy things into the shoot — of course we need to set up kube-proxy, Calico, etcetera — and we are collecting monitoring information.
H: We reserve a region of excess capacity in the seed clusters so that we can spin up the shoots faster. If the seed cluster is out of capacity, basically, we have the cluster autoscaler — it's based on the machine API — which will kick in and provision new machines if there are no machines available for the other control planes.
M: Yeah, we're working on it — the issue with multi-master, I mean, it's not that much more of a hurdle to overcome. What we're working on right now is making sure that the load-balancer hostname exists while the child nodes are spinning up, so that once you have the load-balancer service with the API server backing it, you can have a fully provisioned cluster. I mean, yeah, we're trying multi-master as we go.
B
It
almost
seems
to
me
like
there
might
be
kind
of
an
inflection
point.
You
know
if
you
are
trying
to
manage.
You
know
just
a
certain
number
of
clusters
and
you're
not
gonna
have
multiple
clusters,
you
know
per
region
and
provider
curve
availability
zone.
You
know
having
a
seed
cluster
for
each
of
those
just
a
matter.
B
Just
single
cluster
seems
a
bit
overkill
for
that
scenario,
but
if
you're
going
to
spin
up
a
lot
of
clusters
in
each
of
those
areas,
it
seems
like
you
know,
there
seems
to
be
a
tipping
point,
for
you
know,
are
using
more
spending
more
resources
on
the
seed
clusters
over
it.
You
know
the
single
course
you're.
H
Absolutely
right,
it
was
again.
Basically
he
had
a
softer
multi-tenancy
problem.
We
cannot
solve
that
in
kubernetes,
I
mean
bees
are
not
named,
Stacey
release
on
of
names
etc.
So,
and
he
knew
there
multiple
lines
of
businesses
and
not
corporations,
and
we
need
somehow
to
give
them
isolated
clusters
of
it.
It's
only
a
solution
for
mass
creation
or
mass
management
of
clusters
ever
mass
is
in
definition,
but
it's
it's
suddenly
too
much
for
one.
A: "How to create a new machine" — well, that's a much lower bar than what I have: creating a machine that could be an etcd machine, could be a control-plane machine, could be part of an HA control plane. Having all those different ways to initialize it was much more difficult, as opposed to: create a machine, make sure Docker is there, make sure it's got a kubelet registered, and bootstrap the components.
H
Is
the
C
cluster
is
a
true
trust,
so
that's
not
the
problem
but
you're
right.
So
there's
the
other
parts
of
Machine
API
so
far
covers
basically
a
very
thin
API
of
creating
interesting
machines
that
that
is
true.
It
makes
it
very
easy
to
get
support
for
that,
but
then
there
is
also
the
networking,
but
I
think
one
point
not
mentioned
and
why
we
actually
also
did
that
and
by
then
even
a
few
clusters.
Rnd
in
our
case
is
basically
providing
homogeneous
clusters
across
all
these
providers.
H
I,
don't
know
where
you
are
going
to
drive
cluster
API
to,
but
from
our
point
that
is.
That
is
the
reason
why
I
do
actually
started
the
gardener
project.
He
wanted
homogeneous
cluster
so
with
with
the
shooter
so
that
that
create
you
basically
can
configure.
Let's
say:
open
ID
connect,
random
configuration
feature
flex
of
the
API
server
controller
man.
H
Maybe
it's
this
coordinators
version
here,
maybe
the
other
one
has
skipped
this
patch
version.
Maybe
they
have
patched
the
patch
version
themselves,
so
we
cannot
really
guarantee
that
the
behavior
will
be
the
same.
So
we
still
can't
completely,
of
course,
networking
that
there
are
they're
hard
things
that
that
are
different
everywhere,
but
at
least
the
components
that
we
can.
In
terms
of
their
point
of
view,
these
we
can
control
and-
and
so
our
strategy
was
not
so
much
to
basically
say.
Okay,
you
can
plug
in
your
community
service.
H
Our
strategy,
or
what
we
wanted
to
achieve
is
actually
on
all
these
infrastructures
to
have
classes
that,
mostly
as
good
as
possible,
feel
the
same
that
provide
the
same
features
that
you
can
configure
and
that'd.
Be
it
bare
metal
kind
of
or
the
the
the
big
hyper
scale
is
so
so
that
is
the
the
key
preposition
proposition
that
that
Gardner
also
does
have
these
homogeneous
clusters
and
manage
thousands
of
them.
So
this
is
this
was
to
go
and
there's
the
project
about
and.
J
For
me,
the
question
for
cluster
API
is
what
what's
the
focus?
I
mean:
yeah,
I,
AMA
sense,
multi-cloud
approach
like
Anna
is
doing
because,
with
permitting,
we
have
quite
a
similar
architecture
and
I
see
exactly
the
same
use
cases
how
to
do
this,
and
like
also
why
it's
easy
to
integrate
a
new
cloud
provider,
because
mostly
you
must
implement
it
in
your
machine
controller
and
the
only
single
Tomales
needless
create
indeed
VMs
on
the
provider.
Also
s,
the
heavy
part
is
managing
the
master
control
plane.
B
Think
what
have
a
question
that
I
have
to
is
with
having
the
seed
cluster?
Does
that
limit
you
to
provide
additional
kind
of
Network
restrictions
between
clusters
that
you're
creating
yes,
so
P
Rangers
may
not
conflict
these
things?
If
you
are
you
doing
like
subnet
restriction
or
full
VB,
is
there
the
ability
to
like
for
BBC
restriction
between
now
I
mean.
H: You can choose. Basically, either you configure it to say: let Gardener create the VPC for you — it creates the VPC, the subnets, all that stuff, and you only configure the CIDR ranges — or you say: I already have a VPC, I want to place many clusters into that one VPC — and then you basically define how the resources look. We also have one customer who is creating one VPC because he needs VPC peering.
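(For illustration: the two options H describes might be expressed in a shoot's network section roughly like this. Field names and the VPC id are illustrative placeholders, not the exact Gardener schema.)

```yaml
# Hypothetical sketch of the two VPC options described above.
# Option 1: let Gardener create the VPC; you only pick the CIDR ranges.
spec:
  cloud:
    aws:
      networks:
        vpc:
          cidr: 10.250.0.0/16
        workers:  [10.250.0.0/19]
        internal: [10.250.112.0/22]
        public:   [10.250.96.0/22]
---
# Option 2: reuse an existing VPC (e.g. for VPC peering). Ranges must not
# conflict with the seed cluster or with other shoots placed in the same VPC.
spec:
  cloud:
    aws:
      networks:
        vpc:
          id: vpc-0123456789abcdef0   # placeholder id
```

The non-conflict constraint is exactly the seed/shoot range rule H explains next.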
H: You have a control plane — what I meant is: it doesn't matter how many hierarchies there are; there's actually only one thing that is important. There's the control plane, which runs in its own environment as we discussed, and there are the nodes running the workload, which are managed by this control plane. In between those there is a VPN from the seed cluster to the shoot, just to access pods and services.
H: Shoot clusters can overlap in their ranges, yeah, right — but not within the same VPC, of course, because you cannot — right, yeah — but you can have one seed cluster. So this is how we need to set it up: the seed cluster gets one VPC range, and this VPC range is forbidden for the shoot clusters.
F
So
because
they're
running
the
control
plane
within
the
seed
cluster,
are
there
any
further
restrictions
that
you
put
on
the
usage
of
that
so,
for
example,
in
users
they
cannot,
for
example,
say,
go
and
you
scream
system
namespace
or
run
some
other,
let's
say
extensions.
If
they
have
it
has
an
application
I'm
building,
something
which
is
again
model.
Has
me
pxn,
Cisco
and
I
run
it
well.
Is
that
going
to
be
possible
yeah?
C: But you want to use — I mean, yeah, there are a lot of solutions. In the best and happiest case, the provider can actually provide you with some IPs with which you can configure a routing interface, and you can optionally attach the route, and then write, like, a static CNI plugin or run a daemon for this job. But this is provider-specific.
A
Yeah,
what
you're
talking
about
wire
guard
is
basically
the
singing
and
effect
like
you're
saying
the
same
result,
which
is
it
guys
ever
will
now
be
able
to
reach
in
bus
trip.
So
as
far
as
I
can
tell,
there
are
three
things
you
Hillary
chose
Bill
to
reach,
nobody's
directly
oddities
directly
and
serviced
at
these
two.
F: But we need some extra component. We suddenly went from an independent cluster, where we didn't need anything special to make it work, to this model where we need one more component. And yes, it means choices as to what we can pick, but there's still one additional component that you need to deploy and manage — so additional management overhead, and if it breaks, expertise is needed; somebody has to look at that. That's maybe common with some providers, and may not be common with others.
F: What I'd like is to keep it simple, essentially. Adding new dependencies — new components that we are kind of self-imposing on ourselves based on our design — is something that we should probably take into account, and choose an additional dependency only if we are really gaining a substantial amount of benefit out of it. If that's the case, then sure — and of course it would be nice if we can have a common way, so that individual providers don't each have to solve it.
B
So
do
we
want
to
go?
You
know,
we've
discussed.
You
know
that
kind
of
model
did
we
want
to
go
more
into
kind
of
the
other
model
at
this
point
and
go
back
to
you
know
talking
through
some
of
the
different
trade-offs
with
you
know,
managing
separating
the
control,
a
management
from
ncb
or
single
single
instance,
control
claim
that
we
provide
you
know
some
type
of
Brazilians
with
provider
utilities
and
versus.
H: I think that cut is important to me. Either you want to configure all the details and all the knobs you want to turn — then it's a very wide API — or you just say: okay, I want a cluster; I want to express certain things, like the Kubernetes version for sure, and maybe I need a provider configuration for the network, this kind of stuff — but you don't have to express the rest.
H
So
basically,
one
implementation
creates
the
clusters,
as
yes
as
a
model
like
with
a
master
cluster,
so
citrus
or
in
our
case
another
one
have
these
self-contained
clusters,
so
both
is
both
is
possible.
Do
you
really
want
to
offer
all
these
options
in
the
cluster
API
I,
don't
know
where
you
where
you
make
the
cut.
This
is
what
I
meant
with
these
questions
arm
in
the
beginning,
I
thought
your
so
you're
more
interested
to
serve
all
of
them,
basically
in
in
a
rather
narrow
API.
H
Let me say: you want to describe how the cluster should look in terms of what the end user will feel. So the Kubernetes version will be something that the user will feel, but whether or not it's a self-hosted one — because he picks solution A on cloud provider A, or the Gardener model with the seed-and-shoot approach — is then something that is more or less expressed by picking the provider. So it should not be a knob that somebody controls; Gardener provides only this model, GKE.
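The "narrow API" idea above can be sketched as follows: the user expresses only what they can feel (the Kubernetes version, maybe a provider network configuration), and everything else is implied by the chosen provider. This is a sketch only — the field names are illustrative, not the actual Cluster API schema.

```python
# Illustrative sketch of a narrow, user-facing cluster spec.
# Field names are assumptions, not the real Cluster API types.

MINIMAL_USER_FACING_FIELDS = {"kubernetesVersion", "providerSpec"}

def narrow_cluster_spec(name, kubernetes_version, provider, provider_network=None):
    """Build a minimal cluster spec; everything else is left to provider defaults."""
    spec = {
        "kubernetesVersion": kubernetes_version,
        "providerSpec": {"name": provider},
    }
    if provider_network is not None:
        # The only provider detail the user has to express.
        spec["providerSpec"]["network"] = provider_network
    return {"kind": "Cluster", "metadata": {"name": name}, "spec": spec}

cluster = narrow_cluster_spec("demo", "1.13.1", "aws", {"vpcCIDR": "10.0.0.0/16"})
```

Whether the control plane ends up self-hosted or hosted in a seed cluster would then be a property of the provider implementation, not a field in this spec.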
H
Yes, all these other solutions have their model and they will not move. So is it really realistic that there will be any implementation where you can actually change that — one that's a self-contained cluster, and at the same time also provides one with the control plane hosted somewhere else? I'm kind of doubtful that somebody will really implement the entire spectrum. So, ideally.
B
What we should be building in cluster API should be building blocks that can then be consumed by others. And how do we do that without — you know, if we get too opinionated on some of these aspects, the usability suffers.
B
H
B
B
B
A
A
A
No, I think it's more like: do you create that, and on top of that use machines — machines that use the Gardener technique to run their control plane on top — versus the alternative, which is a cluster controller? It creates the network, and then through indirection the control plane is created; it does that by creating machines.
F
One other piece of information that probably is going into the cluster object is the cloud configuration for this specific provider. Today, for instance — I know for the vSphere provider, for example — we piece the cloud provider information together from the master machine object, which again there could be caveats with, because the moment you think about multi-master, now you have multiple. You have to carve out that information, say, if you were to distribute your master machines into different data stores.
F
They're homogeneous in form. What I'm saying is, just from the placement perspective — for example, on the vSphere infrastructure, which data store — you don't want to have all three master VMs landing on the same data store. It's just to protect your masters from one single storage going down. That's the only difference; I mean, they're homogeneous in all other aspects.
F
Right, I mean, so one option in vSphere, for example, is that we could use a storage cluster — today we don't use storage clusters at the moment in the vSphere provider implementation; it really comes down to a selection of a data store — but that's something that we can definitely improve in our implementation. What I'm saying is, with the current state, someone may or may not use a storage cluster.
F
Therefore they may have to go and select a particular data store of their choice for a master node. For example, in the case of multi-master, you might want to select a different data store for each, just for that reason — so your infrastructure is not relying on one storage doing the very same thing. So that's the point.
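The placement concern above can be sketched as a simple round-robin spread: assign the master machines across the available data stores so a single data store failure cannot take out every control plane VM. This is a hypothetical helper, not vSphere provider code; the notion of a per-machine "datastore" setting is an assumed provider-specific field.

```python
# Hypothetical placement helper: spread master machines across data stores
# round-robin so one storage failure does not hit all control plane VMs.

def spread_masters(master_names, datastores):
    """Return a mapping of master machine name -> data store."""
    if not datastores:
        raise ValueError("need at least one data store")
    return {
        name: datastores[i % len(datastores)]
        for i, name in enumerate(master_names)
    }

placement = spread_masters(
    ["master-0", "master-1", "master-2"],
    ["datastore-a", "datastore-b"],
)
```

With two data stores and three masters, no single data store holds all three VMs.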
F
Yeah, that's actually what I plan to bring to a later session. Just to the point of what kind of information gets into a cluster: this is yet another piece of information that will probably need to be in the cluster object across the board, if the cluster object were to be used to actually lay out your control plane — because then, regardless of the provider, that control plane will need to be configured with the appropriate cloud provider configuration.
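To illustrate the point above: if the cloud provider configuration lives on the cluster object rather than being pieced together from one master machine object, every control plane instance can be rendered from the same source of truth. The keys below loosely mimic a vSphere-style cloud config but are illustrative assumptions, not the real provider schema.

```python
# Sketch: render a cloud provider config from the cluster object, so all
# control plane instances (multi-master included) share one source of truth.
# Field names are illustrative, not the actual providerSpec layout.

def render_cloud_conf(cluster):
    cfg = cluster["spec"]["providerSpec"]["cloudProviderConfig"]
    return "\n".join([
        "[Global]",
        f"server = {cfg['server']}",
        f"datacenter = {cfg['datacenter']}",
    ])

cluster = {
    "spec": {
        "providerSpec": {
            "cloudProviderConfig": {
                "server": "vcenter.example.com",  # hypothetical endpoint
                "datacenter": "dc-1",
            }
        }
    }
}
conf = render_cloud_conf(cluster)
```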
F
A
F
F
We can already do that for worker nodes, for example — that's precisely the point. When we create our worker nodes, we spread them out, label them properly, and let Kubernetes handle the availability aspect of the services from the worker perspective. But what I'm saying is, since this is not the worker now — your control plane becomes the workload in this model — and since we are not there yet in cluster API, we have some differences there at the moment.
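If the control plane becomes a workload on another cluster, the same primitives used for workers apply: label the pods and let the scheduler spread the replicas with anti-affinity. The fragment below uses standard Kubernetes pod-spec fields; the label values are illustrative.

```python
# Sketch: spreading control plane pods like any other workload, using
# standard Kubernetes pod anti-affinity. Label values are illustrative.

API_SERVER_LABELS = {"app": "kube-apiserver", "cluster": "shoot-1"}

anti_affinity = {
    "podAntiAffinity": {
        "requiredDuringSchedulingIgnoredDuringExecution": [
            {
                # Never co-locate two API server replicas on the same node.
                "labelSelector": {"matchLabels": API_SERVER_LABELS},
                "topologyKey": "kubernetes.io/hostname",
            }
        ]
    }
}
```

This fragment would sit under `spec.affinity` of the control plane pod template in the hosting cluster.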
B
B
B
I mean, currently it's an open question. The way we're looking at it right now for the AWS provider: currently we've been telling people who want to demo, yes, go ahead and run clusterctl, you get a cluster, and you can play around with it. As we start to look at how we implement a multi-instance control plane, we can no longer use the clusterctl workflow, so we're looking at potentially first documenting using the phases in clusterctl.
B
B
B
A
B
B
I think we've covered a lot of ground, and I think there's a lot more shared context now than there was previously. Hopefully we've recorded that shared context, so that people who haven't been able to attend can go back and see it. I do think it makes sense at this point to go further, and I would love to see a more formal proposal.
B
For how we could potentially adopt the Gardener model — and we could potentially work through some other approaches as well. But as far as adding HA in a more managed scenario, it probably is a good idea to go ahead and continue proving it out.
H
Basically, the waste is that you have this garden API, the gardener controller manager, and then this additional control plane for a seed cluster, which will be pretty minimal. So, to say it in more concrete words: you would have one cluster where Gardener is deployed, and it would host the control plane of another cluster — that is your target cluster. So this is actually not even waste; it depends on whether you have this setup or not.
H
But let's assume you already have this root cluster and you want to spin up only one single cluster. Then the control plane of that cluster needs to be placed in this first cluster. So that one is no waste, but you of course have this first cluster, where you need to have Gardener running — this is basically your fixed cost. So you need to pay for that: if you have just one cluster, you basically have one additional cluster as a fixed cost.
H
This root cluster — what we call the garden cluster, where Gardener runs — that's your waste. If you have five clusters, it's this one cluster divided by five. If you have 100 clusters, you might have maybe two seed clusters or something like that; then it's these 100 divided by these two. So it really depends on how you scale out. Our largest seed clusters run, I don't know, 150 to 200 clusters; they could run more, but we basically said no — we cannot scale up to a thousand nodes on this.
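The amortization arithmetic above can be written out: the garden and seed cluster overhead is fixed, so the overhead per hosted cluster falls as the fleet grows. The numbers below are the illustrative ones from the discussion.

```python
# Sketch of the fixed-cost amortization described above.

def overhead_per_cluster(garden_clusters, seed_clusters, hosted_clusters):
    """Fraction of 'extra' clusters paid for per hosted cluster."""
    if hosted_clusters == 0:
        raise ValueError("no hosted clusters to amortize over")
    return (garden_clusters + seed_clusters) / hosted_clusters

# One garden cluster hosting a single cluster: one whole extra cluster of overhead.
single = overhead_per_cluster(1, 0, 1)

# One garden plus two seeds serving 100 clusters: the overhead is amortized.
fleet = overhead_per_cluster(1, 2, 100)
```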
H
We simply have not implemented it yet; it's not there. So we basically scale to 100 nodes, and then we say we open a new seed cluster. That's basically how it works. So you have, in place, one control plane that you need to host for the seed cluster. The seed cluster — or every shoot cluster, for that matter — basically contains only the machines, so there is no waste there. Of course you have fragmentation; I mean, the machines are not fully utilized until you fully utilize them.
H
So if you have a large node and you place only one control plane onto it, of course the waste is larger, because you have less utilized resources. But our driving factor — since we also came from BOSH, from Cloud Foundry — was to increase utilization; there was a lot of waste in basically using virtual machines only. Of course you can tweak that underneath with over-commitment and whatnot, but we wanted to stick with Kubernetes, and we said we are deploying these as pods.
H
I mean, there are means included in this tooling to basically scale automatically — out, up and down — and we wanted to utilize that. So if it's just a small seed cluster, it will be a very small control plane as well, because it has to manage only one other control plane. There is not much waste in it, but there is of course much more waste if you want to have just one cluster.
H
The pro side, which was our motivating factor, is providing the homogeneous clusters. Another pro side of that model is what we are doing with extensibility — I'm not quite sure if you have talked about that already with the cluster API. We are also doing extensibility not only to make infrastructure support pluggable, but also to make other things pluggable, like the operating system. In some cases — I mean, you're all working for larger organizations.
H
Maybe you have your own audit logging solutions — we have this effort as well — so we can now plug in any solution. Audit logging is something that needs to be handled very specially, so you need to configure the control plane; this means it has to be extensible. Right now it's basically some add-on that we deploy from the side: it watches the seeds, watches the shoots, and then it basically has a mutating webhook to get this stuff in.
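The add-on mechanism described here can be sketched with a `MutatingWebhookConfiguration` (a standard Kubernetes kind): the side-deployed component registers a webhook that intercepts pod creation so it can inject, for example, audit-logging configuration. The names, namespace, and path below are hypothetical, not Gardener's actual resources.

```python
# Sketch of a mutating admission webhook registration, expressed as the
# manifest dict. Metadata names, service names, and path are hypothetical.

webhook_config = {
    "apiVersion": "admissionregistration.k8s.io/v1beta1",
    "kind": "MutatingWebhookConfiguration",
    "metadata": {"name": "audit-log-injector"},  # hypothetical add-on name
    "webhooks": [
        {
            "name": "inject.audit.example.com",
            "clientConfig": {
                "service": {
                    "name": "audit-injector",
                    "namespace": "extension-system",
                    "path": "/mutate",
                }
            },
            # Intercept pod creation so the add-on can mutate the spec.
            "rules": [
                {
                    "apiGroups": [""],
                    "apiVersions": ["v1"],
                    "operations": ["CREATE"],
                    "resources": ["pods"],
                }
            ],
        }
    ],
}
```

The point of the pattern is that the core controller never needs to know about the injected component; the webhook does the mutation from the side.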
H
Of course, we want Gardener to manage it without knowing that it's managing it. So basically the shoot spec will also become something that can be extended. If you want to configure metering or audit logging — these other things are not the core business of Gardener; Gardener's core business is the control plane — but if you want to have these things and want to transport them into the seed cluster or the shoot clusters, the shoot spec is a means to do that. So this is also part of the extensibility. We don't want too many additional components.
H
We want to basically facilitate that, but not know that we are actually injecting these components. Gardener should not be aware, because then this makes the API complicated — today we have this API, tomorrow it would need to change and become too complicated. We want the flexibility for the owner of this infrastructure to decide how to enrich the clusters that are provisioned. Whether you want to touch it at all, or — like we did in the beginning — basically say, that's not our business, somebody else has to watch.
H
What's going on there and inject these things from the side. As I said, we don't want to put that into the contract, but we want Gardener to basically facilitate it, and the cluster resource that we have needs to allow it to be put in there — so you can describe: oh, I want audit logging to be configured in this way.
H
H
Exactly — it's completely pointless. So it is true, most of the clusters that people create in the infrastructure for development use are small. Then again, if you are a large corporation and you're creating 2,000 of them, it pays off. Not every cluster needs to have three virtual machines; that's a kind of waste.
A
A
A
C
I mean, currently you cannot quite do this thing, because your API server needs to be able to talk to — I mean, yeah, it's able to talk to both, but the kubelets and all those components need to reach the API server. So hosting it on your own machine — theoretically you can. I mean, what I'm currently doing for development:
C
I spin up minikube — I want the control plane for whatever I'm going to test — and then I simply connect one of our existing clusters and use it as a seed cluster, and that's it. So it's a trade-off; there's also always a tie-in, so you can't create everything just on your own work machine.