From YouTube: Cloud Foundry for Kubernetes SIG [Jan 2021]
A: I think the, quote-unquote, only topic that we came up with was to go through the "Thoughts on CF for K8s use cases" document that I sent via the cf-dev mailing list, I guess maybe mid-December or so, and I received a couple of comments from people reading it back then. But before we dive into that: are there any additional topics that people want to cover in this call, or is this pretty much the only thing that we have? Which is fine, I guess.
A: Doesn't seem to be the case. So I guess, at least at SAP, the usual review starts with "who has read the document?", and then usually two-thirds of the crowd are not too outspoken about it. So I guess the second question that then usually comes is how we should go about the document, assuming that everybody has read it.
A: Should we just discuss, or should we rather step through the document again and try to recap things? I guess usually we end up doing the latter, so base it on your preference. I can definitely walk through the document, and then we can have a discussion as we go.
A: A Cloud Foundry foundation is a single instance of Cloud Foundry, and then I think the other terms maybe need a little bit more definition, so to speak. I came up with the term "Cloud Foundry operations team": essentially the people that are operating one or many CF foundations from a platform perspective, so they do things like provision new ones, make sure that these are updated, et cetera, to distinguish these folks from the people using the actual platform. Then I have the term "CF control plane": essentially all the core Cloud Foundry components, like the Cloud Controller, UAA, etc. And I think at times I'm also using "CF applications" as kind of the opposite of that: essentially payload that gets cf pushed to the platform and is then run. That distinction will become important later on, because today, if you deploy cf-for-k8s, you essentially get the control plane as well as the Cloud Foundry apps deployed in one cluster, and part of the document talks about ideas for how to actually separate these two entities from each other; and then, last but not least, about separating those entities from each other.
A: Ultimately, then, I basically started by giving a little bit of SAP background: so, we're definitely running CF on VMs at large scale today, but, obviously, for the future we envision that users who use CF on VMs today will move over to cf-for-k8s, essentially, over time, as the evolution of Cloud Foundry natively on Kubernetes. And for us as a platform provider it probably means moving away from managing a few (and maybe "a few" is more than people actually think) pretty big CF-on-VMs deployments, over to running many, many more cf-for-k8s deployments; maybe something like one per stakeholder team or one per org unit, to be decided.
A: I guess, and then obviously: while for a handful of CF-on-VMs deployments certain semi-automated steps are fine, these are probably no longer fine if we are talking about hundreds or even thousands of cf-for-k8s deployments; then you essentially cannot afford doing any manual operations activity on these systems. Plus, obviously, there's also that comment, and Guillaume put a question there.
A: Even if we wanted to basically take each of our CF-on-VMs deployments and transfer it one-to-one over to the Kubernetes world, there's probably a realization that a single Kubernetes cluster could not scale to the extent that we would actually need to run our workloads. And I would say, even worse: if you then put a service mesh on that Kubernetes cluster, then probably the service mesh will also start being a limiting factor to the actual growth of that cluster.
A: Probably later on; plus, then, there's also the actual payload traffic that gets into the cluster and then needs to be dispatched to the actual CF applications. So, yeah, that essentially means we've come to the realization that for our future CF operations model we need something that's very close to how we cf create-service things like a database today, where automation kicks in, provisioning database instances or creating a shard on a database instance, or however you do it. And we need something similar for Cloud Foundry, where, via some tooling and automation, we actually create CF foundations as a service.
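(For illustration: one way such "foundations as a service" tooling could look, as a minimal sketch, is a declarative request object plus an operator acting on it, mirroring how cf create-service hands provisioning to a broker. Everything here is invented for the sake of the example; the resource group, kind, and fields below are not an existing cf-for-k8s API.)

```python
# Hypothetical sketch: request a new CF foundation declaratively by
# creating a custom resource; a (likewise hypothetical) operator would
# watch these objects and provision the actual foundation.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() inside a cluster

foundation = {
    "apiVersion": "example.org/v1alpha1",  # invented group/version
    "kind": "CFFoundation",                # invented kind
    "metadata": {"name": "team-a"},
    "spec": {
        "controlPlaneCluster": "cp-cluster-1",  # where the control plane runs
        "workloadClusters": ["wl-cluster-1"],   # where CF apps run
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="example.org",
    version="v1alpha1",
    namespace="cf-foundations",
    plural="cffoundations",
    body=foundation,
)
```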
A: And then, to clarify that: obviously we need some infrastructure that kind of manages all these CF foundations, and maybe that's an additional topic that we could discuss in this round at a certain point in time. This document is less about how we manage many of those CF foundations, and it's also not about how concretely we are deploying one of these foundations; it more talks about it from a structural perspective.
A: What changes would actually benefit us in terms of being able to manage such a fleet of cf-for-k8s deployments? So, not looking at single systems that are always single-cluster and contain, as I said, the CF control plane plus the Cloud Foundry applications, but probably something more sophisticated that might fit some of our use cases better than this single-cluster, single-cf-for-k8s deployment. And then, yeah.
A: We are probably one of the extreme cases, Simon, and IBM is probably another of the extreme cases; but even for moderately sized CF-on-VMs deployments today, people will probably come to the realization that certain flexibility is required, and that it's maybe not always optimal to have every cf-for-k8s deployment associated with its own single Kubernetes cluster.
A: This section actually came about because I introduced it based on a question from Daniel Jones, who replied to my mail on the cf-dev mailing list pointing to that document. His question was basically: how "bad", quote-unquote, is it now for operators who come from a BOSH-based world, and who are used to the BOSH ways of managing systems, to move over to something completely different? And basically this bullet-point list summarizes my reply.
A: So there's definitely that distinction between operators of a Cloud Foundry system and users of a Cloud Foundry system, at least for us, providing a managed service. And even within those two groups there are subgroups. There are the people that are happy with Cloud Foundry as we have it today, and for them things like "just let me continue doing the workflows that I run today" are most important, I think.
A: We are also talking about compatibility questions between CF on VMs and cf-for-k8s, but I think these people probably couldn't care less about whether the CF instance that they are using runs on VMs versus on Kubernetes clusters. Ultimately, then, there are the people that probably tried Cloud Foundry before but found that it cannot actually run their scenarios, and some of those people have what I would call mixed workloads.
A: So probably they have parts that are stateless applications, microservice architectures, 12-factor applications, all of that stuff, and probably they would be able to run that part of their overall scenario on Cloud Foundry. But then they might have additional things: they might have a need for an additional stateful service, and maybe the provider of their choice doesn't offer that service in their service marketplace.
A: How much effort is it to run a separate Cloud Foundry instance, maybe even right next to it? Or how much additional overhead is it to deal with kubectl on the one hand and cf push on the other hand, and to try to mix and match those models? These people at times come to the realization that they then ultimately run their stateless workloads on the Kubernetes cluster where they run their stateful workloads.
A: I would say, essentially, if people come to that realization, then, at least today, with only a VM-based deployment, we have, quote-unquote, lost these people for running their stateless workloads on Cloud Foundry, because there's no way to actually win them over, so to speak, for the stateless parts. And these people... I think that's even a difficult discussion for us.
A: So that's probably one group. Then there are the people that actually come from a Kubernetes background, and at least today we oftentimes hear that running Cloud Foundry on VMs looks too heavyweight for them: too many resource requirements.
A: The need to look into BOSH as lifecycle management tooling, which is probably not too well known outside the Cloud Foundry community, etc. So for them it would actually be good to have a Cloud Foundry on Kubernetes that feels as Kubernetes-native as possible and doesn't look like something else, something that's not coming from the Kubernetes ecosystem. And then the last one is actually something that Guillaume added; I'm not too familiar with the use case there, but my understanding is that he has a use case where he's pretty much running, I would say, the service marketplace of Cloud Foundry for people, and then provisioning services via that marketplace for multiple Cloud Foundry deployments.
A: I don't think I described the use case he has very well there. Talking to him a bit more back then, it kind of sounded like a scenario that we also have, which is: now that we have all those Cloud Foundry on Kubernetes deployments, how do we actually make sure that the service marketplace for each of those deployments gets filled? And usually you want to fill that with very similar things: so probably all of these deployments want to be able to connect to some relational database.
A: Probably all of these deployments want to have some message queue, etc. So that's definitely a scenario that we also have, where maybe those service marketplaces don't exist in isolation (similar to how they don't exist in isolation today on a big CF-on-VMs deployment), but you want to have some means to essentially manage those marketplaces in a more holistic way.
A: So those are kind of the scenarios, or personas, if you will. The rest of the document then actually talks about a couple of ideas, some of them independent from each other, but some of them actually connected. And the first step that I've noted down is to actually separate the Cloud Foundry control plane from the Cloud Foundry applications: so, essentially, establishing what I previously, in the glossary, called a workload cluster, and then having the Cloud Foundry control plane running on a different Kubernetes cluster.
A: The reasoning for that, at least for us, is to look into those scenarios that I mentioned before: people that anyway want to run something on a Kubernetes cluster and want to have native access to that Kubernetes cluster. And if we do that, then obviously having a Cloud Foundry control plane running there opens that Cloud Foundry control plane up to all kinds of manipulation by these users of the Kubernetes cluster.
A: So the assumption there is that they have admin rights; they could, if they want, delete all the Cloud Foundry control plane components, like delete the entire Kubernetes namespace, and then probably the Cloud Foundry control plane components would be gone for good. And we, as a provider of a managed service, obviously want to provide some kind of SLAs for the Cloud Foundry control plane.
A: So we ideally don't want to have that control plane visible, at least from a Kubernetes deployment perspective, to the people that cf push applications. Rather, what we want is to hand out a Cloud Foundry API endpoint to these people; then, ultimately, these people want to see whatever they cf push showing up as additional Kubernetes workloads on their cluster, but we want to essentially prevent them from manipulating the Cloud Foundry control plane itself.
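(For illustration: a minimal sketch of the access split being described, assuming the pushed workloads land in a dedicated namespace of the tenant's cluster while the control plane runs elsewhere. The namespace, role, and group names are all invented; the point is only that the tenant's credentials are bound to the workload namespace and to nothing the control plane depends on.)

```python
# Sketch: grant a tenant visibility into the namespace where their
# cf-pushed workloads land, without granting anything cluster-wide.
from kubernetes import client, config

config.load_kube_config()
rbac = client.RbacAuthorizationV1Api()

# Role scoped to the (hypothetical) namespace where pushed apps land.
role = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "Role",
    "metadata": {"name": "workload-viewer", "namespace": "cf-workloads"},
    "rules": [{
        "apiGroups": ["", "apps"],
        "resources": ["pods", "deployments", "statefulsets"],
        "verbs": ["get", "list", "watch"],
    }],
}

# Bind the tenant's group to that role -- and to nothing else, so a
# control plane running in another cluster stays out of reach entirely.
binding = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "RoleBinding",
    "metadata": {"name": "team-a-workload-viewer", "namespace": "cf-workloads"},
    "roleRef": {"apiGroup": "rbac.authorization.k8s.io",
                "kind": "Role", "name": "workload-viewer"},
    "subjects": [{"apiGroup": "rbac.authorization.k8s.io",
                  "kind": "Group", "name": "team-a"}],
}

rbac.create_namespaced_role(namespace="cf-workloads", body=role)
rbac.create_namespaced_role_binding(namespace="cf-workloads", body=binding)
```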
A: A single Kubernetes cluster, right; yeah, that's coming in the later steps of the document, actually, but you're right, that's something that comes on top there. So then, as I said: in an ideal world we kind of don't want to have any managed assets in that payload cluster; instead, we just want to run the Cloud Foundry applications there. But probably, in reality, given the way ingress works in Kubernetes and the way service meshes are deployed to Kubernetes, we still might want to have something like an ingress controller, an Istio, etc. running in that cluster where the Cloud Foundry applications are also running.
A: As I said, in an ideal world we might want to avoid that, but I think in reality today it's essentially unavoidable to have some of the ingress logic running in the cluster that runs the Cloud Foundry applications. So that is one; then the other one is related to scale, and probably also isolation, amongst other things. Once we have separated the Cloud Foundry control plane cluster from where the Cloud Foundry applications are running, we envision having something like multiple Kubernetes clusters running fleets of Cloud Foundry applications. So, similar to how we run very separated fleets of Cloud Foundry applications today using isolation segments, and therefore different sets of Diego cells, we'd have something very similar on the Kubernetes side of the house: still being able to use one Cloud Foundry control plane to ultimately operate those separate clusters, and, probably similar to how isolation segments do it, distinguishing where Cloud Foundry applications end up via Cloud Foundry orgs and spaces.
A: So that's one thing that we envision. And then, once we are in the scenario where the Cloud Foundry control plane is running on a separate Kubernetes cluster, you could again ask the question: is there a one-to-one relationship between the Cloud Foundry control plane and the Kubernetes cluster?
A: Or can we, for example to save resources, put multiple control planes, working with different payload clusters, onto the same Kubernetes cluster? So essentially running multiple control planes next to each other on the same K8s cluster, which probably, again from an operations perspective, makes certain things easier. I mean, that definitely will have its limits.
A: So I don't envision us running all the Cloud Foundry control planes of those hundreds and thousands of cf-for-k8s systems on the very same single Kubernetes cluster, but at least there could be some resource sharing there. The next thing is actually having an ability (and that's mostly unrelated to the previous chapters) to actually hibernate the Cloud Foundry control plane.
A: The thought there is that, at least for us, we have clusters where there's quite a lot of Cloud Foundry REST API traffic, because people are developing their applications there; but then there are also other clusters where maybe there's only limited API traffic towards the Cloud Foundry control plane: essentially productive clusters, where updates are happening in specific time windows and probably not as frequently as in the development scenario.
A: So, in those cases, to further save some resources, one could think about the ability to essentially hibernate the Cloud Foundry control plane. And I think I talked to the Eirini team during one of the still-in-person Cloud Foundry Summits, basically asking the question: what of the Eirini infrastructure actually needs to run to make sure that a Cloud Foundry application keeps running, if it's deployed on a Kubernetes cluster? And at least back then the answer was: none of it, because essentially Kubernetes itself makes sure to keep the Cloud Foundry application itself running.
A: So if that's still the case, then one could indeed think about hibernating, or shutting down, the Cloud Foundry control plane and only, quote-unquote, waking it up when it's actually required: so whenever you do a cf push or a cf scale, or things like that. And then, obviously, in times of serverless, you could think about an on-demand wake-up of the control plane.
A: So, essentially: only if an API request comes in, wake up the whole thing (which then obviously needs to happen in a couple of seconds rather than minutes), process that request, and then shut the control plane down again; so kind of an optimization of some sort.
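(For illustration: a minimal sketch of the hibernation idea, assuming the control-plane components are ordinary Kubernetes Deployments in a dedicated namespace; the namespace name is invented. An on-demand variant would put a small activator in front of the CF API endpoint that calls wake() on the first incoming request.)

```python
# Sketch: "hibernate" a CF control plane by scaling its Deployments to
# zero replicas, and "wake" it by restoring them. The CF apps themselves
# keep running, since Kubernetes manages them independently.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

CP_NAMESPACE = "cf-system"  # hypothetical control-plane namespace


def set_replicas(replicas: int) -> None:
    for dep in apps.list_namespaced_deployment(CP_NAMESPACE).items:
        apps.patch_namespaced_deployment_scale(
            name=dep.metadata.name,
            namespace=CP_NAMESPACE,
            body={"spec": {"replicas": replicas}},
        )


def hibernate() -> None:
    set_replicas(0)


def wake() -> None:
    # A real version would restore the original replica counts and would
    # need to complete in seconds to serve an on-demand wake-up.
    set_replicas(1)
```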
A: And then, last but not least, there's also, from a management perspective, at least today, this stateful thing sitting kind of next to Kubernetes (itself a stateful thing using etcd): when you deploy cf-for-k8s, you still need your relational database for the Cloud Controller and UAA persistency.
A: So, all the usual suspects; and I think we've talked about the topic before in earlier incarnations of the SIG call: trying to store Cloud Foundry entities like orgs and spaces and apps as custom resources on Kubernetes, ultimately in etcd, might relieve us from essentially dealing with two persistencies, so ultimately getting rid of the relational database of Cloud Foundry itself.
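(For illustration: a minimal sketch of storing a CF entity as a Kubernetes custom resource; the group, kind, and fields are invented and are not an existing cf-for-k8s API. The point is that reads and writes then go through the Kubernetes API server, which persists to etcd, so the separate relational database disappears.)

```python
# Sketch: persist a CF org as a custom resource instead of a row in a
# relational database; the API server stores it in etcd.
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

org = {
    "apiVersion": "cf.example.org/v1alpha1",  # invented group/version
    "kind": "CFOrg",                          # invented kind
    "metadata": {"name": "my-org"},
    "spec": {"suspended": False},
}

api.create_cluster_custom_object(
    group="cf.example.org",
    version="v1alpha1",
    plural="cforgs",
    body=org,
)

# Reading it back goes through the same API server / etcd path:
fetched = api.get_cluster_custom_object(
    group="cf.example.org", version="v1alpha1",
    plural="cforgs", name="my-org",
)
```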
A: So, yeah, as I said, some of those chapters are a bit related, some are unrelated; some you can do in isolation, others only make sense if you combine them and have a certain logical order of steps. But these were, ultimately, ideas that came to my mind. And, probably I forgot to mention this: at the very beginning of the document I note that all of that is definitely, from a timeline perspective, further away than probably many other topics that people have in mind for cf-for-k8s.
A: So that was essentially the main part of the document. Then there are two chapters, or two paragraphs, of side notes. First of all, one is about the resource consumption of the Cloud Foundry control plane, which, at least when I wrote the document, was still pretty heavyweight, even for cf-for-k8s. I understand that there are more lightweight deployments these days, especially of the cf-for-k8s control plane, where you, for example, don't deploy control plane components in a redundant way but just as singletons, which is probably fine for some scenarios, because you still have Kubernetes trying to revive certain components if they fall down.
A: So having that in a more lightweight fashion is definitely beneficial, also for this kind of, quote-unquote, mass market of hosting many, many cf-for-k8s instances. And then, last but not least, and I mentioned this during the intro as well: compatibility is definitely another topic that needs focus, so that we can actually see that move of people from using CF on VMs today over to cf-for-k8s.
A: So, not the real core of this document, but important ingredients for an overall story of moving people over from where they probably are today to the Kubernetes-based Cloud Foundry. And with that, I believe, that was kind of a summary of the document; maybe a little bit longer.
G: First and foremost, thank you for putting it together, because I think that has been a very valuable exercise already. I mean, all the thoughts that I have around this I think I already either added as comments or we spoke about them, so this is almost 100 percent aligned with, you know, IBM's thinking on these particular points, and from that perspective there is nothing more to add than what already has been said.
F: Simon, it's Troy. Is the multi-cluster approach for workloads similar to the work that went on in KubeCF with Diego? Is that the same use case for you guys?
G: Pretty much the same, yes. I mean, the point is, and I think Bernd said this in the beginning, really: although I know every cloud vendor basically says "hey, you can run an infinite-size Kubernetes cluster", we all know they're lying, right? So there is some kind of an upper limit, and whether that upper limit is 200 machines or 300 machines or 400 machines doesn't really matter at the end of the day, right?
G: So if you have to run a deployment that, from a capacity requirement, just requires more than those 400 machines, you just have to split it up; and it doesn't really matter whether you're doing this through Diego in the KubeCF world or whether you have to do it through Eirini in the cf-for-k8s world, or any...
A: That is essentially that second paragraph here. When I wrote it down, I received the feedback that there might be multiple places where you could actually do that. One is definitely within Eirini, kind of talking to the Kubernetes API directly; the other one could be in CAPI, and that's like...
G: Two alternative options. But when you look at it technically, Troy, it's a chicken-and-egg problem, right? Because if you...
G: In CAPI; like, just forget about Diego for the moment, just think about Eirini. If you do it in CAPI, you would then have to make sure that CAPI can talk to multiple Eirinis, because an Eirini would then be cluster-scoped, so to speak.
G: And that might be an implementation option, if we decide that that's the right way to go. But on the other hand, if you do it in Eirini, then the Eirini would not be cluster-scoped; it would be control-plane-scoped, so to speak, but it needs to know how to talk to multiple backend clusters in that case. So it's two options, and, you know, one has as many advantages over the other as vice versa.
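(For illustration: a minimal sketch of the second option, a control-plane-scoped scheduler that knows several backend workload clusters. The kubeconfig contexts, the namespace, and the segment-based placement rule are invented; this is not Eirini's actual code.)

```python
# Sketch: one control-plane-scoped component holding clients for several
# workload clusters and picking one per app, e.g. by isolation segment.
from kubernetes import client, config

# One kubeconfig context per backend (workload) cluster -- invented names.
BACKENDS = {
    "segment-a": config.new_client_from_config(context="wl-cluster-a"),
    "segment-b": config.new_client_from_config(context="wl-cluster-b"),
}


def schedule_app(app_name: str, image: str, segment: str) -> None:
    """Place the app's workload on the cluster backing its segment."""
    apps = client.AppsV1Api(api_client=BACKENDS[segment])
    deployment = {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": app_name, "namespace": "cf-workloads"},
        "spec": {
            "replicas": 1,
            "selector": {"matchLabels": {"app": app_name}},
            "template": {
                "metadata": {"labels": {"app": app_name}},
                "spec": {"containers": [{"name": app_name, "image": image}]},
            },
        },
    }
    apps.create_namespaced_deployment(namespace="cf-workloads", body=deployment)
```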
D: I mean, if you would use this kind of thing to have a multi-tenant CF control plane, to share resources that way, then you could give the control over where to run these things to the user, basically like with isolation segments (whoever needs isolation segments; they are assigned to organizations). And the other option would be one where this is completely transparent, where some kind of CF control plane just schedules that thing to wherever space is available.
G: If you throw isolation segments in there, the picture gets even more complicated, I agree. All right; so, I mean, I think when I answered Troy's question I wasn't thinking about isolation segments in particular; I was just thinking about one huge multi-tenant...
F: But isolation segments are one of the biggest use cases for this, right? When you have tenants that want to have all of their stuff in one Kubernetes cluster, separate from everyone.
G: So, yes, it certainly is; isolation segments are certainly another use case for having such multi-cluster support, but not the only one.
A: I mean, plus, obviously, there's the whole conversation around how well workloads are actually isolated in today's Cloud Foundry versus how well you can isolate workloads on Kubernetes, where I frequently have discussions along the lines of "you have requests and limits, et cetera, et cetera", and then I usually ask: and what about network ingress and egress? I mean...
A: ...Cloud Foundry on VMs, right; but having that possibility to actually isolate things from a network perspective is definitely something that I feel is lacking in today's CF, as well as if you look at deployments today.
E: Are you thinking about just the kind of intrinsic network isolation of those workloads, where they might all be, you know, intermixed on the same network, or are you thinking about ingress traffic?
A: Well, actually, multi-tenancy on the network, right. Like, in the ideal world you would, even within a tenant, be able to say: this CF application gets this slice of the available network. But even being able to shield potentially malicious tenants from each other is, I think, something that's usually difficult.
E: Yeah, I guess with the existing isolation segments, you know, it's a much more advanced configuration, but we've got router groups and the ability to have even different deployments of Gorouters designated for that. But I could...
E: I could definitely see this enabling more of that same kind of topology, where you'd have both workloads and appropriately configured ingress gateways for that traffic isolated to, you know, one or more Kubernetes clusters. And I guess I could definitely see one of the benefits here being that a tenant could potentially even bring their own Kubernetes cluster and register it and say: hey,
E: this can be associated with, you know, this isolation segment which the administrator in the CF control plane is creating. But then something, whether that's configuration in CAPI or Eirini, wherever the branching point makes sense, ends up saying: all right, well, you know, I'm now going to bottom out in this Kubernetes cluster, so here's the interface contract.
F: Yeah, I like the idea of mapping it to isolation segments, actually. It seems to be the...
E: Given that it's the nail that we have; it's kind of that.
F: Yeah, well, what I had initially thought was that they should be mapped to namespaces in Kubernetes.
F: That was before I started learning about some of the size limitations in Kubernetes that were going to bite us, and the porousness of... well, not really the porousness of namespaces, but, like, ingress is a good point: an ingress for a Kubernetes cluster makes sense; an ingress for a namespace makes maybe less sense.
F: So, yeah, does that sound like a good mapping, Bernd? You didn't seem keen on overloading isolation segments.
A: Because what I like about isolation segments is that you still have a certain amount of flexibility, right? You could introduce an isolation segment later on, and you could move a Cloud Foundry org from one isolation segment to another one. I'm not entirely sure what would happen to existing apps, but I believe that would only kick in on the next cf push, if I'm not mistaken; and that's actually quite some flexibility that this would enable.
F: No, I like this; but there is also the other model, like IBM Cloud's model, where each tenant gets their own Cloud Foundry. That's sort of the headspace that Stratos built the multi-endpoint interface on, where we envisaged you would be connecting to possibly numerous Cloud Foundry API endpoints, and they might be tied together with the same UAA, or they might be tied together in the back end of the UAA with the same identity provider.
D: I mean, I can imagine that isolation segments are a good way to actually allow customers to configure stuff like that, like Eric said, when they bring a cluster. But I'm not sure if we should enforce that as the only mechanism, because that would matter for scaling: like, imagine somebody pushes so many CF apps to a single org that they don't fit into a single cluster.
D: That would be a scale boundary that we hit, where the customer then has to actually introduce a second isolation segment instead of this being transparent. So I'm not sure if we should put that in a single place, so to say. We could actually have Eirini with a rather generic scheduling mechanism that you can configure to express something like affinities, saying: yeah, put this app right next to this other one; so you can control that decision, but implement it rather flexibly in Eirini.
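(For illustration: in plain Kubernetes terms, such a placement hint would end up as an affinity clause on the generated pod spec. A minimal sketch; the app label is invented, and a generic scheduler in Eirini would translate per-app rules into something like this.)

```python
# Sketch: a pod-affinity clause asking the Kubernetes scheduler to
# co-locate this app's pods with another app's pods on the same node.
pod_spec = {
    "affinity": {
        "podAffinity": {
            "requiredDuringSchedulingIgnoredDuringExecution": [{
                "labelSelector": {
                    # invented label identifying the other app's pods
                    "matchLabels": {"cloudfoundry.org/app": "other-app"},
                },
                # co-locate at node granularity
                "topologyKey": "kubernetes.io/hostname",
            }],
        },
    },
    "containers": [{"name": "app", "image": "example/app:latest"}],
}
```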
D: But maybe it's not worth it, because, I mean, the limits of Kubernetes clusters are rising, and maybe that's really a lot of apps which, in practice, never end up in the same org. So maybe it's not worth it, because it does add quite a lot of complexity if you do that generically, as a rather generic scheduler in Eirini.
E: Yeah. I think, you know, with orgs you've got a many-to-many mapping for isolation segments, and spaces, I think, are just one: each space has one assigned isolation segment. And I think it is the case that that is mutable, and if you change it, it gets updated lazily: whenever an app gets re-pushed, it goes into the new segment; although that may be a little less controlled than you might want, or, you know, potentially surprising if you are trying to coordinate with some sort of infrastructure change underneath.
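(For reference, the flow described here maps onto existing cf CLI commands; a minimal sketch, wrapped in Python only to keep the examples in one language. The org, space, segment, and app names are invented.)

```python
# Sketch: create an isolation segment, entitle an org to it, pin a
# space to it, and then restart an app so it actually moves over.
import subprocess


def cf(*args: str) -> None:
    subprocess.run(["cf", *args], check=True)


cf("create-isolation-segment", "segment-a")
cf("enable-org-isolation", "my-org", "segment-a")
cf("set-space-isolation-segment", "my-space", "segment-a")

# The reassignment takes effect lazily: existing apps move only once
# they are restarted or re-pushed.
cf("restart", "my-app")
```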
E: So, you know, maybe the biggest constraint at this point is the notion that a space is assigned to exactly one isolation segment, including the default null isolation segment; that would be the main constraint in terms of flexibility, in terms of mapping to those units of infrastructure. But, yeah, I mean, well, maybe we have a few examples of really pathological cases where somebody has 10,000 apps in a single space; but they may be hitting other scale limits in CF when they try to do common operations. I shudder to think of just what running cf apps would do at that point.
E: My impression was that some of these things seemed kind of dependent on each other: like, having the ability, at least, to separate the control plane from the workload cluster where the CF apps are running seemed like maybe a bit of a prerequisite to branching out to multiple clusters; or maybe that would probably end up being the more natural sequencing of these things.
E: I guess on that control plane persistency: well, I guess you'd already kind of mentioned that we're assuming this would just be via the Kubernetes API, and then, you know, Kubernetes would be using etcd as a persistent store. So...
E: We'd get it transitively, yeah. That might be something to spell out a little more explicitly: that we're not saying we would have, say, the Cloud Controller or any other components talking directly to etcd.
E: Thanks again for writing this, Bernd. This is really a nice set of directions to capture, so I appreciate it.
F: Yeah, thanks, Bernd. As with the other cf-for-k8s doc that we talked about: we have the Cloud Foundry CAB call tomorrow, and as of yet I don't have a presentation planned. Do you think any of these topics around cf-for-k8s would be good for discussion in the broader group, or should we wait till we've refined things a bit more?
A: Not sure; for the CAB call I'm not sure how deep people already are into cf-for-k8s, because, I would guess, it's definitely a more advanced topic, right? If you didn't have the chance to try out cf-for-k8s, then it will be hard to follow in terms of what it now means to have the control plane running elsewhere, etc. I mean, we could still point people to the document again and, like...
F: For feedback and comments; why don't I just make a quick announcement that these documents are available and looking for review and comment? We won't dive into it too much. Yeah.
A: I mean, anyway, let's see how the first quarter goes, but at least I've seen some reduction in people suggesting topics. So maybe at some point in time we could think about whether we should fold this special interest group into the CAB call, and then have more regular topics for the combined thing. Yeah.
F: I think that would be fine, because likewise I don't see a lot of people stepping up to volunteer for presentations in the CAB, and having this content in that meeting would expose it to a larger audience. Then people in the community would be thinking about these ideas, and, here's the thing, we would have a lot more use cases when we talk about them. Right now we're talking about IBM's, SAP's, and SUSE's use cases, and we would get this bigger variety of people who work with it day to day.