From YouTube: Cloud Foundry for Kubernetes SIG [May 2021]
D: So, while people are joining, I'm actually trying to recall where we ended last time in terms of going through the comments. I think people added additional feedback in between anyway. But maybe it would be good to start where we ended last time; I'm not entirely sure, but I think we talked about the CRD topic.
B: I'm trying to remember as well, looking at some of the notes from the session two weeks ago, down at the bottom of the...
A: Yes, maybe; there was a thread at the end of page two, beginning of page three, I believe. Yeah, we should prepare to highlight that. At the end of page two there was the thread about what to surface for application developers: if they are in the persona where they are actually also using the Kubernetes API, whether CF should aim to become a template engine into which the PaaS is creating objects and the user's resources, where the user has visibility into these resources and can see them, and what kind of backward-compatibility contract is provided.
D: Let's maybe resume there, looking at the time. I know that at least Jens will join a little bit later, but we should probably get started.
B: Yeah, I think a lot of the commentary on that section had happened before the call two weeks ago, and so I did want to sum up some of what we had discussed on the call and on the thread. I don't recall if we talked about this, but I think one hypothesis that we have is that there's a relatively direct correspondence between a CF space and a namespace in a particular cluster, leaving aside for the moment some of the discussions about spreading workloads in a space across multiple clusters.
B: But if there's that kind of direct correspondence, that might make defining policies via k8s RBAC that allow that kind of restriction easier than having to do something more nuanced in terms of policy around object metadata with, say, OPA and Gatekeeper. You know, if...
B: ...you had to look at some label or annotation on the resources that said: actually, this is in the CF space, so allow this set of users access to it, but not that set of users. That seems pretty brittle. But if that is a simpler expression, in terms of: okay, there's this Role or ClusterRole or something defined in the cluster that effectively corresponds to the space permissions.
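A minimal sketch of the simpler expression being described here, assuming the hypothetical one-to-one mapping of a CF space onto a namespace; all names (the label prefix, the group, which resources space developers may see) are illustrative, not an agreed design:

```python
# Sketch: map a CF space's developer role onto a namespace-scoped
# Kubernetes Role/RoleBinding. All names here are hypothetical.

def space_developer_rbac(space_name, namespace, group):
    """Build Role + RoleBinding manifests granting a CF space's
    developers read access to app resources in one namespace."""
    name = f"cf-space-{space_name}-developer"
    role = {
        "apiVersion": "rbac.authorization.k8s.io/v1",
        "kind": "Role",
        "metadata": {"name": name, "namespace": namespace},
        # Which low-level resources space developers may see is exactly
        # the policy question under discussion; pods/logs is one guess.
        "rules": [{"apiGroups": [""],
                   "resources": ["pods", "pods/log", "services"],
                   "verbs": ["get", "list", "watch"]}],
    }
    binding = {
        "apiVersion": "rbac.authorization.k8s.io/v1",
        "kind": "RoleBinding",
        "metadata": {"name": name, "namespace": namespace},
        "subjects": [{"kind": "Group", "name": group,
                      "apiGroup": "rbac.authorization.k8s.io"}],
        "roleRef": {"kind": "Role", "name": name,
                    "apiGroup": "rbac.authorization.k8s.io"},
    }
    return role, binding
```

Because the grant is scoped by namespace rather than by per-object labels, no OPA/Gatekeeper policy over object metadata is needed for this case.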
B: Right, yeah. Maybe moving into some of the higher-level concerns that we've had: I think that what we've built with CF has been this really fantastic and extremely reliable system at providing an end-to-end hosting environment for application workloads and the developer workflows around them. But time and time again we see these expressions that, you know...
B
It's
all
or
nothing,
there's
no
way
to
pick
and
choose
pieces
of
that
functionality
or
to
make
that
a
more
permeable
barrier
when
it's
appropriate,
even
if
that's
not
appropriate
for
most
users-
and
you
know,
I
think
the
the
details
of
that
are
the
thing
that
we
do
need
to
work
out.
But
you
know
hearing
examples
of
like.
B
Oh,
I
you
know
sure
if
it
back
when
it
was
cf
or
vms,
there's
a
you
know,
an
easier
trade-off
to
understand
and
to
manage,
but
now
that
we
have
these
more
intermediate
representations
of
application
workloads
as
kubernetes
resources,
I
think
we've
all
been
feeling
a
lot
more
pressure
to
be
able
to
say
like
yeah.
You
know
I'd
love
a
cf
expression
of
this
workload
in
terms
of
kubernetes
constructs,
but
you
know
I
might
need
to
break
outside
of
that
box
a
little
bit
you
know.
B
If
I
need
to
to
detach
that
thing
and
manage
it
outside
of
cf
so
again
like
these
are
things
that
if
they
do
violate
that
developer
encapsulation,
but
if
the
alternative
is
to
say
like
well,
you
know
again
it's
all
or
nothing
for
cf.
Then
I
think
more
and
more,
we
find
ourselves
on
the
losing
end
of
that
battle
when
we
think
it
actually
is
the
right
choice
for
a
lot
of
users.
B
B
How
do
we
make
it
more
flexible
for
platform
operators
which
I
think
includes
a
lot
of
ourselves
to
allow
more
nuanced
access
policy
around
these
resources
that
allows
us
to
save?
You
know
most
of
the
time
yeah
it
is.
B
It
is
locked
down
to
the
point
where
developers
are
going
to
only
see
the
high
level
abstractions
that
they
need
to
to
do
their
jobs
efficiently
and
the
rest
of
the
system
can
get
out
of
the
way
and
enable
them,
but
when
they
do
need
to
to
break
through
to
those
lower
levels
of
abstraction
or
to
understand
something,
that's
going
on
or
even
to
interact
with
other
workloads
in
a
kubernetes
environment,
because
that's
not
going
away.
A: Ben, would you mind scrolling up just a little bit, with respect to the proposals I've made in terms of those abstractions? That can be, yeah, again, a bit more... I think.
A: Yeah, a little down here; there was a one-two-three numbered list, so that we can reason about them.
A: Yes: service discovery; Kubernetes labels and annotations to discover ingress network endpoints.
A: To look up third-party workloads, there would still be a service broker, but it could use labels and annotations so that an app in Cloud Foundry can look up existing services in the same cluster; so, discovering in both directions. I was thinking that maybe discovery would be one specific element that could have some value for developers, to get more details and to integrate better with Kubernetes.
D: Yeah, so for the service broker, I've seen a couple of implementations that actually already do that. I'm not sure, but having the service broker API in place kind of enables you to do these types of lookups behind the scenes somehow, right? Whereas I would say your example 1.1 here definitely is something where...
D
If
we
say
like,
we
have
specific
labels
and
annotations
on
the
kubernetes
side,
then
it's
almost
as
if
we
would
define
a
certain
api
that
kind
of
is
guaranteed
to
to
exist
and
continue
to
exist.
Even
if
updates
are.
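The guaranteed label-and-annotation contract being discussed might look something like the following sketch; the key names and prefix are invented for illustration, the point being only that they would be documented and kept stable across CF releases:

```python
# Sketch: a hypothetical, stable label contract stamped on every
# Kubernetes object that backs a CF workload. The key names are
# made up; what matters is that they form a guaranteed API.

CF_LABEL_PREFIX = "cloudfoundry.org"  # hypothetical prefix

def workload_labels(org, space, app_guid, process_type):
    """Labels CF would stamp on pods/deployments backing an app."""
    return {
        f"{CF_LABEL_PREFIX}/org": org,
        f"{CF_LABEL_PREFIX}/space": space,
        f"{CF_LABEL_PREFIX}/app-guid": app_guid,
        f"{CF_LABEL_PREFIX}/process-type": process_type,
    }

def selector_for_app(app_guid):
    """What a third-party controller would use to find one CF app's
    pods, without knowing anything about CF-internal naming."""
    return {f"{CF_LABEL_PREFIX}/app-guid": app_guid}
```

Any Kubernetes-side consumer (a NetworkPolicy, a monitoring agent, a Service) could then select CF workloads through these keys alone.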
B: Yeah, I think the idea of having a more regular and potentially stable set of identifiers for the workload as represented on the Kubernetes infrastructure, so that it could better interoperate with the selector-based philosophy within the cluster, definitely makes a lot of sense to me in terms of enabling some of the interaction patterns that I think we'd want to see in Kubernetes. Maybe it's good even to tie that back to some specifics I could envision.
B
Maybe
that's
even
number
two
in
your
list
being
able
to
say
like
what,
if
what,
if
you
wanted
to
just
run
the
workload
on
the
cluster
using
the
cf
app
abstractions,
but
you
wanted
to
use
your
existing
ingress
controller
to
route
traffic
to
it
like
you
didn't
want
to
deal
with,
whatever
the
cf
system
was
going
to
bundle
in
and
cf
routing,
what?
B
What
kind
of
metadata
would
we
need
to
ensure
that
cf
could
provide
on
those
workload
units
so
that
they
could
be
connected
adequately
to
whether
that's
a
case
service
or
a
kids
ingress
object?
Or
you
know
the
the
kate's
gateway
apis
are
kind
of
gelling
and
coming
into
place.
So
I
think
I
think
that
that
one
I
feel
like
that
has
come
up
in
in
a
few
different
venues
already
as
one
of
these
decouplings
that
we
could
be
arranging
with
the
system
we're
fully
integrated
on
top
of
kubernetes.
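A sketch of what that decoupling could look like: pointing an existing, non-CF ingress controller at a CF-managed workload, assuming CF stamps a stable, documented label on the workload's pods (the `cloudfoundry.org/app-guid` key below is hypothetical):

```python
# Sketch: Service + Ingress manifests that bind an existing ingress
# controller to a CF app, relying only on an assumed stable label.

def service_for_cf_app(app_guid, namespace, target_port=8080):
    return {
        "apiVersion": "v1",
        "kind": "Service",
        "metadata": {"name": f"cf-app-{app_guid}", "namespace": namespace},
        "spec": {
            # Selector relies only on the label contract, not on any
            # CF-internal naming of pods or deployments.
            "selector": {"cloudfoundry.org/app-guid": app_guid},
            "ports": [{"port": 80, "targetPort": target_port}],
        },
    }

def ingress_for_cf_app(app_guid, namespace, host):
    return {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "Ingress",
        "metadata": {"name": f"cf-app-{app_guid}", "namespace": namespace},
        "spec": {"rules": [{
            "host": host,
            "http": {"paths": [{
                "path": "/", "pathType": "Prefix",
                "backend": {"service": {
                    "name": f"cf-app-{app_guid}",
                    "port": {"number": 80}}}}]},
        }]},
    }
```

The same shape would carry over to a Gateway API `HTTPRoute` once those APIs settle.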
A
Great
thanks,
maybe
number
three
was
to
to
relate
back
to
heroku
and
twelfth
father
apps,
which
was
meant
to
have
no
persistence
and
to
have
a
backing
service.
Sending
persistence,
I'm
wondering
whether
now
there
is
use
cases
such
as
there
is
a
machine
learning
process
running
somewhere,
which
creates
creates
models
that
needs
to
be
consumed
by
front-ends
and
those
front-ends
are
suited
to
to
be
running
within
cloud
foundry.
A
But
the
backing
service
such
as
s3
or
streaming
or
database
is,
is
a
bit
heavy
weight
and
they
would
just
want
to
map
maybe
a
read-only
persistent
volume.
B: Oh yeah; so, Sugiyam, you're saying: right now we have the kind of read-write file-system volume service in CF today, with NFS semantics around it, and you're bringing up the idea of instead exposing a read-only file-system interface to some workloads, to allow them to interact with a set of data more efficiently than doing that directly over network calls.
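A minimal sketch of the pod-spec fragment such a read-only volume service might produce, where the writer (say, a training job) owns the PVC read-write and the CF app mounts it read-only; the names are illustrative:

```python
# Sketch: a read-only PersistentVolumeClaim mount for a consumer pod.
# The claim name and mount path are hypothetical.

def read_only_mount(pvc_name, mount_path="/var/data"):
    """Return the (volume, volumeMount) pair for a pod spec that
    mounts an existing PVC read-only."""
    volume = {"name": "shared-models",
              "persistentVolumeClaim": {"claimName": pvc_name,
                                        "readOnly": True}}
    mount = {"name": "shared-models",
             "mountPath": mount_path,
             "readOnly": True}
    return volume, mount
```

The placement concern raised above is real: a ReadWriteOnce volume pins reader and writer to the same node, so this pattern really wants a storage class capable of ReadOnlyMany or ReadWriteMany access modes.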
A: Yes, potentially to be able to reuse a persistent volume which is written by some other processes, some other workloads. And, yeah, maybe it gets tricky because of placement and all the subtle details; it seems easy at first, but it's complex, so I'm not sure about this one.

B: No, I think that's... that is really interesting.
A: Yes, I recall an SAP talk we had on Terraform, where the SAP folks...
A
I
think
they
were
using
kubernetes
for
getting
their
machine
learning
models
being
built,
and
they
were,
they
were
needing
things
like
keep
flow
and
gpus
and
low
level
controls
to
be
able
to
learn
to
to
make
those
machine,
learnings
models
created,
but
the
consumption
part
and
the
api
exposition
they
didn't,
need
the
the
full
complexity
of
kubernetes,
so
just
getting
those
files
and
and
to
look
to
load
them
in
a
python
library
and
serve
it
help
them
using
a
rest.
Api
would
just
be
sufficient,
and
so
maybe
the
boundary
would
be.
A: And by volume service, do you mean a Cloud Foundry service broker with volume bindings, or do you mean the Kubernetes persistent volume providers?
B: I was thinking of the Cloud Foundry abstraction that we have today, around having that kind of logically bound to an app workload. I expect that underneath, yeah, the broker would...
B
End
up
interacting
with
the
kubernetes
resources
fairly
directly.
I
suppose
that
would
that
would
have
to
flow
through
whatever
is
defining
the
actual
workload
to
make
sure
that
the
appropriate
volumes
are
associated.
To,
like
you
know
the
deployment
and
that's
pod,
spec
template
or
you
know,
whatever
else
is
going
to
be
backing
that
workload.
D
Kind
of
I
had
some
memory
around
a
kubernetes
based
persistency
service,
but
probably
that's
somewhere
over
in
the
incubator
or
in
the
cloud
foundation.
Unity
repository
isn't
an
entire
issue.
I
I
guess
the
other
remark
that
I
wanted
to
make
is
like
topics
like
volume
services,
but
then
also,
maybe
some
some
newer
additions
to
to
the
functionality
of
cloud
foundry
like
labels
and
and
so
on.
D
They
always
make
me
wonder
if,
like
we
would,
would
have
chosen
like
to
actually
implement
that
type
of
functionality
inside
cloud
foundry
or
if
cloud
foundry
would
be
running
on
top
of
kubernetes,
we
would
have
just
said:
no.
This
is
kind
of
outside
the
scope
of
what
we
built
cloud
foundry
for
so
previously.
Obviously
you
have
to
kind
of
compete
a
little
bit
with
kubernetes
itself.
D
I
think
that's
somewhat
geared
to
less
cpu
intensive
applications
and
more
io
bound
applications,
and
then
obviously
the
question
is:
do
you
enhance
that
model
to
do
you
make
it
more
generic
to
allow
also
cpu
heavy
workloads
to
to
actually
run
quote
unquote
inside
cloud
foundry?
Or
would
you
then
rather
say
no?
This
is
something
that
should
kind
of
run
next
to
cloud
foundry
and
if
you
have
them,
for
example,
a
rest
api.
A: The 80 percent should be made easy and the 20 percent made possible. And "made possible" could mean: you get access to the full Kubernetes API with all the complexity, because you need it, so you can afford to ramp up. But the 80 percent that don't need that shouldn't be exposed to this complexity, and they can keep the productive Cloud Foundry abstractions.
A
If
we
expose
pods
and
the
internal
details,
this
will
change
in
potentially
in
every
release.
So
we
we
will
break
people
that
that
have
access
to
those
details
that
they
will
break
their
work.
D
I
I
think
what
you
are
saying
is
what
we
have
defined
here
is
rather
like
implementation
apis,
and
I
think
you're,
asking
about
like
additional
apis
that
allow
you
to
access
kind
of
underlying
kubernetes
implementation
details
so
to
speak,
to
to
be
able
to
to
more
directly
interact
with
these.
That's
a
good
good
point.
I
think
I'm
not
sure
who
was
it,
but
somebody
kind
of
asked
about
like
what
is
the
cloud
foundry
api
anyways
right.
D
That
was
one
of
the
comments
that
came
in
recently,
and
that
was
more
referring
to
like
the
existing
cloud
foundry
api.
But
I
think
then,
on
top
of
that,
if
we
say
there
is
options
for
people
using
cloud
foundry
to
see
through
some
of
the
abstraction
yeah,
those
those
apis
will
also
need
to
be
defined.
A
Yes,
for
example,
the
the
apis
we
are
talking
about
is
the
environment
variables
that
applications
are
exposed
to
the
dns
discovery
that
allows
to
to
use
dns
endpoint,
to
discover
and
to
root
in
the
internal
network
control
plane,
so
that
that's
that's
and
yeah.
That's
the
two
main
apis
and
know
about
platform
during
time,
apis
for
applications
and
then
obviously
the
buildbacks
to
some
extent
buildbacks
have
their
own
contract
as
well
and
yeah.
B: I mean, yeah, there's documentation for, say, the Cloud Controller API, but I think a lot of the intent is certainly not all consolidated in one place, and many of the behavioral nuances of those interfaces inside of the build or the runtime environment are not very explicitly documented.
A
You're
talking
about
like
diigo
interfaces,
to
extend
the
system
or
are
you
talking
about
the
endowment
variables
that
are
surface
to
applications
at
runtime.
D: Also very subtle things, like how cell draining works if you update Cloud Foundry itself: what happens first, what happens next. I mean, that is also specified somewhere in the documentation, but not in a very testable way, so to speak. I'm not sure if there's any good...
B: ...life-cycle expectations under normal operation for workloads when the system is being updated. That may be specified somewhere else, even in terms of describing the behavior, but I don't think there's, certainly, any certification or compliance statement saying your CF system should or must do X when it's updating workers.
D: Yes, okay. And I think your comment referred to the namespace example, where I made a remark that at least this particular example of hierarchical namespaces, if they become any kind of reality in Kubernetes, is more like an implementation detail; it's not so much something that would be visible one-to-one to people using the system itself.
D
And
then
like
the
kind
of
duality
of
the
role
of
developers
using
acf
on
kubernetes
like
the
existing
ones,
that
are
probably
happy
with
what
what
exists
today
and
don't
want
to
have
additional
abstraction.
And
then
the
people
that
I
think
also
eric
was
referring
to.
Looking
at
the
topic
from
a
kubernetes
background
and
like
wanting
to
have
those
those
interactions
between
cloud,
foundry,
based
workloads
and
and
other
workloads,
and
how
to
kind
of
strike,
the
balance
to
actually
be
attractive
for
both
both
groups.
B: I would recommend... well, I think having to bat through several layers of cookie information and popovers is probably worth it to get to the content, as annoying and initially deterring as it is.
D: Okay, then the next one is on the, yeah, "CAPI v3 entities as CRDs". I made a comment, and Giuseppe is on the call as well, that at least I know the Eirini team is working on wrapping their entities into CRDs; so it's not directly the Cloud Controller entities, but kind of the thing that gets generated from the Cloud Controller entities.
E: Yeah, I mean, we started pretty much trying to replicate the Diego API as much as we could, and then we kind of had to replace some of the features of that API to be a bit more Kubernetes-native; for example, getting rid of all the callback-based workflows, replacing them with things that look a little bit more like events, or things that can just be, you know, watched, in the spirit of the way Kubernetes things are done. Other than that...
E
That
is
pretty
much
it,
and
this
would
be
the
workloads
orchestration
api
like
in
that
list
of
apis
that
we
would
have
to
solidify,
like
as,
like
you
know,
interfaces
that
can
be
implemented
by
different
backends.
I
think
the
the
arena
one
would
be
could
be
a
starting
point
for
for
that
api.
Given
this
opportunity,
it's
an
opportunity,
given
we
are
changing.
E
We
are
basically
rewriting
the
reading
api
right,
we're
porting
it
from
rest
to
to
lrps
to
sorry
to
crds.
It
is
an
opportunity
if
there
is
any
if
we
want
to
change
stuff
to
to
do
it,
because
it's
while
that
api
is
still
in
beta,
is
still
completely
experimental
and
hasn't
been
released.
E
So
like
we
should
try
to
work
from
from
the
two
ends
of
of
the
interface
to
see
which
one
like.
What.
What
makes
us
happy
as
a
good
interface
that
we
can
then
stabilize,
and
it
really
could
be
an
implementation
of
that.
E
The
idea
would
be
yeah
yeah
at
that
point.
Erin,
that's
already
the
case
like
they
really
have
any
has
an
api
that
can
be
consumed
by
any
anyone,
although
in
in
practice
it's
only
cc
consuming
it,
but
maybe
it
could
be.
It
could
be
a
case
of
really
not
being
the
only
thing
that
cc
can
consume.
If
we
decide
that
the
api
is
stable
and
standard
and
we
document
it
etc.
E: ...are definitely not supposed to be surfaced to users, because they're quite low-level, right; just like the Diego API. I think at the moment, if you really want, you can try to hit Diego directly, but the requests you would have to pass in would not be very ergonomic, right: you would already need a built image, and you would need to know a bunch of things.
E
You
need
to
know
how
many
processes
you
want
and
that's
all
work
that
the
cc
does
for
you
when
you
make
a
cc
request,
which
is
the
request
that
is
supposed
to
be
made
by
a
human,
so
yeah.
A
And,
and
so
maybe
to
the
challenge
is
that
we
do
need
all
the
features
set
that
cc
does
and
maintains.
Well,
we
do
need
to
maintain
the
safe
cli
and
the
cloud
foundry
experience
to,
because
the
population
associated
are
happy
with
that.
At
the
same
time,
we
need
kubernetes
users
to
be
using
native
custom
resource
with
the
same
level
of
of
abstraction,
yeah,
yeah
yeah.
E
Yeah,
so
those
would
be
crds
defined
by
ecc
or
whatever
component
we
decide
should
would
replace.
Is
here
so
so
cc
has
concepts
like
an
app
a
manifest.
I
don't
know
roots
whatever,
like
irini
has
cons,
concepts
that
are
one
layer
below
so
yeah
like
translating
the
copy
api
to
crds
is,
I
think,
is
a
separate
effort
from
from
the
irini
one,
although
of
course
having
a
ring
crds.
E
That's
when,
like
it
becomes
a
little
bit
more
annoying,
which
is
what
irini
has
been
doing
so
far
like
we
translated
these
the
cloud,
foundry
imperative
requests
to
declarative
requests,
kubernetes
and
that's
been
a
source
of
bit
of
a
mismatch
right,
so
going
declarative
end
to
end,
I
think,
is
going
to
make
things
a
lot
easier.
A
And
we
see
many
community
efforts
that
dynamically
generate
those
customer
resources
so
from
the
cross
plane,
kubernetes
community.
We
see
many
cloud
providers,
such
as
aws
gcp
azure,
that
automatically
generate
the
csds
from
their
open
apis.
For
example,
they
have
open
api
for
the
rest.
Api
which
cc
doesn't
have
htc,
has
its
own
format.
A: Maybe one avenue, maybe as a PoC, maybe just to get feedback from the community: the Terraform provider for Cloud Foundry is doing just that; it's wrapping the CC API in a declarative way.
A: Yes; on top right here you have the HTML, but yes, the definition of the provider. And then you basically have the Cloud Foundry application, platform, service instance, service binding: so that's for application developers. And we have org, space, buildpacks, the full isolation segments, private domains, quotas and all that. So all of this is already maintained by the community, and there is a thought in the Crossplane community to take a Terraform provider and automatically generate a CRD out of that.
A
So
maybe
that
could
be.
That
was
the
suggestion
I
was
making
could
be
a
way
to
in
a
cheap
way
to
generate
crds
that
exactly
match
cc
api
without
much
effort.
A: It still relies on the Terraform provider for Cloud Foundry being maintained, and I think there is plenty there to migrate from v2 to CAPI v3. But, yeah, I just wanted to point out this alternative.
E: Yeah, I guess it's a bit of a dangerous game to play. I think it's the same for Terraform: there is some state that Terraform has, or that CRDs have, right. CRDs are saved into etcd; so you create a resource, which is stored into etcd, and then there is a watcher that picks it up and makes a REST request to CAPI, which puts it in its own Postgres.
E
So
you
have.
You
constantly
have
two
pieces
of
state
that
need
to
be
constantly
synchronized,
so
it
can
work,
but
there's
always
the
risk
that
those
two
get
out
of
sync
versus
deciding
that
the
source
of
truth
is
the
crds.
E
And
then
you
can
have
a
rest
api
in
front
of
the
crds
to
manipulate
the
cds,
because
that's
what
kubernetes
has
as
well
right,
like
I
don't
know
if
it
can
be
called
rest
but
effectively,
it's
an
http
api,
the
one
you
use
to
create
the
cds
et
cetera
and
watch
them,
but
I
think
the
other
way
around
given
like.
If
you
keep
the
the
old
database,
then
you
have
state
in
front
of
procedure,
calls
in
front
of
state
while
you
can
get
rid
of
one
of
the
two
states.
C: Yeah, akin to our rewrite, pretty much. And just to add on to that: we do this in a couple of places already, where we have two sources of state. One of them is routes, where, because we have it in the database and in Kubernetes' etcd, we have to have syncers on both sides; that ends up being pretty expensive and kind of takes away from the Kubernetes-native aspect.
B: Tim, that's the case for the existing route support in cf-for-k8s today?
C: Yeah, yeah. Because at the start a lot of these issues were theoretical, and then we did actually see discrepancies, where people would make imperative changes through the CF APIs and they wouldn't be reflected in the route on Kubernetes, because of just network hiccups or weird ordering things. So we had to develop a syncer on the Cloud Controller side to, on a loop, classic Diego style, push data and clobber everything in Kubernetes to reconcile that.
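The "push data and clobber" syncer described here can be sketched in a few lines; both stores are modeled as plain dicts (standing in for CC's database and the cluster's route objects, which are assumptions for illustration):

```python
# Sketch: periodic full sync where the database side is the source of
# truth and the Kubernetes side is overwritten to match it.

def sync_routes(db_routes, k8s_routes):
    """Make k8s_routes exactly mirror db_routes: create missing
    entries, clobber drifted ones, prune anything CF deleted."""
    for name, spec in db_routes.items():
        if k8s_routes.get(name) != spec:
            k8s_routes[name] = dict(spec)   # create, or clobber drift
    for name in list(k8s_routes):
        if name not in db_routes:
            del k8s_routes[name]            # prune orphans
    return k8s_routes
```

Run on a timer, this repairs any missed imperative change; the cost noted above is that every pass touches every route, and resources split across several database tables would need a join before this step.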
C
Yeah-
and
I
imagine
we'd
have
to
do
something
like
that
for
every
resource
so
like
for
routes
alone,
that
ended
up
being
pretty
expensive,
but
to
do
everything,
especially
ones
that
like
routes
were
nice
because
they
were
isolated
in
a
single
table,
but
for
a
lot
of
other
api
resources
on
cloud
controller,
it's
stuff,
that's
split
across
tables
in
the
database
and
then
like
syncing,
that
gets.
I
know
this
is
implementation-y.
That
gets
even
more
difficult,
though.
A
Yes-
and
maybe
it's
interesting
to
raise
the
fact
that
cod
controller
maintains
a
kind
of
transactional
consistency
to
the
imperative
request
that
it
receives
and
splitting
that
into
independent
resources.
Sometimes
there
is
relationships
between
those
resources
and
so
getting
that
into
a
degradative
way
where
there
is
only
the
open
api
input,
validation,
which
is
possible,
but
the
more
consistency
check
is
always
as
asynchronous,
and
I
think
it
makes
it
makes
it
challenging
to
keep
it
simple.
A
E: Yeah, in general I think you just design differently, because it's not a relational data store; you can't just have different resources with relationships that you know will be kept consistent, like you would in a relational database. On the other hand, it's not as restrictive in terms of what you can put in one table or in one type, right; so usually, things that change together you put in one resource definition.
E
Hopefully
you
can
do
that
like
it's
not
always
so
easy,
and
then
there
is
this
concept
of
a
re
of
an
ownership
like
a
resource
can
own
another
thing
so
that
they
can
be
deleted
together
and
stuff
like
that
or
they
can
be
notified
when
things
change,
but
that's
pretty
much
it
so
you
have
to
think
you're
going
from
postgres
to
maybe
I
don't
know,
or
even
more
than
that,
you
design
around
the
lack
of
guarantees.
I
guess
because
you
can
do
atomic
changes.
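The ownership mechanism mentioned here is Kubernetes' ownerReferences: a child resource points at its owner, and the garbage collector deletes the child when the owner goes away. A minimal sketch, with made-up CF-flavored kinds and UIDs:

```python
# Sketch: attach an ownerReference from a child manifest to its owner,
# so deleting the owner cascades to the child. Kinds/UIDs are invented.

def with_owner(child, owner):
    """Append an ownerReference for `owner` to `child` (both dicts
    shaped like Kubernetes manifests)."""
    ref = {
        "apiVersion": owner["apiVersion"],
        "kind": owner["kind"],
        "name": owner["metadata"]["name"],
        "uid": owner["metadata"]["uid"],
        # Block owner deletion until the child is collected, and mark
        # the owner's controller as the manager of this child.
        "blockOwnerDeletion": True,
        "controller": True,
    }
    child.setdefault("metadata", {}) \
         .setdefault("ownerReferences", []).append(ref)
    return child
```

This is roughly the strongest "relationship" guarantee available out of the box; anything richer (cross-resource invariants, joins) has to be enforced asynchronously by a controller.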
B
Yeah,
I
I
wonder
if
it's
worthwhile
identifying,
maybe
what
the
biggest
sources
of
concern
are
there
that
might
even
be
informative
in
terms
of
like
the
system
design,
we
would
want
around
that
or
or
running
a
smaller
scope
spike
than
you
know,
trying
to
do
that
for
all
of
cc.
I
don't
know
if
we'd,
if
timber,
just
a
or
anyone
else,
who's
been
digging
into
some
of
those
details
over
the
past
year
or
so
has
anything
off
the
top
of
their
heads.
C: Yeah, we have a lot of theoretical thoughts on it. I believe in our backlog we have a spike to actually try and implement something that looks like the v3 apps controller, which does involve some relationships across resources, and see what that might look like; but we haven't worked on it yet.
B: Maybe, too, I think there's at this point a lot of precedent from the Kubernetes community itself, in terms of: you have all these resources that aren't actually coherent; what does the system do, or what do end users do, to reconcile that?
A: Yes, I think it's... we already covered that; I think it's a duplicate, and the...
D: There was another comment, by Pierre, talking about overlap with things like Knative. I was kind of making a comment around: Knative might obviously be one alternative implementation for workload orchestration, right; so that was essentially one thought behind why we said that workload orchestration is an API that could be implemented differently, maybe not even necessarily in terms of the containerized workload. But, yeah, I guess Knative is very close to being a good alternative implementation for this particular...
E: Yeah, mostly about the scope of the workload-orchestration API regarding multi-cluster. Just thinking about how this would look: we've talked about basically leveraging isolation segments to achieve multi-cluster, and I think at the moment the Diego API takes care of it; you just tell Diego, here's an LRP, put it in this isolation segment.
E
This
is
the
segmentation
segment
tag
or
something
and
diego
takes
care
of
keeping
a
distinction
between
segments.
So
we
could
do
that
in
arena.
But
given
the
api
is
declarative
and
we
still
need,
I
think
we
will
still
need
resources
for
each
process
running.
Then
we
would
have
to
duplicate
like
every
process.
Would
that
would
need
a
resource
on
the
on
the
control
plane
and
a
resource
on
the
actual
affected
on
the
actual
like
on
this
on
the
cluster?
E
That
is
that
we
want
to
deploy
to,
and
that
felt
a
little
bit
too
much
compared
to
maybe
teaching
the
cloud
controller
upstream
to
tell
like
okay,
which
cluster
should
this.
Does
this
belong
to
and
just
create
a
resource
there,
which
means?
E
The
deployment
of
stage
4
sets
that
it
creates
on
the
remote
cluster,
and
I
don't
know
how
that
will
work,
I'm
pretty
sure
it
can
work,
but
like
it's,
two
different
clusters
with
two
different
authentications
I've,
never
seen
a
controller
that
is
able
to
watch
resources
on
two
on
two
clusters.
At
the
same
time,
and
also
so
I
don't
know
if
that
is
possible-
and
also
I
don't
know,
if
you
can
do
this
with
two
separate
controllers,
I
think
you
need
you.
Maybe
you
can.
E
Actually
I
don't
know
but
like
it
would
be
a
bit
weird.
So
that's
why
I
think
like.
If
we
could
do
this
up
up
like
up
front,
it
would
make
things
a
little
bit
simpler,
given
how
kubernetes
works,
even
even
if
it's
a
bit
different
than
what
diego
does
at
the
moment.
D: Yeah, actually, I was about to say that the other implementation would have the elegance of not requiring Eirini to be deployed in each and every workload cluster; it would be deployed centrally instead. But obviously you're right: the implementation of that would be way more complex than just having it run in one cluster, watching one CRD and generating a bunch of others in the same cluster.
E
Yeah,
I
think
it's
very
it's
kind
of
idiomatic
for
kubernetes
controllers
to
just
watch
stuff
in
their
own
in
their
own
cluster
and
even
in
their
own
namespace,
like
just
by
leveraging.
This
convention,
for
example,
irini,
doesn't
need
to
care
too
much
about
name
spaces,
etc.
E
You
just
deploy
arena
and
it
just
watches
you
tell
it
which
namespace
to
watch
it
just
watches
that
namespace
and
that's
it
so
yeah
things
will
get
a
lot
more
complicated
if
we
have
all
these
layers
of
irq,
where
you
have
multiple
clusters
with
multiple
namespaces
and
it
kind
of
explodes.
B: One thing I was just... okay, yeah. Well, I was just thinking this is maybe kind of interesting to intersect with the discussion we just had about representing the higher-level CF resources also as CRDs. I mean, would we still end up in a pattern where we need some sort of... We've now entered the Kubernetes-form land of resources, and so probably the path of least resistance in terms of implementation would be:
B
We
have
some
sort
of
controller
that
understands
what
we
think
of
as
the
cc
entities
today,
like
you
know,
apps
or
processes
or
whatever,
with
the
associated
set
of
either
context
or
explicit
selectors
on
them.
That
kind
of
give
hints
about
where
they
should
actually
be
realized.
You
know
if
we're
talking
about
a
multi-cluster,
topology
and
then
do
we
still
have
some
sort
of
controller
that
is
acting
on
those
resources
and
maybe
it's
driving
those
irini
or
irony-like
crds
in
you
know
either
the
same
cluster
or
other
clusters
simultaneously.
E
But
if
you
want
these
lrps
to
be
on
a
specific
cluster,
then
you
have
to
create
these
lrps
on
that
cluster
and
then
you
need
to
from
the
control
plane.
You
need
to
watch
them
because,
again
you
you
need
this
double
like.
You
need
to
watch
in
both
directions
like
when
the
lrps
change.
You
need
to
reconcile
them.
When
the
app
changes
you,
you
also
need
to
reconcile
the
associated
lrp's.
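The app-to-LRP direction of that reconcile can be sketched abstractly; the record shapes below are invented stand-ins for the higher-level app resource and the per-process LRP resources on a workload cluster:

```python
# Sketch: converge per-process LRP resources to an app's declared
# processes (the app-changed direction of the two-way reconcile).
# The dict `lrps` stands in for the workload cluster's state.

def reconcile_app(app, lrps):
    """Create/update LRPs for each declared process; delete LRPs for
    processes the app no longer declares."""
    desired = {f"{app['guid']}-{proc}": {"app": app["guid"],
                                         "process": proc,
                                         "instances": n}
               for proc, n in app["processes"].items()}
    for name, spec in desired.items():
        if lrps.get(name) != spec:
            lrps[name] = spec               # app changed: update LRP
    for name in list(lrps):
        if name.startswith(app["guid"]) and name not in desired:
            del lrps[name]                  # process removed from app
    return lrps
```

The other direction (an LRP changed or vanished on the remote cluster, so its status must flow back to the app) is the part that forces the control plane to watch the workload cluster, which is what makes the multi-cluster variant awkward.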
E
So
actually
we
probably
can't
get
away
from
this,
and
this
is
because
see
like
humanities
is
and
was
never
supposed
to
be,
used
like
this,
like
with
many
clusters
like
at
the
same
time,
right
controllers
are
just
designed.
The
scope
of
our
controller
is
always
a
cluster,
so
maybe
there's
no
running
away
from
that
right.
Yeah.
E: There are efforts of people basically replicating controller-runtime, which is a framework that is used to create controllers, for multi-cluster; so you can use it like a normal controller-runtime, but it's multi-cluster.
E: So it's not impossible technically. It might mean that we need to rebuild some of those core components that people just tend to reuse because it's a framework, but it's definitely not impossible, technically, because HTTP is HTTP; controllers should be able to work across clusters just like they work in-cluster. There's no reason they shouldn't.
A: Yeah. So Crossplane is defining CRDs with a kind of composition mechanism, and in this project they have an agent, so that one client Kubernetes cluster is able to define a custom resource locally that basically gets executed on the central cluster. The way it works is that they have an agent on the client Kubernetes cluster that calls home to the central cluster and fetches all the CRDs, to execute them locally and get their definitions.
A
And
then,
when
a
user
execute
asks
for
resource
on
the
client
cluster,
it
creates
a
resource
remotely
in
the
central
cluster
and
gets
the
response
back.
So
maybe
a
similar
pattern
could
work
in
the
sense
that
maybe
irini
could
be
defining.
E: The one I am talking about is a bit abandoned, but it looks pretty much exactly like controller-runtime. Again, it's doable; it's just that there's a risk we'd have to reinvent the wheel on things that are a bit tricky, like caches, stuff like that. But, yeah, doing this in CC, I think, would still save us one resource creation per process.
E
So
that's
not
bad.
So,
basically,
otherwise,
every
pro
every
lrp
on
every
cluster
would
have
an
it
would
have
a
correspondent
lrp
on
the
control
plane
cluster.
So
there
will
be
a
bit
of
a
profit
proliferation
but
yeah
the
the
multi-cluster
probe.
The
problem
we're
having
control
is
across
the
boundaries
of
a
cluster.
I
think
we
can't
we'll
have
to
face
only
one
way
or
the
other.
D
I
guess
I
have
to
cut
the
conversation
a
little
bit
short
because
we
are
already
slightly
over
time.
I
would
say
thank
you
very
much
for
for
feedback
and
the
discussion
for
now
we'll
take
another
pass.
I
guess
on
the
comments
that
are
in
there
and
then
take
it
from
there.
B
Yeah-
and
I
I
think
I
would
I'd-
really
encourage
everyone
to
think
about.
Like
you
know,
we've
kind
of
we've
been
identifying
this
perspective
on
what
we
think
cfon
keats
can
be
realized,
as
and
so
I'd
really
like
people
to
provide
input
on.
B
You
know
where
they
think
you
know
whether
that's
a
continuation
of
cfr
gates
or
whether
we
need
some
more
dramatic
changes
there
like
where,
where
you
would
actually
see
yourselves
being
able
to
put
some
of
this
into
you,
so
what
blockers
you
would
have
in
order
to
actually
start
using
a
system
like
this.
B: Because we've had that with cf-for-k8s right now: we have a few examples of people using it, but a lot of people have been holding back for various reasons, and we want to get past some of those barriers and get everyone aligned on a common picture here. So I think getting a clear picture of some of those requirements and goals that we would all have in mind, or things on which we have feedback...
B: ...you know, maybe from other people who aren't in these sessions but whom we view as important stakeholders in the community; that'd be really beneficial as we start talking more about some of the details of how we're actually going to implement this and carry it forward.