From YouTube: Cloud Foundry for Kubernetes SIG [April 2021]
E: Dan Jones sends his regards. He can't make it as he's in transit, but I see he just left a few comments and…

D: Okay, why don't we get started. Welcome, everybody, once again to this week's special interest group call for Cloud Foundry on Kubernetes. As announced, and as probably seen by everyone dialing in, the topic for today would be to go over the "Vision for CF on Kubernetes" document and hopefully have an interesting discussion and conversation about it.
D: So I'm kind of assuming everybody has gone through the document. My suggestion would be to go over the areas where we have open comments, then see how much time remains and whether there is additional feedback, questions, or aspects that we should incorporate. Does that sound good for everybody?

D: If so, let's see. Yeah, Eric already mentioned it: Daniel can't make it today, but he started leaving comments right before the meeting, so let's see what he had there. He's commenting on the topic of the convenient developer outcomes that Cloud Foundry today delivers via BOSH.
D: If we talk about the first group, the people using CF today, then one goal that we have is to make that move very smooth, and I think he also commented on the part of the document where we talk about that.

On the other hand, I believe we said we don't see making that move as smooth for the operator as a hard prerequisite. In other words, we basically felt that the fact that Cloud Foundry is pretty much the only open source community using BOSH also meant a high entry barrier for people who primarily wanted to see what Cloud Foundry is like, but the first thing they had to do was look into BOSH and learn how BOSH works in order to set up a system. So I guess what we said for moving Cloud Foundry onto Kubernetes is that we wanted to lean more towards doing things the Kubernetes way, acknowledging that there is no real one Kubernetes way of doing things, except probably for kubectl-applying a bunch of YAML files. I think that was the reason why we basically said that moving over existing developers using Cloud Foundry is one of the key goals, but retaining the same operator experience that people had with BOSH probably isn't, I guess.
E: I think one other thing that we were keeping in mind, and we talk a lot about it later down in this document: even with cf-for-k8s we were making a lot of decisions about the kind of subsystems to bundle into it. As we've been discussing what makes sense to set up CF on k8s for continued success and evolution with the cloud-native community, we've been recognizing that a lot of the interfaces to those subsystems need to be more defined, so that the subsystems underneath can change.

We'd like this all to be in a situation where, from the operational perspective, it's as aligned with the Kubernetes community as possible and has that low barrier to entry, because it's going to have to interoperate with those other components that are coming together to provide a more cohesive system.

And so, with both Quarks and then with the Carvel (k14s) tooling, I think we've been trying to impose a bit more uniformity on that operator experience to get a complete system deployed, and I think we're expecting to have to move back from that, probably to the detriment of that operator experience.
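For reference, a sketch of what that Carvel-based install flow looked like for cf-for-k8s around this time, assuming a generated values file named cf-values.yml:

    # Render the templates with ytt, then deploy the result as one tracked
    # application ("cf") with kapp; both are Carvel (formerly k14s) tools.
    ytt -f config -f cf-values.yml | kapp deploy -a cf -f- -y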
E: And, you know, to Baron's point, maybe that ends up meaning that the core pieces that we're still generating in the community have to be something that's available to kubectl apply, or something pretty lightweight on top of that.
D: But then, at the same time, that probably doesn't give all the configuration options that people would otherwise get from a full-fledged BOSH-managed system, with all the properties, and all the convenience of not having to maintain configuration values in different places and getting other values concatenated from wherever. So there's probably a certain trade-off, and I guess we felt like giving people with a Kubernetes background, specifically, an easier way to try things out and see.
E: Yeah, I guess the other thing that I think we've talked about before, and that we could keep in mind, is that the less opinionated we keep the core assets, the more room there is to do more sophisticated orchestration of those as part of, say, a more complete distribution. So that's also where I think we are.

Again, these are just the things that primarily Baron and I have been thinking about and recognizing, and we definitely want to hear the perspective from all of you as well.

A: Maybe we can just answer that point. Can you share a bit what directions?
E: So I think that we don't have anything definitive, other than maybe biasing towards the simplest collection of resources that would make sense. Some of that might even look like the reference implementation, or distribution, of a complete system being somewhat simpler than what we have with cf-for-k8s today. Right now, for example, that's bundling in Istio, which is itself enormously complicated and requires a lot of coordination and mechanics to upgrade correctly, and we might say: well, maybe it makes sense not to try to tackle that level of complexity in the reference implementation. If we just need something that demonstrates ingress routing, we could achieve that with one of the simpler ingress controllers, like Contour or NGINX, and that correspondingly would be a much less complicated system to deploy. It might not even need the power of the things that we've had with Carvel; we might be able to get away with saying: okay, this is just a bare Kubernetes Deployment or DaemonSet for that part of the system, so just throw it out there.
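As an illustration of how lightweight that could be, Contour publishes a one-command quickstart; a sketch, assuming its documented quickstart manifest URL:

    # Install the Contour ingress controller (with Envoy as a DaemonSet)
    kubectl apply -f https://projectcontour.io/quickstart/contour.yaml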
E: If you need anything more complicated, like drop-in default coordination of those resources, kapp deploy actually works pretty well: it gives you a little bit more control than kubectl apply, but it's not going to need anything more than that. And then, if there's a more complicated distribution that needs to incorporate a more complicated system, then, in order to retain the same power that we've had with BOSH, it might have to adopt a more complicated deployment toolchain on top of that.
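A minimal sketch of that trade-off, assuming the manifests live in a manifests/ directory:

    # Bare Kubernetes: apply the manifests, with no grouping, diffing, or waiting
    kubectl apply -f manifests/

    # kapp: the same manifests tracked as one application, with a diff preview,
    # ordered apply, and waiting for resources to converge
    kapp deploy -a cf -f manifests/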
A: Yes, I was just suggesting a wording refinement to the migration goal. To me, the migration of existing workloads on Cloud Foundry on VMs aims to be transparent for application developers: when their environment gets migrated over to Kubernetes, they should not have to make a new push of the application, or to re-import the marketplace services.

A: Yes, and similar testimonials from the Cloud Foundry community, for example Swisscom, have been shared, and I think there have been occasions, I think from PWS as well, when there was an infrastructure change, for example going from OpenStack to vSphere, or maybe PWS going from AWS to GCP, if I get it right; I'm not sure.

A: Okay, so maybe a wrong example, but I think there are some more examples of infrastructure changes in which the application developers' collaboration was very limited: they were just notified that there might be a downtime, or that the Cloud Foundry API might go read-only for half a day, so you wouldn't be able to do pushes during that half day. But after this maintenance period your workload would be migrated, and you don't have any actions to take. So to me, that would be the goal, one promise to preserve and deliver to existing users: that they don't need to do new…
E: …approaches. Yeah, I definitely hear the value that we've had in that statement of preserving the continuity of existing CF environments, and we have definitely been able to do that in the past in a lot of cases.

My concern is that it is a very high bar, especially given the complexity of the change in infrastructure that we're discussing here. I feel like we've got an opportunity, but we don't have an unlimited amount of time to keep moving this forward, and so I wouldn't want us to lose a good outcome that maybe requires a little bit more manual intervention, even from development teams, in order to try to aim for that perfect outcome of total transparency.

And I do think that there's a certain amount of incrementality that we could aim for here as well. Maybe early on, as we even have today, there's not a seamless transition from, say, a cf-deployment environment to cf-for-k8s, where you at least would need to re-push the app to a different environment, but in a lot of cases it would remain compatible.

So I think maybe it's worth discussing that: mentioning that sequence of outcomes, or that progression towards an ideal state. But I don't want us to get into a mindset where we're focused only on this totally transparent migration for app developers, and maybe it's going to take us two years to do that.
A: And if I recall the migration that we experienced at Orange: for a period of time we asked our developers to please push new apps on the new infrastructure, and to please tell us if they saw regressions. So this period of time, even though it's not actually a migration, is still useful. So yeah,

I do share your point. And maybe it would be useful for me to share what the remaining work is to be able to import an existing Cloud Controller database into cf-for-k8s, and what work is preventing this transparent migration, so that a newbie like me can understand why it's so much effort, or what's remaining.
D: I guess one thing is, and probably we at SAP are a bit special here, that it was clear pretty early on that we wouldn't be able to just take one Cloud Foundry system that we run today, a massive multi-tenant system, and transfer that into one Cloud Foundry running on one Kubernetes cluster. So I guess we've leaned more towards this:

there are specific teams working with that system that decide at their own pace how to move, and we don't move them all into the same target Kubernetes cluster; everybody gets their own cluster. I'm not sure how special or how common that scenario actually is. The other thing, though, that I've been bringing up, also seeing Georgi and Daniel here in the call, is: can there be intermediate states, even in today's BOSH-managed world?
D: Can there be an Eirini backend, similar to how we switched from the DEAs to Diego? Do you need to go the full mile of also taking the Cloud Foundry control plane and putting that on Kubernetes, or do we have a subset of people that already find value in just having the app that is pushed to the Cloud Foundry control plane end up on a Kubernetes cluster? I guess, as oftentimes, in reality it's more complicated than just saying

'let's do the same thing again', but that was at least one thought. And then I think the other discussion that we had, looking at what it would take to move an existing workload over to cf-for-k8s in particular, was that the jump from today's buildpacks to the cloud-native buildpacks is more disruptive than one would hope. So there was also always this conversation around:

can we retain the existing buildpacks in the cf-for-k8s world, or can we bring the cloud-native buildpacks to the BOSH-managed CF world, so that people could adapt their workloads over time and then have an easier move in the actual migration from BOSH-managed Cloud Foundry to Kubernetes-managed, so as not to have to do all of those steps at the very point of migration?
A: And I wonder whether isolation segments could be a way to sequence the migration of a multi-tenant cluster, yeah.
D: That was my proposal as well, right. Isolation segments provide you that means of basically saying, (a), I have a separate compartment that runs my apps, and, (b), there's also quite a fitting notion in how you have the CLI commands for switching organizations, switching the defaults of organizations, or even having an organization that is simultaneously able to push to isolation segment A and isolation segment B, based on how you push your application. So that was my thought as well.
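These cf CLI commands exist today; a sketch of that sequencing workflow, with hypothetical org, space, and segment names:

    cf create-isolation-segment k8s-segment       # register the new compartment
    cf enable-org-isolation my-org k8s-segment    # entitle the org to it
    cf set-space-isolation-segment my-space k8s-segment
    cf restart my-app                             # on restart, the app lands on the new segment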
E: Yeah, I was… and also, oh, go ahead, yeah.

D: I was just thinking whether some of the documents that we had earlier on contain that or not. I guess I'll have to do some digging to see if that's captured somewhere.

E: Yeah, I thought one of the documents you had produced, Baron, was discussing isolation segments, right?

D: Yeah, yeah, I'll find that out and reference it, yeah.
E: Yeah, and one thing that I was thinking about: let's say that we wanted to try using isolation segments as a mechanism to introduce that choice between backends, with the goal of migrating existing apps over.

I wonder if there might need to be a little more flexibility introduced in terms of the mapping of a CF space to an isolation segment, because right now each space can have only one isolation segment. That is mutable, but there is then kind of an irreversible transition from the existing or previous isolation segment to the newly assigned one: when you restart or re-push an app, it will land on that new segment.

But that's outside of the control of the space developer working in that space. So, again, for apps that are all running on Diego with the same buildpacks, that didn't seem like such an unreasonable workflow, because it should just be flopping over to new underlying infrastructure, but it's all the same flavor.
E: Whereas if there's something that's a riskier transition, it might be appropriate to provide a little bit more control: if something goes wrong, quickly switch back to the previous backend and then figure out what went wrong. That's kind of what we did with the special built-in 'diego' flag when we did the DEA-to-Diego transition: that would allow that two-way switching between the DEA backend and Diego. So I think that would be my only high-level snag in the workflow around using isolation segments to extend to a Kubernetes-backed…
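For comparison, that per-app flag was surfaced through the Diego-Enabler cf CLI plugin, which made the switch reversible; a sketch:

    cf enable-diego my-app     # run this app on the Diego backend
    cf disable-diego my-app    # fall back to the DEAs if something goes wrong

    # With isolation segments, the closest rollback today is at the space level:
    cf reset-space-isolation-segment my-space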
A: Because, yes, I understand the Gorouter had the ability to route shared domains to isolation segments, and private domains, and so this might conflict with the cf-for-k8s routing part, which, I guess, by default requires ownership of the domain, with the load balancer directly attached to the Kubernetes cluster.
E: Yeah, ironically, this might be a final hurrah for the concept of route services in the Gorouter: that might have enough flexibility to chain reverse proxies together across environments for that transition. But it's something that is definitely worth thinking about and exploring.
A: And to get back to the CNBs: I had a look at the CATS for cf-for-k8s.

Is that the reason why the services part of the CATS is disabled? If I get it correctly, on the cf-for-k8s CI the services part of the CATS is not enabled. Is that because the VCAP services work differently? Basically, my question is: would the VCAP_SERVICES bindings be preserved?

As I understand it, the CNBs are not using VCAP_SERVICES. Am I right?
D: I think I recall something in that direction as well: some of the CNBs don't listen to whatever services are bound to an app, so things like pulling in the database driver when you've bound a database service are probably something that works differently with CNBs.
E: Yeah, I'm not sure why that's disabled; I don't know if anyone who's been working more closely on cf-for-k8s happens to be here and would know. VCAP_SERVICES is certainly still flowing to the runtime environment, and so any service or dependency interactions in the app that just resolve at runtime using VCAP_SERVICES should all work fine.
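To make that distinction concrete: bindings still surface at runtime through the VCAP_SERVICES environment variable; a sketch, with a hypothetical service label:

    cf env my-app    # prints VCAP_SERVICES as the app container sees it

    # Inside the container, e.g. in a start command, an app can resolve
    # bound credentials itself at runtime:
    DB_URI=$(echo "$VCAP_SERVICES" | jq -r '.["my-db-service"][0].credentials.uri')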
E
There's
there's
been
some
discussion
of
this
and
in
I
think,
comments
on
cfr
gates
on
the
repo,
but
there's
a
subtler
class
of
integrations
that
do
happen
at
staging
time
to
pull
in
other
dependencies,
and
so
those
those
are
still
not
currently
supported.
Because
of
that
mismatch
of
information
flow
with
with
kpac
into
the
cnbs.
E
E
But
I
mean
my,
I
would
think-
maybe
it's
a
combination
of
like
it's,
not
that
nothing
in
there
works,
but
there
might
be
some
coincidental
issues
with
even
some
of
the
test
assets.
I
know
for
a
while.
We
had
a
we
had
that
ruby,
apps
wouldn't
run
at
all
or
wouldn't
run
great,
and
that
was
still
a
lot
of
the
test
fixtures
in
the
cats.
B: Understood. It was a conscious decision by the Paketo team not to include something like the JDBC buildpack part in the Paketo Java buildpack. So it's not going to come back, unless you do it like we did: we had an application that needed the functionality, and it was rather easy to provide a customized Java cloud-native buildpack that included the functionality again. But it was a conscious decision not to do that.

B: It's based on the fact that it's guesswork which is the right driver: you do know the backend database, but you have no idea of the version that's running there, or whether there could be any conflicts with yours. Actually, when we asked, they said that when it was introduced it was a bad idea, and they just didn't copy that bad idea over to the Paketo ecosystem. So I think that's not going to come back.
A: Would it make sense to deprecate that part of the Java buildpack even in cf-for-VMs, so that we are allowed to restrict the scope of the CATS to not cover this area, and still get most of the other buildpack features tested?
E: Yeah, I'm not sure if that's deliberately tested as behavior in the CATS; it might be in the Java buildpack tests. I don't know, at this point, about the value of altering the CF v2 Java buildpack to deprecate that behavior. Maybe it would make sense to deprecate it but effectively never remove it: to make it clear that people should not rely on that behavior, and maybe give them a way to turn it off so that they could verify they're not relying on it.

'Hey there, it looks like you're asking us to dynamically inject your database driver at build time. Would you like to stop doing that?'
D: Okay, so I believe that covers the workload migration topic. On that one, actually, I think we had an earlier SIG call where we discussed picking individual assets from the Cloud Controller, and I also recall that both SUSE and VMware had some open source projects that helped with pulling out part of the information.

The thing that I also recall is that service bindings, and especially applying service bindings again in the target system, didn't have an API, right? So even though you might be able to retrieve the binding information from the Cloud Controller database, you don't have an API to get that into a new Cloud Foundry system. Obviously you could write to the Cloud Controller database yourself, but that might be overly complex.
E: Yeah, I do recall you talking about just taking all that system-of-record state that's in the CCDB and transferring it over to another environment.

Yeah, and I think, it might still be in some of the Pivotal Labs (or whatever we're calling it now) organization, we've had a project called cf management that has done some of that.
E: I wonder, and maybe there are all kinds of reasons why this wouldn't work, but I wonder if it would work to have something like a proxy broker in the target environment that could go through the service broker API to re-establish service instances and bindings for apps.
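A sketch of what that replay might look like from the target environment, assuming the original broker remains reachable (all names hypothetical):

    # Register the same broker in the target CF...
    cf create-service-broker my-broker "$BROKER_USER" "$BROKER_PASS" https://broker.example.com

    # ...then re-establish instances and bindings through the normal OSB flow,
    # so the target CCDB becomes the system of record for them:
    cf create-service mysql small my-db
    cf bind-service my-app my-db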
A: But I understand the service bindings are indeed recorded in the CCDB, just not surfaced in the CF API for clients to replicate them. So maybe there is value in adding this use case to the CC API, maybe with restricted permissions, because there are security issues in exposing service bindings. In the CC API it's only in the latest version that service bindings get GET endpoints; up to now they were only disclosed in the POST response, at service binding provisioning, so it's just too late to fetch them afterwards.
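For reference, the newer v3 endpoints alluded to here look roughly like this; availability and required permissions depend on the deployed CAPI version:

    cf curl /v3/service_credential_bindings
    cf curl /v3/service_credential_bindings/<binding-guid>/details   # includes the credentials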
D: Okay, then let's maybe move on to the next comment. This one here is based, I think, on a text suggestion that was made when we wrote the document. I think we consciously didn't mention KubeCF or cf-for-k8s as the existing projects, but rather kept it agnostic and said 'Cloud Foundry on Kubernetes', so as not to prescribe one as the basis for the effort and the other not, or to suggest that there's a migration path from one to that target picture but not from the other. That's why I think we tried to choose a non-existing term.
D: Okay, then: 'integrate with projects and technologies from Kubernetes and other cloud-native ecosystems'. Daniel is asking whether the examples that we give are illustrative examples, or examples of decisions that have been taken. I believe it's the former, where we basically say, let's introduce an API; this probably relates to this example picture here.

Let's introduce an API definition for the key components that make up Cloud Foundry and then provide a community implementation, which might or might not be Istio for the ingress and service mesh technologies, but have that as an API that allows people to actually swap out the actual implementation. I think, Eric, you also mentioned that maybe for the community implementation of ingress, Istio is too heavyweight, right?
E: Yeah, I'd agree that it's the former in that comment: we're illustrating things where we've already demonstrated a capability to integrate, but this wouldn't be the final option, or indeed the only option.
A: If we compare the number of pages of the manuals for Cloud Foundry and Kubernetes, the Kubernetes one is very large because of all the other objects. Maybe I see one possibility, in which the Kubernetes API machinery for doing CRUD (create, read, update, and delete) would be extracted from the rest of Kubernetes, in a way that the default API versions for Kubernetes objects, such as Pods, Deployments, and Services, would not be a prerequisite to using the API machinery. Users could then just use kubectl and a subset of the Kubernetes API machinery for interacting with the custom resources, without being asked to be trained on the rest of the concepts.
A: So this maybe has some impact on the user experience, in terms of the tools, both the CLI and the web UIs, which would then not make the assumption that the default API versions for built-in Kubernetes objects are available to application developers; and as well that the documentation from Kubernetes be split into two parts, a bit similar to the way the Cloud Foundry community has split its documentation into different repos, so that they can be repackaged downstream independently of one another.

If there are existing Kubernetes community efforts to split the API machinery from the actual default API versions and default API namespace, then that might be a way that Cloud Foundry application developers could use this API machinery without being asked to get a full Kubernetes training on top of the full Cloud Foundry training.

Otherwise, I don't see how the promise of Cloud Foundry to keep things simple could be delivered, if we ask application developers to get a Kubernetes account and still be surfaced with the default API versions and API namespaces from Kubernetes.
E: Yeah, I think that's a great point about this fundamental tension: we've had a convenient but opinionated interface, especially for application developers, that has really simplified their workflows, but that they sometimes feel constrained by. And I also think that in almost all of those cases just exposing everyone to the raw primitives from Kubernetes, its core resources, is not going to be the most productive option for people most of the time.

I think that, as we're thinking about the evolution of CF towards Kubernetes, we might be able to arrange that more conveniently with, maybe even just, the role-based access control rules that can apply to the API resources, and maybe some amount of system-wide coordination beyond that, potentially with something like OPA and Gatekeeper,
E: if there are things that aren't easily expressed just as rules about roles, based on types and operations within a namespace. Because there, I think you could get that. It sounded like you want an experience where the Cloud Foundry developer's scope of interactions is effectively manipulating the set of CRDs that map onto the resources you would expect them to have access to in a CF space: builds, apps, processes, routes, service instances, service bindings, things like that.

I think you could probably arrange that just by having suitable representations of those resources at the same level of abstraction, and then not giving them permission to anything else, even in that namespace.
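A minimal sketch of that arrangement as a namespaced RBAC Role, assuming a hypothetical cloudfoundry.org API group for those CRDs:

    kubectl apply -f - <<'EOF'
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: cf-space-developer
      namespace: my-cf-space          # the namespace backing a CF space
    rules:
    - apiGroups: ["cloudfoundry.org"] # hypothetical CRD group
      resources: ["apps", "builds", "processes", "routes",
                  "serviceinstances", "servicebindings"]
      verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
    # No rules for the core ("") or apps/v1 groups, so Pods, Deployments,
    # and the other built-ins stay out of reach, even in this namespace.
    EOF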
A: At the same time, as an application developer, when I ask for shell completion on a kubectl get, I don't want to get Pods, Deployments, and all the built-in core resources, because then I'm polluted; I just want the Cloud Foundry namespaced APIs.
E: I mean, yeah, that's a good point about insulating developers from that level of detail. On the flip side, there are potentially cases where developers would get a better understanding of the platform and what's happening underneath it, especially if that goes only as far down as Kubernetes and the familiarity of those concepts.
C: I see this as supporting more kinds of developers: if we implement a CF API experience, people used to that can continue using the cf CLI and the clients they're used to, and don't have to touch kubectl, or maybe are not even given access to it; but the ones that should have, or need, access to kubectl are now enabled.
A: The question that comes with that is: would the cf CLI and CAPI eventually be deprecated in favor of kubectl?

Or would there always be two different UXes, and would the community be able to maintain those two different UXes in parallel? We know the cf CLI is costly to maintain, and reaching this simplified UX is costly, because every new feature needs development. So is it realistic to keep those two? I think it would be ideal for existing users not to be exposed to kubectl, but is it realistic in terms of the budget the community can spend to maintain those two UXes?
E: Maybe one thing to consider is that we as a community, and certainly we as VMware, are still putting resources into the Cloud Controller as it is today, and into the CLI, and we expect those to still be around and working for years. So that might be work that we have to do anyway to preserve that, and if we're now just talking about having another distribution that's routing those to a different backend,

maybe we might need some additional effort to support variations on that, and have that come through the API or be accessible in the CLI. But maybe that's an order of magnitude less work than having a totally separate implementation of those things.
A: You were mentioning that this is an example of things, just an example and maybe not a target, but…
D: When I suggested the example, I did not think about exposing hierarchical namespaces to the developer; my thinking was rather to say: in the future, if we are on Kubernetes, at some point in time hierarchical namespaces will be a thing, and then we should make use of them in the underlying implementation.
A: And so, when I hear, Eric, you saying that CAPI and the cf CLI aim to be maintained, at least by VMware, over time, then a different, parallel distribution for Kubernetes might have other concepts, targeted at Kubernetes developers, that differ from what's in the cf CLI and CAPI. Do I get this right?
E: Well, I think one thing we've also talked about, with the kind of modularity that we have in mind here and that we've talked about later in this document:

I think, maybe stepping back one piece: with a lot of the work that we've done with Eirini, we've been really trying to replicate, with a high degree of fidelity, the current domain model and features, especially of the app pushing and managing experience, down even into the container environment, and there have definitely been cases of that that have been really hard.
E: The example that I always come back to is instance indices, where that really forced Eirini very early on to work with StatefulSets, because those coincidentally had the same type of indexing. But then even exposing that as an environment variable into the application runtime container has required some work, and anyone coming from the Kubernetes community looking at that goes: that's really weird, you're running a stateless app on something that is explicitly designed to manage stateful workloads with a lot more control.
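A sketch of the mechanics in question: StatefulSet pods get ordinal names (my-app-0, my-app-1, ...), and the pod name is the container's hostname, so the CF-style index can be recovered along these lines (illustrative only, not necessarily how Eirini implements it):

    # HOSTNAME is the pod name, e.g. "my-app-2"
    export CF_INSTANCE_INDEX="${HOSTNAME##*-}"   # yields "2"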
E: And so I think even the Eirini team has started thinking about this a little bit, and there's been some discussion on some of the Eirini and cf-for-k8s issues about what it would actually be like to back these things with a Deployment instead. That definitely seems like a much more natural correspondence of a CF application onto the set of core Kubernetes resources.

So maybe, as part of that, it's worth considering what, if anything, actually needs to change in the CF API interface if there are going to be, say, arbitrary identifiers for individual instances coming back, and then, correspondingly, what would need to change in the cf CLI to display, and then refer to, those arbitrary identifiers instead of the sequentially indexed ones that we have today. And again, that's where, I think, there aren't actually that many places where that specific instance identifier comes through the API or the CLI.
E: So maybe it's actually not a huge amount of work to do that through that interface, and then through the CLI codebase and interface, but…

B: I mean, it might still be under control, with Spring being managed by VMware, but this seems to escape just the cf CLI and Cloud Controller domain, yeah.
E: Absolutely. And I think maybe the other part of that is that we would definitely, at least for the immediate future, want that to be an opt-in behavior instead of a default one, certainly for existing apps, for compatibility. And so, also, what would the kind of control on that look like? If you're an app developer saying: all right, I'm ready, I want to try out this Deployment thing, give me some arbitrary strings as IDs instead of numbers,

how would they flip into that? Would it be an annotation? Would we have a more deliberate API field on an application, or a process, that puts it into that mode, where maybe, again, it's just ignored on Diego (it goes: I don't know what this is), but on the Kubernetes backend, with Eirini, it would say: great, let me try running this as a Deployment and reporting that information back.
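If that opt-in went through the existing v3 metadata mechanism, it could be as small as an annotation on the app; the annotation key here is purely hypothetical:

    cf curl -X PATCH /v3/apps/<app-guid> \
      -d '{"metadata": {"annotations": {"apps.cloudfoundry.org/workload-type": "deployment"}}}'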
E: Yeah, definitely, that's a great point: if there are any carrots or incentives that we could identify to get people to try it out. That's definitely been a problem in the past, too.

I know we've just got a few minutes left. We'll certainly continue discussion on the document asynchronously, and then in this forum again in two weeks, but maybe in the last couple of minutes: if anyone has something they wanted to really urgently discuss, please bring it up.
D: Yeah, if not, then, as Eric suggested, let's continue discussions in the next call, two weeks from now. I think there are still quite a few things that have received feedback so far, and, as we've seen, Daniel has just started commenting, so probably we'll collect a little bit more over time and then talk again in two weeks.