From YouTube: Cloud Foundry for Kubernetes SIG [March 2020]
A
Frigga, the recording is on, so then, officially: welcome to this week's Cloud Foundry on Kubernetes special interest group. The topic is Gabe talking about container-to-container networking. Before the recording started we were already in the process of figuring out what that might mean in more detail, and I mentioned that I would be interested in an overview of the current state of container-to-container networking as it relates to Cloud Foundry on Kubernetes.
B
I'm Michael. Maybe one note before you start, Gabe: I think we had two or even three ideas about what this could mean, or what people would want to talk about. One was in terms of feature parity: is it planned or expected, or even wanted, that we have feature parity for container-to-container networking between the CF-for-BOSH world and the CF-for-K8s world, with the application security groups, network policies, all this stuff? The second thing that came up was the currently strong requirement for Istio being in there: do we want to require Istio? Do we want to even expose Istio functionality to the end user? Have we decided on this, how are we deciding on this, and what's the process for that? And thirdly, something that came up very briefly: are we, or can we be, using container-to-container networking for some kind of migration support between CF for BOSH and CF for K8s?
C
That makes sense. I just took some notes on what you said: feature parity questions, specifically ASGs and networking policies; the requirement for Istio (currently it's strong; should we try to require it, and how should we expose it, if at all); what's our decision-making process for these things; and then, is there a way to help migrations by taking advantage of shared container networking stuff. I hope I got everything there. So, like I said, I don't have any slides, but I'm happy to talk through these things. Let's start with the easiest one, the feature parity.
C
My understanding is that we're looking to ship a thing that is production-ready but not yet at full feature parity, before we get to full feature parity. So the ordering is: right now we're shipping alphas of various things; then we're gonna try to make those production-ready with a limited feature set; and then sometime after that we'll get them to full feature parity with the Cloud Foundry on BOSH world. Does that make sense? Any questions about that?
C
Second of all, I think WebSocket support is going to be in there; I don't see any reason why we wouldn't be supporting WebSockets. And third, I don't know what that full list is. I can tell you the things that would be hard to make feature parity work for, and that we may therefore defer a little bit; not to say that we wouldn't do them, just that they're not going to be the first things we invest in.
C
Some of the things that are hard to do from an engineering standpoint: per-instance routing support, where, when a request contains the application instance identifier in a special HTTP header, the Gorouter is currently able to route that request to a particular instance of that application. It's possible to make that work with Envoy, but it's not supported as a first-class feature using the Istio control plane that we're currently using, so it might take some work to make that hit feature parity, and we haven't prioritized it for the short term.
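To make the per-instance routing mechanism concrete, here is a minimal Go sketch of the lookup involved. The X-CF-App-Instance header is the documented CF header for this feature; the backend table, addresses, and fallback behavior are hypothetical.

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

// backends maps "appGUID:instanceIndex" to a container address. The real
// Gorouter learns these mappings from the routing subsystem; this table is
// hypothetical.
var backends = map[string]string{
	"2b4d-app-guid:0": "http://10.255.1.10:8080",
	"2b4d-app-guid:1": "http://10.255.2.17:8080",
}

func main() {
	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// "X-CF-App-Instance: APP_GUID:INSTANCE_INDEX" pins the request to
		// one instance instead of load-balancing across all instances.
		target, ok := backends[r.Header.Get("X-CF-App-Instance")]
		if !ok {
			// A real router would fall back to normal load balancing here.
			http.Error(w, "unknown app instance", http.StatusNotFound)
			return
		}
		u, err := url.Parse(target)
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		httputil.NewSingleHostReverseProxy(u).ServeHTTP(w, r)
	})
	log.Fatal(http.ListenAndServe(":8080", handler))
}
```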
C
Another thing that would be hard to do from an engineering standpoint is route services, in the current way they're implemented in the Gorouter. The concept of having an intermediary service between a client and a server is definitely supported first-class within a service mesh, but the particular API surface that the Gorouter, the routing system, exposes, in which there are particular headers that have to be set and inspected and then passed back through, is sort of a bespoke protocol that we came up with in Cloud Foundry before any of this modern service mesh technology existed. And so making existing route services work with the new platform, with the exact same API surface, where the route service itself wouldn't have to change, would also be some work. So those are the two things I'm aware of where we're definitely saying: let's defer working on those until after we've gotten a production-grade version of the more fundamental routing and container networking stuff. Not to say that we won't do it, just that it's not in the short-term roadmap.
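For reference, the bespoke protocol in question is header-based. A minimal sketch of a route service that speaks it, assuming the documented X-CF-Forwarded-Url and X-CF-Proxy-* headers; the port, error handling, and omitted header copying are illustrative.

```go
package main

import (
	"io"
	"log"
	"net/http"
)

// A route service sits between the Gorouter and the application. The
// Gorouter forwards the request with the original destination in
// X-CF-Forwarded-Url, plus signature headers that must be passed back
// through unchanged so the router will accept the proxied request.
func routeService(w http.ResponseWriter, r *http.Request) {
	forwardedURL := r.Header.Get("X-CF-Forwarded-Url")
	if forwardedURL == "" {
		http.Error(w, "missing X-CF-Forwarded-Url", http.StatusBadRequest)
		return
	}

	// (Transform or validate the request here: auth, rate limiting, etc.)

	out, err := http.NewRequest(r.Method, forwardedURL, r.Body)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	// Echo the bespoke protocol headers back to the router.
	out.Header.Set("X-CF-Proxy-Signature", r.Header.Get("X-CF-Proxy-Signature"))
	out.Header.Set("X-CF-Proxy-Metadata", r.Header.Get("X-CF-Proxy-Metadata"))

	resp, err := http.DefaultClient.Do(out)
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadGateway)
		return
	}
	defer resp.Body.Close()
	w.WriteHeader(resp.StatusCode)
	io.Copy(w, resp.Body)
}

func main() {
	log.Fatal(http.ListenAndServe(":8080", http.HandlerFunc(routeService)))
}
```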
C
There's some, you know, deadline for a production-grade thing, and maybe that slips after that. One way we're thinking about this is: if we want to ship something that we call production-grade, and enable users to upgrade from that to subsequent feature increments, we don't want to break them once they're on the 1.0 thing. And doing something like moving from wide-open network policies to a locked-down network policy would be a breaking change for those users, right?
C
So I think what we want to do is: before we call something 1.0, we want it to have a default deny-all network policy. And then the feature of "I can set a policy that enables A to talk to B", basically adding a rule to the list of allowed rules, is a feature that can be added later. We don't want people to start taking advantage of all this wide-open network stuff and then have an upgrade kill that for them.
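In Kubernetes terms, "default deny, then explicitly allow A to talk to B" could look roughly like the following sketch, written with the client-go types; the namespace and the pod label key are assumptions for illustration, not the project's actual conventions.

```go
package main

import (
	"fmt"

	netv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// defaultDeny selects every pod in the namespace and allows no ingress: an
// empty PodSelector matches all pods, and listing Ingress in PolicyTypes
// with no rules denies all inbound traffic.
func defaultDeny(ns string) *netv1.NetworkPolicy {
	return &netv1.NetworkPolicy{
		ObjectMeta: metav1.ObjectMeta{Name: "default-deny-ingress", Namespace: ns},
		Spec: netv1.NetworkPolicySpec{
			PodSelector: metav1.LabelSelector{},
			PolicyTypes: []netv1.PolicyType{netv1.PolicyTypeIngress},
		},
	}
}

// allowAppToApp is the later "add a rule to the allow list" feature: permit
// traffic from app A's pods to app B's pods. The label key is a
// hypothetical convention, used only for illustration.
func allowAppToApp(ns, srcGUID, dstGUID string) *netv1.NetworkPolicy {
	appLabel := "cloudfoundry.org/app_guid" // assumed pod label convention
	return &netv1.NetworkPolicy{
		ObjectMeta: metav1.ObjectMeta{Name: "allow-" + srcGUID + "-to-" + dstGUID, Namespace: ns},
		Spec: netv1.NetworkPolicySpec{
			PodSelector: metav1.LabelSelector{MatchLabels: map[string]string{appLabel: dstGUID}},
			Ingress: []netv1.NetworkPolicyIngressRule{{
				From: []netv1.NetworkPolicyPeer{{
					PodSelector: &metav1.LabelSelector{MatchLabels: map[string]string{appLabel: srcGUID}},
				}},
			}},
			PolicyTypes: []netv1.PolicyType{netv1.PolicyTypeIngress},
		},
	}
}

func main() {
	fmt.Println(defaultDeny("cf-workloads").Name)
	fmt.Println(allowAppToApp("cf-workloads", "app-a", "app-b").Name)
}
```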
C
That makes sense. And that's actually another area where CF service discovery is slightly different from the way things are in Kubernetes, right? The fact that you can have a custom domain. So there might be a little bit of engineering work just to make that DNS stuff operate the same way, with the same domains.
C
Right, yeah. So one of the advantages of being on Kubernetes is that we shouldn't have to build our own networking stack, right? We should be able to rely on whatever the cluster provides, so I'm hoping that works. I've talked about this idea inside of VMware, and I've heard a little bit of, like, yeah:
C
In theory the different CNIs all implement network policy the same way, but in practice you'll see, depending on which CNI plugin you're using, that policies have slightly different effects, and that can be visible to end users. I haven't seen that firsthand, but I've heard it from the folks who are working on the lower-level networking stuff. So I'm hoping we can just rely on the Kubernetes network policy API.
C
Okay, so that was a lot on container-to-container. I think on ASGs it's sort of a similar story. I don't think we're gonna have that block the 1.0, but it should be straightforward to implement by having something that reads from Cloud Controller and writes out network policy resources. Maybe we even get to that before we do container networking policy, because of the implementation ease; I'm tending to order things just by that.
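A minimal sketch of the shim just described, assuming the standard CF ASG rule shape (protocol, destination CIDR, ports) as stored by the Cloud Controller; the translation into an egress rule is illustrative, not the actual component.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"

	corev1 "k8s.io/api/core/v1"
	netv1 "k8s.io/api/networking/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// ASGRule mirrors one entry of a CF application security group as stored by
// the Cloud Controller, e.g.
// {"protocol":"tcp","destination":"10.0.11.0/24","ports":"80,443"}.
type ASGRule struct {
	Protocol    string
	Destination string // CIDR
	Ports       string // comma-separated; port ranges are omitted in this sketch
}

// toEgressRule translates one ASG rule into a NetworkPolicy egress rule that
// a small controller could write into a policy selecting the app's pods.
func toEgressRule(r ASGRule) netv1.NetworkPolicyEgressRule {
	proto := corev1.ProtocolTCP
	if strings.EqualFold(r.Protocol, "udp") {
		proto = corev1.ProtocolUDP
	}
	var ports []netv1.NetworkPolicyPort
	for _, p := range strings.Split(r.Ports, ",") {
		if n, err := strconv.Atoi(strings.TrimSpace(p)); err == nil {
			port := intstr.FromInt(n)
			ports = append(ports, netv1.NetworkPolicyPort{Protocol: &proto, Port: &port})
		}
	}
	return netv1.NetworkPolicyEgressRule{
		To:    []netv1.NetworkPolicyPeer{{IPBlock: &netv1.IPBlock{CIDR: r.Destination}}},
		Ports: ports,
	}
}

func main() {
	rule := toEgressRule(ASGRule{Protocol: "tcp", Destination: "10.0.11.0/24", Ports: "80,443"})
	fmt.Printf("%+v\n", rule)
}
```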
F
Just thinking about that kind of backwards-compatible, progressive rollout of features: that would be a denial of egress from app containers, and that seems like a difficult line.
C
A really good point. So for an initial GA it would be: now you can't talk to your network dependency, because we haven't implemented ASGs and we're doing the most conservative thing. Yeah, I think we've been imagining this sort of operator user interface, which is: hey, it's default deny, but if you're an operator, you have kubectl access; go ahead, write whatever policies you want, and by the way, here are the conventions we're using for labels on pods, so you know what kind of policies you might want to write, right?
C
Okay, so we could see reasons to have this not block GA, and to document it; like, as an operator, here's what you do. Yeah, okay, cool. I still think it makes sense for us to probably prioritize this. So ultimately, as long as we're talking about priorities, I want to share what we're thinking, I mean.
C
So those, plus ingress HTTP and HTTPS, and, you know, TLS termination with operator-provided certificates, are sort of the core features that we're thinking about for GA. We're also gonna be getting access logs working; I think that's currently in the cf-for-k8s release.
C
That's a good question. I know nothing about that; I'm not even sure how the application autoscaler collects metrics today. I know that there's a team that does logging and metrics for Cloud Foundry; they might be the better people to ask about that question. I could imagine that at the network layer, with the router, these are metrics that are emitted by the Gorouter today.
B
We'd have to talk to the autoscaler team, okay. I mean, for this, I'm not sure if we show any of that in Stratos; I'm just sort of thinking about all the things this touches, and the app autoscaler is the one that jumps out. Again, that's not my team, so I'd have to find out, yeah.
C
That's a good point, yeah. One of the things we're going through is: what's the surface area where we're trying to maintain exact API compatibility, and what things can slip a little bit. So, for example, access logs: we're gonna let those change.
C
They were printed in a space-delimited format that was kind of weird and vaguely reminiscent of Apache, and now they're gonna be JSON-formatted system log lines. But obviously the CF CLI is gonna stay the same, and application security groups are going to be the same JSON as before.
C
So on the metrics side: we're generally feeling like operator-facing things can change and developer-facing things cannot, and that's the rule I think we're following. Stuff like the metrics format as consumed by another component, like the autoscaler, is a little fuzzy for me. Maybe it should be okay for that to change a little bit; maybe they get a feature flag, like "which system am I running against?"
A
On the topic of the autoscaler, there's one other key thing, which is that at the moment it's actually a BOSH deployment. So I guess, apart from changing to, or kind of consuming, a different metric format, we also need to have a conversation about how that actually translates over to Kubernetes. I think we happen to have some contributors to that project, so I guess I'll take an action on that one.
C
All right, so that was about 20 minutes on feature parity stuff. I had down two other kind of high-level topics from what Michael mentioned. One of them was about Istio: our current dependency on it, exposing it to users, yes or no, and what's our decision-making process around that. Does that seem like a natural thing to move on to? Any other questions on feature parity before I go there? We can always come back. All right, so, yeah: currently we are using Istio to provide ingress.
C
We also recently started using Istio to provide automatic mutual TLS between all system components and all workloads. This is in CF for Kubernetes, yeah. It's amazing, right? We were so excited; it's the thing that people have been asking us for, for years and years and years, and we hadn't actually gotten it working yet, but now it's working on Kubernetes. So that's huge, and this is one of the things that I expect will make Istio a little bit sticky. I think we could imagine replacing that kind of functionality, maybe using Linkerd or some other service mesh technology that manages the sidecar per workload, and I'm definitely interested in exploring whether we could swap out one for another. Maybe just to start with: yes, there's a lot of uncertainty around the governance of Istio, and that's on everyone's mind in the open-source community. It's definitely something that people in VMware leadership are paying attention to.
D
Yeah, to drop the obvious buzzword, and I just have an interest here: this SMI, service mesh interface, thing is supposed to abstract across them, yeah.
C
That makes sense. I think there are (and this is really unfair, but) three buckets. There's ingress and fancy ingress, which could maybe take advantage of the Kubernetes Ingress object, or eventually a Kubernetes ingress v2 project, which is a thing that's in incubation. There's the service mesh interface as a potential abstraction. And then there's also just ambient stuff that shouldn't require any user configuration, for example mTLS everywhere; you shouldn't need an abstraction for that.
C
You should just be able to say: hey, service mesh, whatever you're doing, do that thing where you make everything mTLS. That would be ideal. So for the mTLS-everywhere thing I don't think we even need an abstraction, and for the ingress thing I'm hoping it's something like ingress v2 that we can evolve towards. Although, I don't know, I guess... yeah, I don't have anything intelligent to say about this one, I'm sorry. Yes: service mesh interface, maybe we should look at that.
D
The only user-facing thing I can imagine that would be a slight concern: I do think, for the kind of Cloud Foundry use case, it's super nice if it can be lightweight and run on your laptop. That would be really nice, and Istio burns a lot of memory and CPU, even on my 64-gigabyte, you know, 16-inch Mac, yeah.
C
So I think this maybe gets to the question of... so, to answer the first question, "should we require it": I don't care. It's doing some useful stuff for us right now; if we find other technology that can do the same stuff, great, let's consider swapping it out or making it optional. I don't have a dog in that fight. The question of exposure to end users, though, that's real. So:
C
The way we've been thinking about this is: we're not giving developers kubectl access, right? So they shouldn't be directly accessing the Istio APIs either. There was some talk (Shannon got excited, maybe a year ago) about: oh wait, what if we embedded the Istio config into the Cloud Foundry application manifest, or something like that? And we didn't go in that direction.
C
We didn't. So I think, as an operator: an operator has kubectl access and will probably be configuring Kubernetes secrets and, you know, Kubernetes network policies and stuff like that.
C
An operator could use Istio directly, and I think right now, in the documentation pages for cf-for-kubernetes, the only operator-facing things in there are maybe the certificate management things. Although I think even there, it's just: write a Kubernetes secret, and maybe reference that secret name from within your Istio gateway. So I think maybe that's the place where operators talk to Istio; we could consider trying to hide that.
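For context, the operator-facing step being described is roughly: create a standard kubernetes.io/tls Secret, then point the Istio gateway at it by name (the credentialName field in Istio's Gateway resource). A sketch with the client-go types; the secret name, namespace, and file paths are placeholders.

```go
package main

import (
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// ingressCertSecret builds the standard TLS secret an operator writes; an
// Istio Gateway can then reference it with tls.credentialName.
func ingressCertSecret(certPEM, keyPEM []byte) *corev1.Secret {
	return &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "system-ingress-cert", // placeholder name
			Namespace: "istio-system",        // gateway secrets live beside the ingress gateway
		},
		Type: corev1.SecretTypeTLS, // "kubernetes.io/tls"
		Data: map[string][]byte{
			corev1.TLSCertKey:       certPEM, // "tls.crt"
			corev1.TLSPrivateKeyKey: keyPEM,  // "tls.key"
		},
	}
}

func main() {
	cert, _ := os.ReadFile("tls.crt")
	key, _ := os.ReadFile("tls.key")
	fmt.Println(ingressCertSecret(cert, key).Name)
}
```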
C
All right: the decision-making process around that stuff. Gosh, I wish we had a formal decision-making process. I think our decision-making process is: let's use this thing, it seems to work. And, I guess, I outlined the principle of "don't expose it to end users just yet"; that's our decision-making process. Do people have decisions they want us to make differently? Or... I don't know; I'm not really sure what to say about this one.
G
I don't know. I need to drop off, but I was going to make a plug before I do: I'd love more people to contribute to this space. If you have people who would like to participate and help shape it, you know, more contributors in this space would be helpful for accelerating the timeline and making us more confident in our ability to do these things, yeah.
C
That's great, thanks, Julie. There have been separable chunks of work that I could imagine splitting off, especially to, for example, a Vancouver, BC-based team; any takers there? No?
C
Application security groups and C2C (container-to-container) network policy are both probably standalone from the current routing control plane stuff, in that either one of them could be built as a standalone component that reads from the relevant source of truth, CAPI or the network policy server, and then turns around and writes Kubernetes NetworkPolicy objects. Those two things could probably be built in parallel by teams that don't even talk to each other, and they would still end up working at the end of the day.
C
Okay. So the third thing on here, which is super interesting, is the idea of using container networking for migration. You know, I hadn't thought about that until someone just brought it up, so maybe, I don't know, someone want to talk at us for a minute or two about how you see that working?
C
Yeah. So, I should say, even Linkerd is moving in this direction of: put gateways on every cluster, and then have clients speak mutual TLS, with an SNI header, when they connect out to the gateway. That way you can basically build... it's not a VPN, but it sure looks like a VPN, and it operates at layer 4. So, right.
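A sketch of the client side of that gateway pattern, using only Go's standard library: dial the remote cluster's gateway over mutual TLS and let SNI (ServerName) tell the gateway which in-cluster service the connection is for. All addresses, names, and file paths here are invented.

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"log"
	"os"
)

func main() {
	// Client certificate: this is the "mutual" half of mutual TLS.
	cert, err := tls.LoadX509KeyPair("client.crt", "client.key")
	if err != nil {
		log.Fatal(err)
	}
	// Trust the mesh/cluster CA rather than the public roots.
	caPEM, err := os.ReadFile("cluster-ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	roots := x509.NewCertPool()
	roots.AppendCertsFromPEM(caPEM)

	conn, err := tls.Dial("tcp", "gateway.other-cluster.example:443", &tls.Config{
		Certificates: []tls.Certificate{cert},
		RootCAs:      roots,
		// SNI: the gateway reads this to pick the backend service, so one
		// layer-4 gateway can front many apps without parsing the traffic.
		ServerName: "app-b.apps.internal",
	})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	// From here the connection behaves like a plain TCP pipe into the
	// remote cluster: VPN-ish, but per-connection and mTLS-authenticated.
}
```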
C
Let me be clear there: those are Envoy sidecars, not Istio sidecars; that's an important distinction, yeah. Because one of the things that made this challenging to develop in the BOSH/Diego world, one of the things that slowed us down, honestly, was that those sidecars were deeply under the control of Diego, and the process of keeping everything alive, satisfying the use cases that Diego had for those Envoys while also putting them under the control of Istio, was quite a... pulling the carpet out from underneath the camel, I don't know. But that said: yes, on the migration side, I haven't been involved in the most recent discussions inside VMware, and I should probably get back into that stuff, because I think container-to-container traffic across different clusters can totally work, and you should be able to make that work even when the clusters are on different layer-3 networks. And I think maybe one hook would be: if you're making use of the existing container-to-container networking service discovery, for example, you can use that DNS as a hook. You can say: well, it was returning only IP addresses inside the BOSH/Diego cluster; now it's gonna start returning this external gateway, or maybe returning the address of an egress gateway.
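A sketch of that DNS hook in plain Go, assuming some record of which apps have migrated; a real implementation would live in the platform's internal DNS (BOSH DNS, CoreDNS, or similar), and all names and IPs here are invented.

```go
package main

import (
	"fmt"
	"net"
)

// migrated records which internal hostnames have moved to the new cluster;
// egressGateway is the new cluster's gateway address as seen from the old one.
var (
	migrated      = map[string]bool{"app-b.apps.internal": true}
	egressGateway = net.ParseIP("203.0.113.10")
	localPods     = map[string][]net.IP{
		"app-a.apps.internal": {net.ParseIP("10.255.1.10")},
	}
)

// resolveInternal is the hook: names still hosted locally resolve to pod
// IPs, migrated names resolve to the gateway, so callers need no config change.
func resolveInternal(name string) []net.IP {
	if migrated[name] {
		return []net.IP{egressGateway}
	}
	return localPods[name]
}

func main() {
	fmt.Println(resolveInternal("app-b.apps.internal")) // -> gateway address
	fmt.Println(resolveInternal("app-a.apps.internal")) // -> local pod address
}
```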
D
I mean, yeah, in the short term I'm on board with the general idea that the most important thing is to ship something that works and get 1.0 out. But there's an obvious thing here. In my head I was like: well, okay, look, the easiest thing, the simplest thing that could work: don't even bother having migration, don't have one cloud controller, just build a client-side tool that pushes everything over for you. But the thing that kills you is networking, right? It's all very well, but if you've got container-to-container networking, everything breaks in the middle of that.
D
Right, so let me just put it like this: let's say I've got two apps, two CF apps, and they both use container networking to talk to each other. How do I move them? What do I do? That's the simplest case, right? And now do that with, like, hundreds of customer applications.
C
Wait a second; the network connectivity between the clusters should make this a lot more tractable, right? You push another copy of one of the running apps to the new cluster; it can reach the other apps, because DNS and policy allow that, and then it starts working; and then you delete it from the old cluster, and then repeat, right?
D
Right, and that's exactly the core of this topic, right? If we could have all this, have the access work between CF on BOSH and CF on K8s, then yeah, you just blue-green everything over and you're done, right? It's all client-side. It's just otherwise much harder, I mean.
D
Right, I mean, if you go through the Gorouter, like, if everything still went through the Gorouter and there's no container-to-container access, it's trivial; I mean, honestly, you just put a load balancer in front of the whole thing and fail over from one cluster to another, right? And you've basically done it. I just don't know how to do that with two separate clusters where apps talk behind the Gorouter; you can't put a load balancer in front of that, yeah.
C
And to elaborate on that failover thing we'd been imagining: the earlier part of this conversation, which I'd been having inside of VMware, was like, you actually publish the list of routes from one of the clusters to the other, so that the front-end load balancer can actually be, like, a Gorouter or an Envoy.
C
An egress gateway knows: I'm receiving traffic from this client, and I'm going to, you know, follow this failover policy. And that egress gateway might itself be an Istio-managed Envoy proxy; like, you could deploy an Envoy proxy to Cloud Foundry on Diego, have it run managed by Istio, and treat it like an egress gateway. There are going to be some details to sort out, but I think that fundamentally that'll work.
C
You could cf push your Istio egress gateway to your Diego cluster, and that would be an application-developer-ish workflow; I mean, the operator could run it too. We could also make something that was more like... well, yeah, I don't know whether the workflow should be initiated by the application developer.
B
Yeah, but you still need... in a case where only half of your apps are gonna be migrated, you still need to be able to migrate that half of your apps, right? Like, I've done the retooling on app A, and you're stuck on app B, but app B still has to run on Diego because, I don't know, the new world is missing a feature or something that app B needs.
A
So, probably along those lines: I think I saw a document once where there were some multi-cluster cases described, but also the case where you would update from one Kubernetes minor version to another one. So I'm kind of wondering if there are parts of what we just discussed that could also be useful in these types of cases.
C
Yes, yeah. I think one question I have for the Kubernetes world is: do people imagine, if they want cross-cluster in the Kubernetes world, that all the pods can reach each other over the same IP network, or would there be, like, a NAT boundary around each cluster?
C
Yeah, that's... I mean, for what it's worth, a lot of VMware customers are not using that kind of software-defined network, right? So, yeah, I've also been operating under the assumption that you can't just assume NAT away; it is going to be there and you have to live with it. And the reason I'm asking that question is that I think it pushes us to a very similar model to the one we were just talking about, which is an egress gateway on each cluster.
D
Right, yeah. I mean, like I say, it's not the immediate concern, but the longer-term it gets, the harder a thing it is to migrate people over. And, you know, let's just say the quiet part out loud: one of the big advantages CF has versus all the various other things in this space is that it's got a ton of existing big customers using it. So getting them onto the new thing in a painless way is a pretty important thing, yeah.
C
One thing that I was pushing for inside of VMware was treating the migration thing as something we should start doing early, as opposed to putting it off, because you're not gonna really know that your Kubernetes thing runs well in production until you migrate production workloads to it. So...
D
Yeah, right. If you don't get people using it, you're not gonna get the right feedback to make the design good. Plus, the longer we have to support the old stuff, the more expensive it is, because we have to keep supporting it. At some point we've got a Bionic-stemcell-versus-Xenial-stemcell issue, right? Xenial goes out of support in a year, and if you don't get people migrated before then, you've got a pretty expensive transition on your hands. Yeah, yeah.