From YouTube: Istio Networking WG - 2018-06-07
Description
Agenda:
- Zone-aware LB/Locality Weighted Endpoint LB support
- Pilot Decomposition
- What is the status of multi-cluster for the case when clusters are on different L3 networks?
A
Hello everybody, thank you for joining the Istio networking community meeting. It's already 11:03, so I think we can probably get started; I see the rate of people joining has slowed down a bit, so I guess we're good. Could you please add your names to the agenda as attendees. Let's see what we have: a few very interesting topics. One of them is a discussion about zone-aware load balancing, and Liam and Christopher signed up for this.
A
The second item on the agenda is a document I'd like to share and present to the community about decomposing Pilot, and the third item on the agenda is... let me see, because I somehow just lost something here. OK, there it is: a question about the status of multi-cluster. I hope we will be able to cover all three items, so I propose we split the time into about 20 minutes, 20 minutes, and 10 minutes, or maybe less for the last item.
B
Yeah, that's going to work. When we originally scoped this version we had a list, and we had every intention of implementing zone-aware load balancing, but I've had some discussions with Shriram and we don't believe it's possible to actually do zone-aware load balancing based on how we name clusters in Istio. The reason for that is basically that we use services in Kube, most of the time, as the naming convention for clusters, and you can have multiple services pointing to the same pod in Kube.
B
Envoy has this requirement that you need to tell it which cluster it's in, and in our case we can't tell it which cluster it's in. So the current thinking is that unless we provide some hard convention for people who are running in Kube, one that says you can only have a one-to-one or one-to-many service-to-pod relationship, not a many-to-many one, we basically can't support it, which brings us on to the locality weighted endpoint stuff. I've been having chats with Shriram and Andrew; I know you had some use cases for this.
B
In terms of how we'd go about configuring it, I think generally we'd probably extend the destination rule's load-balancing setting; we'd have to extend it to support this. I think we have hashing as an extra one now, where you can configure it, and we'd probably want to set the locality configuration in there. So that's currently the thought process. I feel like I'm speaking into the abyss, so on that, someone else talk; I don't know if Shriram or Chris has any specific opinions on any of this stuff.
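A rough sketch of the kind of destination rule extension being discussed: "consistentHash" below is the existing hashing setting mentioned above, while "localityLbSetting" and its "distribute" weights are hypothetical, illustrative field names only.

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews.default.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      # Existing extra setting mentioned above: consistent hashing.
      consistentHash:
        httpHeaderName: x-user
      # Hypothetical extension sketched in this discussion: per-locality
      # weights for outbound traffic (field names are illustrative only).
      localityLbSetting:
        distribute:
        - from: us-central1/us-central1-a
          to:
            us-central1/us-central1-a: 80
            us-central1/us-central1-b: 20
```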
C
Okay, now you're muted. My question is: why does the Kubernetes service need to show up in the cluster name? If we want to support multiple services for a pod, and we want the pod's identity to be in the cluster name, why must the service that it belongs to be part of its identity? In fact, wouldn't we expect the assignment of a service to a pod to be something dynamic, something that wouldn't require recreating or renaming the identity of the Envoy?
D
It's not on the inbound path; this whole thing is actually on the outbound path. You need to know what service a pod belongs to, so when you're routing from a sidecar to a particular service, you need to know that this cluster belongs to that particular service, and when you do that, that's when you can actually go and fetch the list of all endpoints for that service and do all the load balancing stuff.
D
This whole thing is being generated dynamically, so we do not know a priori, right at bootstrap time, which cluster a pod will belong to. And when we don't know which cluster a pod will belong to, there is no way the zone-aware routing, the way it's set up currently, would work. That said, there is a much more sophisticated and more sensible solution that I think was recently added in Envoy.
D
It is pretty much the same as zone-aware routing, but with more knobs and tweaks; it's a much more hierarchical system. It's called locality weighted routing. Effectively you can have multiple failure domains, like a data center, a particular rack, a pod, and so on and so forth, and then you can route requests to other clusters and other endpoints from the same locality first and then escalate slowly.
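In Envoy's v2 API, that feature is opted into per cluster. A minimal sketch (the field names are Envoy's; the cluster name and addresses are hypothetical):

```yaml
clusters:
- name: outbound|9080||reviews.default.svc.cluster.local
  type: EDS
  common_lb_config:
    locality_weighted_lb_config: {}   # opt in to per-locality weighting
# The matching endpoint assignment, normally delivered over EDS, attaches
# a locality and a weight to each group of endpoints:
cluster_load_assignment:
  cluster_name: outbound|9080||reviews.default.svc.cluster.local
  endpoints:
  - locality: {region: us-central1, zone: us-central1-a}
    load_balancing_weight: 80         # prefer the local failure domain
    lb_endpoints:
    - endpoint: {address: {socket_address: {address: 10.0.0.1, port_value: 9080}}}
  - locality: {region: us-central1, zone: us-central1-b}
    load_balancing_weight: 20         # spill over and escalate gradually
    lb_endpoints:
    - endpoint: {address: {socket_address: {address: 10.0.1.1, port_value: 9080}}}
```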
D
When a pod is making a call to a different service, it has to go to some cluster, and when it goes to that cluster, we need to know the list of all endpoints that belong to that cluster, and the only way we would know the list of all endpoints that belong to that cluster is by looking at the service name to which that call is bound. See what I'm saying? This is not for the inbound path; this is actually for the outbound path.
D
It depends on services. I expect, in the Cloud Foundry equivalent, you need to know if a call is going to the reviews service or the ratings service, so you need to generate a cluster for each of these services; that's it. But what this thing means is that if you're launching a pod for product page, you need to tell the pod's Envoy from the very beginning: hey, you're launching a pod that belongs to the product page cluster. If you don't tell it, then it has no way of knowing, for the current zone-aware routing.
D
This is not related to the availability zone issue; I mean, that bug is there, but I think Gabe's question was: what are the other implications of embedding the service name of a pod in the Envoy cluster name? And my point is, we don't actually do that at all.
G
I think one of the key distinctions there is that with zone-aware, the Envoy instance itself is making a decision about, you know, am I in the same zone, where it essentially says prefer a local cluster versus a remote one, whereas with locality weighted, it's a central load balancer, which is providing assignments, making that determination.
B
Yeah, but the thing I'm talking about is specifically the Envoy cluster name, not the Kube cluster stuff. The service zone is already set in Envoy, currently, in Istio; we just don't do anything about it, because we haven't set a local cluster name. When I say local cluster name, I mean the specific Envoy cluster manager configuration parameter in the bootstrap, which is different to the service zone.
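For concreteness, the bootstrap parameter being referred to looks roughly like this (Envoy v2 field names; the node id and cluster name are hypothetical). Zone-aware routing only activates when the proxy knows both its own locality and its own cluster:

```yaml
node:
  id: sidecar~10.0.0.1~productpage-v1~default
  locality:
    region: us-central1
    zone: us-central1-a   # the "service zone", which Istio already sets
cluster_manager:
  # The bootstrap parameter discussed above, which Istio does not set
  # today: the name of the cluster this proxy itself belongs to.
  local_cluster_name: outbound|9080||productpage.default.svc.cluster.local
```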
B
Okay, so I don't know if we want to continue having a discussion here; we can have the conversation about the local cluster name stuff offline. In terms of the locality stuff, this is the locality weights at endpoint stuff like I said earlier. I don't know if we need a full design doc or just some kind of API proposal, but basically we're going to need to extend the load balancer settings, or whatever the exact name of the configuration is.
A
It would be helpful to have some sort of design doc. It doesn't have to be complicated or super detailed, but just introduce the concept so more people can look at it and understand what we're talking about. I think some of the concepts could be fairly unknown to some people, and others could have real use cases.
G
One thing to be aware of: I think Matt's been making some recent changes to the least-request load balancer to make it work better for weighting. That's one of the big changes that has happened recently, and we're about to actually merge a PR to improve the behavior there, switching from a power-of-two-choices algorithm to EDF scheduling. I'm not aware of any others currently in flight, and there is also some intention to make some of the load balancers more scalable in terms of the number of worker threads and endpoints.
D
I guess the other thing is that we would probably have a partial way of doing SNI-based routing with the original destination clusters, by just forwarding based on a header: we could have the local sidecar embed the DNS-resolved IP in an HTTP header and then make the egress gateway use the IP in the header as the destination where it routes. That's actually being done as we speak, which would help some use cases for this kind of routing.
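A hedged sketch of that forwarding trick on the egress gateway side, using Envoy's original-destination cluster, which can take its destination from the x-envoy-original-dst-host header (v2 field names; the cluster name is hypothetical):

```yaml
clusters:
- name: forward-by-header
  type: ORIGINAL_DST
  lb_policy: ORIGINAL_DST_LB
  connect_timeout: 5s
  original_dst_lb_config:
    # Route to whatever IP the sidecar resolved and embedded in the
    # x-envoy-original-dst-host HTTP header, instead of the socket address.
    use_http_header: true
```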
D
I think we can just solve it very easily by adding something to the destination rule where we can allow people to specify the localities for individual services, and the priorities and so on, and that should just give us the required feature, right, Louis? Locality LB routing does not require any knowledge of the source Envoy's locality, so it's just a matter of assigning weights and localities to subset endpoints within a destination rule.
F
That's just the API part, though; the complicated part, the intelligence, is the thing that assigns the weights. So yeah, it's pretty unlikely that we will have it soon, but there's certainly nothing to block anyone who wants to work on an implementation of it. We haven't figured out what the intelligence needs to be or where it would get its information.
A
Time to move to the second item on the agenda, which is decomposing Pilot. Let me just share my screen here. This is the design doc I shared with you, and I'm going to start, actually, not with the introduction; let's go straight to the meat. I'm going to start by making sure everybody is aware of the current design of Pilot and the current architecture with plugins. Basically, Pilot right now has multiple plugins for the different platforms: there's a plugin for Kubernetes services and endpoints, and it also reads CRDs from Kubernetes.
A
They all feed information of various types into Pilot, whether that's config or services or endpoints. As I depicted here, Pilot is a three-layer cake. The first layer, which I named the config ingestion layer, is the part where we ingest stuff. Then we have the core data model layer, which is basically our data store: we have the cache of services, endpoints, and CRDs that the Kube client maintains.
A
We also do some copying into Pilot's own data structures for storage, so we have a bunch of data structures there. The good part is that all these data stores have common interfaces for CRUD operations: on one side we have the add/update/delete notification handlers, and on the other side we have the get/list interface. This one in particular, the get/list side of the data access, is used intensively by the third layer of Pilot, which is the proxy serving layer.
A
So the proxy serving layer basically reads the data that we have in Pilot and produces an Envoy configuration, which it then sends to Envoy over the ADS protocol, a gRPC bidirectional streaming protocol. So what are some of the issues with the current model? For instance, we have a tight dependency on Kubernetes and, more specifically, on the kube-apiserver.
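For reference, one round trip of that protocol looks roughly like the following (the v2 DiscoveryRequest/DiscoveryResponse fields rendered as YAML; the values are hypothetical):

```yaml
discovery_request:            # Envoy -> Pilot
  node: {id: sidecar~10.0.0.1~productpage-v1~default}
  type_url: type.googleapis.com/envoy.api.v2.Cluster
  version_info: ""            # empty on the first request; later requests
  response_nonce: ""          # echo the accepted version/nonce as an ack
discovery_response:           # Pilot -> Envoy
  type_url: type.googleapis.com/envoy.api.v2.Cluster
  version_info: "1"
  nonce: "1"
  resources: []               # the generated clusters go here
```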
A
That's one thing. It's also kind of hard to add a new service registry to Pilot without recompiling Pilot; it basically forces people to learn the build process of Pilot. In addition, it's actually pretty hard to test Pilot: there was an effort to develop a simpler testing framework for Pilot, and they discovered it's not so obvious and not so simple to do, because of all this tight coupling to the various platforms. And of course there is not a clear contract between Pilot and Kubernetes or the other platforms; right now it's under-specified. Basically everybody adds stuff to Pilot as they see fit, so Pilot slowly ends up becoming a bloated control plane, and we would like to move away from that.
A
So we would like to decompose Pilot: make it more lean, less platform dependent, and basically easier to extend, so that other people, with proprietary service registries and proprietary config servers, are able to feed data into Pilot. This document, which I shared a bit late this morning, is proposing a mechanism for doing so.
A
It's proposing an API that we can use, and it's actually an extension of the existing API and protocol; it's really the mechanism by which we can achieve this decomposition. I can show another picture and talk a bit about this, but I want to just emphasize that it's all meant to be a gradual move.
A
It's not proposing a redesign of Pilot; it's something we would like to do incrementally to help with the end goal, which is a more stable, scalable, and performant Pilot that is also more extendable. And obviously it's open to feedback from the community. That's pretty much it.
A
The idea of the decomposition is to gradually, and I emphasize gradually, migrate the various plugins out of process of Pilot, into out-of-process servers, for instance Galley, which can aggregate other plugins, for instance the Kubernetes plugin. We can achieve this by introducing a very concrete layer, an API layer, around Pilot, and we already have existing Istio APIs for service entry, gateways, virtual service.
A
All these are absolutely sufficient right now to configure Pilot, and we would like to use those as the mesh configuration API, as the API layer around Pilot. We can carry those APIs over a transport protocol, and the best choice would be a gRPC bidirectional streaming protocol; for instance, the ADS protocol that Envoy uses is a very good choice for this. It's very generic: it can actually carry other things than listeners and clusters.
A
It can carry pretty much anything, and it's already defined, so we could reuse it. Here in this diagram that I'm sharing, I gave it a name; I called it the mesh config protocol, but in fact it's really the ADS protocol under the hood. We can obviously name those things as we decide in the community; the more important thing is to agree on the mechanism and the fact that we do need an API layer around Pilot.
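Illustratively, reusing the same request/response shape but with an Istio config type instead of an Envoy one might look like this (the type URL and payload are hypothetical, since the naming was still open at this point):

```yaml
discovery_response:           # config server -> Pilot
  type_url: type.googleapis.com/istio.networking.v1alpha3.VirtualService
  version_info: "42"
  nonce: "7"
  resources:
  - metadata: {name: reviews-route, namespace: default}
    hosts: [reviews.default.svc.cluster.local]
    http:
    - route:
      - destination: {host: reviews, subset: v1}
```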
A
In fact, the changes in Pilot to achieve this are fairly simple, in that we don't need to change all three layers of Pilot. Everything this document discusses regards the config ingestion layer, which is the first layer. We can also slightly change the core data model layer, in the sense that we can have one common core model, as opposed to the current dichotomy of services-and-endpoints versus configs with two separate interfaces; that can be easily achieved just via a Go interface wrapper.
F
Checking that resources that are pushed into Kubernetes have a valid schema: that code kind of ended up being smeared across a variety of components, and we'd like to put it in one place so that the other components don't have to deal with it and don't have to take it on as a production risk, and so we get some developer efficiencies around having one mechanism for dealing with the user-facing configuration side. So there's an effort being proposed, which we've pretty much forever called Galley.
A
That's a very good question; in fact, we thought a bit about this. Right now, for instance, it's going to be bidirectional; that's one of the reasons for choosing ADS as the protocol. It's bidirectional, and there is the ability to send acks back to the config server. I think ADS has that, and we can further refine the details about what will happen.
F
I think there are only so many topologies that make sense. You could imagine a world where you've got all your configuration sources and they're feeding into Pilot; well then, if you want to create another remote Pilot that's basically a replica of it, you don't want to redo the N integrations into N Pilots. It might be easier to just do the integration into one Pilot and then have it basically act as an intermediate replication mechanism. I don't think we would do bidirectional between two Pilots, though, no.
A
Sorry... okay, yes, yes. The remote Pilot is exactly as Louis said. What I haven't mentioned is that we plan to support multiple streams of config at the same time. So, for instance, some configuration can come from Consul as a service registry.
A
That would be the services and endpoints, and at the same time Pilot could get CRDs from Galley, let's say, or from some completely different adapter, and Pilot will be able to merge those as long as they have separate networking domains and organizational domains. So in the case of remote Pilot, for instance, the very first deliverable is that the remote Pilot would send something like service entries containing a service with the list of endpoints from a cluster, while some other configuration would come from Galley or some config server.
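A hedged sketch of that first deliverable: a service plus its endpoints from a remote cluster, expressed with the existing v1alpha3 ServiceEntry (the host and addresses are hypothetical):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: reviews-from-cluster-2
spec:
  hosts: [reviews.default.svc.cluster.local]
  location: MESH_INTERNAL     # part of the mesh, just in another cluster
  resolution: STATIC
  ports:
  - number: 9080
    name: http
    protocol: HTTP
  endpoints:                  # the per-cluster endpoint list being streamed
  - address: 10.4.0.11
  - address: 10.4.0.12
```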
F
Yes, it works that way with service registries. We haven't really formalized how we would do domain merging for non-service-registry stuff, so that's going to take some work. Right now you either have kind of one master source of, say, virtual services from Galley, or you have a guarantee that the two sources of that information are entirely disjoint and there's no merging needed.
F
Given that in the most likely first rounds we'll either have merging, which we already have for service entries, or disjoint sets of configuration sources, it isn't really a problem. But as this scales out, you could imagine that they start to overlap a bit, and you'll have to have precedence or other things. Yeah.
A
There are clearly many details to be fleshed out. There will be a clear evolution of the API as well, to be able to support things like multi-cluster scenarios and the ingress case. And my point here is not to get into the API part, because I know there are many people who can do that very well, and I think there will be a lot of discussion in reviews and other design docs; I just want to basically introduce the concept and get input on that.
F
We'd like to get Galley into 1.0, because we'd like all of the API validation work to be done by Galley, the user-facing API validation, because there's a clear distinction here between user-facing APIs and system integration concerns, like Cloud Foundry, where they have their own API, or Consul, where there are pre-existing systems.
F
So they're kind of two sets of problems. It's clearly hit or miss whether that would happen in 1.0 or not, but I think we'd like to make it happen. So we'd like to get at least the basic version of this API working with the existing resources that we have, do the minimally invasive change in Pilot, but have a fallback plan where that doesn't work. Yeah.
A
Right, so for 1.0, Jason and I would work on a reference implementation for this MCP/ADS server and client, and then on how it feeds into the notification handlers and all that. We can have a config flag to say whether to enable this mode of ingestion or not, and it's meant to be really gradual and non-invasive.
H
I was just going to say, I think this is an interesting approach, and I'm curious about your thoughts on how this impacts the existing multi-cluster support in Istio, where we kind of run the istio-remote chart on the secondary clusters and point back to the primary cluster's Pilot endpoint. With this model, I think it would be a lot easier: you'd run the secondary Pilot and point it back to the endpoint on the first Pilot for the secondary clusters, so we actually could potentially have Pilot running on both clusters, right?
F
Would each cluster need to have a Galley, acting as the local validator? If all those clusters were basically clones of each other from a configuration standpoint, then really they're just acting as replication, and the only thing that's unique to each cluster is the set of endpoints and services that are running there; the configuration is identical. So the only thing that needs to happen is the federation of endpoints, which is really what the current solution does, right?
A
Yeah, so in here, the bottom part is pick and choose, right; you don't have to install the bottom part. In fact this reduces the binary size of Pilot too, if you want, and it allows you to really customize your deployment. And I guess one of the good things is that it also decouples the release cycle, so people can write their own Galley and plug it into Pilot; as long as the API versioning and backward compatibility is maintained, we can evolve independently.
A
Obviously, yes. Actually, that's a very good thing you brought up, because Shriram said that sometimes there may be very strict latency concerns. In that case we can always run the out-of-process adapter on the same host as Pilot, but it's not a requirement; it can run anywhere, because it will talk to Pilot over the gRPC protocol.
F
Right, so that's why we're taking a very incremental approach to this, and unless things go perfectly, it will probably be a flag-enabled feature. Galley is already going to be present in 1.0, because it's already the webhook validator for the API calls. So it's kind of already there; it's just that its role is increasing.
K
Have you thought about the impact here on a-la-carte deployments? I think there are some logistics there: in particular, one of the common use cases we're seeing is people that want Pilot-only deployments, and that gets a little bit heavier weight to run, potentially. In other ways this actually makes it substantially easier; in the long run it should make it easier, yeah.
F
Right, so we have already done the decoupling in the install for Mixer. The only other component you're going to have to have is Galley, but there isn't really any way around that, because it does your API validation; without it your APIs would not even be usable, since they're not validated. So the only other coupling that I'm aware of right now is the coupling to Citadel in the webhook stuff. Yes, and if you wanted to work on something to make the decoupling better, that would be the one.
F
The other piece of work that probably isn't going to happen in 1.0 is some of the other things that we wanted to do with the node agent, around using Pilot to be the driver of configuration for the node agent, which might fix the latency around startup in getting the first config, or looking at doing that at injection time, all those types of things. I think there's limited runway to take on that type of work, but it's something that will come post-1.0.
F
Because clearly we'll do this API, and then there'll be another version of this API when we figure out what we really want to do with it, the usual kind of stuff. So I don't know; there's some coupling also of the node agent to Citadel, and that's the other place there would be something, yeah.
A
All right, good, we have seven more minutes. So, for this document, please read it in more detail and provide comments, and we'll move to the next item, which is a question from Gabe: what is the status of multi-cluster for the case when clusters are on different L3 networks?
C
Yeah, so we're thinking about Cloud Foundry, K8s, eventually Windows, and we're trying to figure out whether there's one Istio or multiple Istios. We're imagining that there's going to be a gateway for ingress to each cluster, so it'll be a flat network within the cluster, but then for crossing between clusters we're trying to imagine how that would work. So yeah.
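As a rough sketch, the per-cluster ingress point being imagined could be an ordinary v1alpha3 Gateway that passes cross-cluster TLS traffic through to in-mesh workloads (the host pattern and port are hypothetical, and this is only one way it might be wired up):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: cross-cluster-ingress
spec:
  selector:
    istio: ingressgateway
  servers:
  - port: {number: 443, name: tls, protocol: TLS}
    tls: {mode: PASSTHROUGH}  # keep mTLS end to end across clusters
    hosts: ["*.default.svc.cluster.local"]
```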
F
That's kind of the short term: we'll tell people how to bolt it together and wire it up. Obviously we want to provide a much better, more usable, more scalable experience around this going forward. I doubt that would get into 1.0, as it requires us to understand network topologies; we need to be able to label endpoints as being part of networks, yeah.
A
In fact, I wanted to add that it's not just networking, because Pilot also needs authentication; there is a cert to do authentication. So that will be part of this API substrate as well, and we definitely don't want to have another API; we want to use those APIs that are already there, yeah.
F
They'll just evolve over time, yeah. They may go through another version, maybe at some point later this year, as we figure out that they don't cover all the things we wanted to cover. And the reason I mentioned the authentication policy stuff is that the authentication policy declares which services are allowed to talk to which other services, so it's used for graph pruning, yeah.
F
In theory you could, but we certainly haven't made that easy. Okay, all right. So that's one of those abstractions that the API probably doesn't cover well enough today to make swapping it simple. Or you could obviously live at the same level as the Citadel API, so you could look like Citadel. Okay, yeah, that could work for us.