From YouTube: Istio networking WG meeting - 2019-03-28
Description
- 1.2 Release scope proposal from TOC
- https://github.com/istio/istio/issues/12551 What is expected behavior? How should we fix?
- Multicluster issues: https://github.com/istio/istio/pull/12793, https://github.com/istio/istio/issues/11841, https://github.com/istio/istio/issues/12830
- Readiness and Endpoints via MCP - promiscuous listener could solve the chicken-and-egg issue
- https://github.com/costinm/istio-consul
A: Okay, hello everybody, welcome to the Istio networking community meeting today. We have several topics. First, there will be a follow-up on the roadmap discussions we had in the past, and then we're going to discuss a bit some issues related to mTLS, multicluster, and the readiness probe. So there's quite a bit of an agenda; hopefully we'll be able to cover everything. I want to first of all share my screen. I'm not sure if Louie was able to join — okay, so I think Louie...
A: Okay, Steven, thanks for sharing the doc. Unfortunately, I cannot share my screen for whatever reason. So, if you could, please follow the link that is in the meeting notes and also posted in the chat. It leads you to a document called "Plan for Istio 1.2", and further down in this document many, many aspects are covered, especially as they relate to the code-move initiative; but somewhere in the doc...
A
Here
there
is
a
networking
section,
so
maybe,
let's
see-
and
it's
called
it's
cause
specifically
features
networking
and
I'm
I'm
highlighting
this.
Are
you
able
to
see
that
yeah?
Okay?
Okay,
great?
Thank
you
so
in
this
dog,
so
the
features
that
are
deemed
valuable
and
are
likely
to
pass
the
1/2
bar
are
first
of
all
related
to
scale
in
pilot
I.
Think
that
has
always
been
a
high
priority.
A: Second, it's the refinements to namespace isolation, as this relates to the user experience. So this is basically everything related to the Sidecar CRD that was released in 1.1; there are further improvements needed for that, so they are likely to meet the bar for 1.2, okay. And then there are a bunch of user-experience items, which are the elimination of the container port requirement, the localhost listening, and the refactor of the Pilot listener code.
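For context on the item being discussed, the Sidecar resource introduced in Istio 1.1 lets an operator limit which services a workload's proxy is configured to reach — this is the namespace-isolation mechanism the refinements target. A minimal sketch; the namespace name is illustrative, not from the meeting:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Sidecar
metadata:
  name: default
  namespace: bookinfo        # illustrative namespace
spec:
  egress:
  - hosts:
    - "./*"                  # only services in the same namespace
    - "istio-system/*"       # plus the control-plane namespace
```

Without such a resource, every proxy receives configuration for every service in the mesh, which is the same pressure the Pilot-scale item above is addressing.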
A: There are a few approaches related to that: the dependency on the port name for protocol identification, and what's listed in here — support for promiscuous ingress listeners. That's a bit of a new thing; I don't remember having discussed this before, and it ties in a bit with the listener item from before. It's basically the ability to configure a listener that operates in promiscuous mode, and it will solve many of them — or not many...
A
Some
of
the
issues
related
to
the
container
port
requirement,
as
well
as
some
of
the
readiness
problems
where
we've
been
having
okay-
and
there
is
also
like
it
marked
as
a
p2,
some
tooling,
related
to
the
multi
cluster
Federation
multi
cluster
and
generally
like
mesh
Federation
I.
Don't
know
why
this
is
p2,
it
should
probably
be
higher,
but
that's
to
be
discussed
with
the
TOC
okay.
So,
in
addition
to
that,
there
are
other
items
that
may
pass
the
DAR
and
it's
actually
specified
down
below.
A: It says what shouldn't be in 1.2, with some exceptions, and I'm highlighting this big block of things. So there are some exceptions. For instance, there is the Pilot endpoint work: injection of endpoints with the MCP protocol will likely meet the bar. Also everything that aligns with the namespace isolation we discussed already, and everything that's graduating an existing API to beta, will not have problems. So items that relate to improving the code base, making it more stable and all that, will meet the bar, okay. But we probably should not be doing general refactoring work that doesn't have an immediate need from the point of usability or reliability and doesn't improve stability, okay. On the other hand, my expectation is that any test should be able to pass this bar, okay. Are there any questions related to this?
B: Andrew — I'm wondering, there's work that I'm not really clear on the status of, but some refactoring between Pilot and Galley: is that sort of done? Is that in scope for 1.2? What was the feeling on that?
A: As you know, Nino and Nate are working on this; there are various things to be solved. For instance, one of them is the readiness probe, and it's the third or fourth bullet item in the agenda today. So as we try to move towards this model, we discover various technical issues that we're solving one by one. I think the tricky part is to be able to move to that model without doing a big refactoring of Pilot. So it has to be...
D: I can jump in on this topic. I think the big problem we had in 1.1 was that we had a bunch of changes that took a very long time to stabilize, and many customers were unhappy because we were again introducing instability into their production environments. My understanding was that the goal for the code move is to make sure that what we deliver is improving in stability — you know, kind of making sure that everything we tell people to use in production is well tested, is mature, is solid.
D: Many of the changes on this list will likely be disabled by default, because having something become mature doesn't happen in two months. It takes a bit of testing — not only automated tests, but some experience in production. We're talking about the listener work, so we know pretty much everything on this list is a new feature.
D
I
personally
have
hideouts
that
it
will
be
production
quality
in
one
cycle,
so
I
want
to
make
distinction
between
stuff
that
will
go
in
as
alpha
quality,
because
it's
brand-new
and
didn't
get
enough
kind
of
stability
testing
and
the
main
goal,
which
is
to
have
the
features
that
are
already
shipped,
that
people
are
using
in
production,
be
super,
solid
and
and
kind
of
where
proof.
That's
my
take
at
least.
C: Yeah, that sounds great. Costin and Andrew, there's kind of been a late roadmap request for distroless containers, and actually somebody's done a prototype, and that seems to work. So I think we can take that up in the environments meeting, but we may want to consider adding that to this master list, because it impacts security as well.
A: This document is very new, right, so do spend some time to go over the various items — they may be networking, environments, or security — and please add to them. I'm not sure when the next TOC meeting is, if it's on Friday or the week after, but it's a good idea to bring your input as to what should make the bar.
A
It's
very
new
so
and
yeah
so
I
think,
like
that's.
The
update
for
now
related
to
the
roadmap
will
probably
have
like
one
more.
You
know
one
more
of
updates
in
the
next
meeting
just
to
reconcile
this
list
with
our
initial
list.
That
is
recorded
in
the
meeting
notes,
and
you
know
we'll
see.
I
guess
also
like
the
there
might
be
some
room
for
negotiation
with
the
TOC,
but
the
respective
stakeholders
need
to
rely
Creedy
to
the
work
about
that.
B: So actually, on that agenda item: someone else opened the issue, but I happen to have hit the same one. Basically I wanted to talk a little bit about what we see, and make sure we're understanding it sanely. And then there are, I think, two broad ways to go about solving it, and so I wanted to know what the recommended approach was.
B
Or
I
found
it
where
Kafka
wants
to
talk
to
it,
Kafka
that,
like
there's
a
controller
node,
it
wants
to
talk
to
other
nodes
to
do
this.
This
gossip
protocol,
one
of
those
notes,
is
itself
and
it
happens
to
talk
to
itself
on
its
external
pod.
Ip
address
not
one
to
set
0:01
and
when
it
does
that
in
this
configuration
what
happens
is
it
gets
directed
to
envoy,
but
it
envoy
maps
it
to
the
inbound
listener
and
not
the
outbound
listener?
B: And so, if you have something like requiring mutual TLS between pods, then only the inbound policy gets applied and the connection gets rejected, because you only went through inbound, which requires mTLS; you didn't go through outbound to do the client side of TLS and then inbound to do the server side. So that's broadly what is going on, as far as I understand it. And so then there are kind of two different things you could do. One is you could say it should go through neither of those: we should treat connections to a pod's own external address as if they are loopback connections. That's approach one in my mind. Approach two is: no, we should somehow make it go through both the outbound and inbound policy, so we should work with Envoy and iptables and stuff like that to make that happen. So that's kind of approach one and approach two.
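For reference, the mutual-TLS setup being described, in the Istio 1.1-era APIs, pairs a server-side authentication Policy with a client-side DestinationRule; the resource names and host pattern below are illustrative:

```yaml
# Server side: require mutual TLS on inbound connections.
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: default
  namespace: default
spec:
  peers:
  - mtls: {}
---
# Client side: originate mutual TLS on outbound connections.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: default
  namespace: default
spec:
  host: "*.default.svc.cluster.local"   # illustrative host
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
```

The failure described follows from this split: a pod dialing its own pod IP skips the outbound (DestinationRule) half, so the connection reaches the mTLS-requiring inbound listener as plaintext and is rejected.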
B: Right — and this actually relates to — Costin, I think I mentioned you in the bug here — because there is an iptables rule that is trying to send traffic back to Envoy when a pod talks to its own external IP. It's that the listeners don't — like, the most specific listener applies, and it ends up hitting the inbound listener instead of the service listener, I kind of...
D: Can the Sidecar be used? Because the Sidecar gives you fine-grained control over the ports and what gets intercepted, and you can basically use different ports. So for applications that have this kind of need, wouldn't it be better to recommend using the Sidecar explicitly? And no, I...
D
Because
that's
what
sidecar
is
so
so-called
white
box?
More
do
it
where
applications
are
just
listening
on
a
port
on
localhost
or
whatever
they
need,
and
sidecar
is
racing
on
a
different
port.
The
service
is
using,
you
know
and
I
think
you
can
have
both
of
them
completely
decoupled
with
no
idea
billions
of
be
in
between,
and
you
know
that
improves
scalability
performance,
a
lot
of
other
things,
because
you
bypass
IP
tables
and
also
gives
you
some
cuts.
The
other
side
you
lose
some
control,
I
mean
it's.
It's
yeah.
D: Again, that's one thing we need to evaluate — and we just started, because white-box just got released in 1.1 — we are starting to do measurements on the actual impact on performance and other things. Also, in many applications latency is super sensitive: Kafka is usually, you know, used for messaging where people are super interested in low latency.
B
I
guess
I'll
just
say:
I
think,
there's
still
an
unresolved
question
of
what
behavior
do
we
want
here
because
I,
if
you
want
so,
if
you
don't
use
sidecar
and
you
have
a
service
and
a
pod
tries
to
talk
to
its
to
another
pod
IP
like
another
pod
in
the
service,
it
will
go
through
inbound
and
outbound.
If
a
pod
tries
to
talk
to
its
own
external,
you
know
pod
IP,
it
will
only
go
through
inbound
and
that
seems
like
that.
I
pink
has
to
get
fixed
right
way
or
the
other.
So
right.
A
Also,
don't
think
that
relates
to
Kafka
I
think
we
should
be
able
to
have
a
consistent
behavior,
no
matter
for
the
other
parties.
So
I,
that's
one
thing.
The
second
sidecar
is
I
like
a
great
CRT.
The
question
is:
do
we
really
want
on
to
wanna
make
that
mandatory
for
any
pot
who
wanna
talk
with
itself
right,
I?
Don't
think.
That's
necessarily
the
case.
I
A: A corner case? Costin, I totally disagree with that being a corner case; I think it's really the basic case, right. But I kind of see what Andrew is proposing: he's basically saying that we should still go through the telemetry chain and go through Envoy and be able to do whatever we do, right. And there are some ways to solve that — like we talked in the past about separating the inbound and outbound listeners. So right now we have a...
A
There
is
a
bit
of
a
mess
with
our
listeners
right
so
like
by
separating
those
we
would
be
able
to
kind
of
force
like
the
request
through
go
through
first
to
the
outbound
and
then
match
the
inbound,
because,
right
now
they
there
is
no
distinction
right.
So
that's
one
way
to
solving
that
I'm,
not
sure
if,
for
instance,
they
change
to
listen
in
promiscuous
mode
would
help
with
anything.
But
there
is
definitely
a
need
to
look
at
this
and
to
see
how
it
will
interact
all.
D: Right, that work just started; it's not ready. When it's ready, it will appear, probably in 1.2, as alpha, because it's brand-new, and on top of that we want to add to it. Long-term, absolutely, I agree we should have everything, but in a tactical way, for 1.2, we need to kind of consider the reality of what we could have, what resources we have, what people are working on, and what could move. That's my reasoning.
D: What I'm saying is, if we can have a simple, small, safe set of iptables rules that implements this well, in the safest way, that's one of the solutions — that's perfect long term. Because there clearly are distinct requirements: for telemetry we definitely want to go through the sidecar, and there are different requirements around TLS and performance, where you may want to set up mutual TLS yourself. So we may let the user choose; we may have a richer API.
A: We should try in the future to keep those two separate — not every time we're discussing the technical solution should the code move and so on come up. Let's first try to find what the best approach is, you know, see when we could do that, and if we cannot do it, then we need to find some other compromise or creative solution.
D: But we didn't make this decision — that's one opinion. Some people will want to go through it because they want telemetry; another is that some people want performance, so they don't want to go through Envoy. It's not one-size-fits-all, one set of defaults for one or the other. I agree that many people want telemetry, but I don't think that everyone wants telemetry. Okay.
B: No — so there are a couple of technical approaches that I sort of just laid out before I went any deeper into this, some of which might be more feasible than others. Like, I think there might be an Envoy filter — a listener filter — that can help us make this a lot easier. I think maybe what I should do is go off and see how awful it would look to do this where it goes through both outbound and inbound, and we can come back and talk it over, and then we can decide.
A: My main concern is that we'll probably collide with the work that will happen in reorganizing the listeners, right. You mentioned the new filter: right now we don't have filter chains for listeners, but there is a proposal to actually implement those. So I think we need to see what the TOC decides as well, what can be done on listeners, and sort of tie your change in with that.
D: There is work in progress — for instance on the external traffic path, where traffic doesn't go through iptables; there is work on separating the filter chains; there are other changes that are happening — and we don't want to go in and make them harder to implement and put strain on those systems. That's what I'm saying: some gradual approach is probably safer.
I: These are just several small issues; I guess we could quickly discuss them. Okay, so, first of all: currently it's not possible to define a gateway for the local network in the cluster in multicluster, okay. So what I propose is to have some reserved word for the local network — to call it "local" — and this way it will be possible to specify the gateway for the local network.
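As a sketch of what this proposal could look like in the meshNetworks configuration — the `local` reserved word, the network names, and the addresses are all illustrative of the proposal, not a shipped API:

```yaml
meshNetworks:
  networks:
    network-1:
      endpoints:
      - fromRegistry: cluster-1            # illustrative registry name
      gateways:
      - address: local                     # proposed reserved word for the local network
        port: 443
    network-2:
      endpoints:
      - fromRegistry: cluster-2
      gateways:
      - address: gw.cluster-2.example.com  # illustrative remote gateway
        port: 443
```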
D: Let me add, coming back to the discussion: if we can have a test case that can be used in the qualification and end-to-end suites to verify that behavior, I'm super happy to have this change. But we really need testing for this feature, because it's currently completely uncovered by pretty much any test except manual tests.
I: Okay, so let's talk about the next issue. The idea here is to prevent ping-pong of a request between the gateways, okay. I think that ingress gateways should route traffic to the local endpoints only, okay. So if an ingress gateway receives a request from some other cluster, it should not send it back to that cluster or to some other cluster; it should let the request go inside only instead.
D: Well, in some cases it doesn't; it depends on how it's configured. Yes, you are right that in the general case it's probably happening. It's exactly the same thing I said earlier: we don't have enough testing in this area, so it's very likely that whatever happens, happens. And I guess the same solution applies — add tests to verify that it's happening, and fix it.
I: Okay, so I will work on it as well. And we have the third issue: I propose to use a single ingress gateway and a single port, 443, for all the traffic, okay. So for the traffic between the services, for the traffic between sidecars and the remote control plane — Mixer and Pilot — and also including access to the kube API server. This way we will have connectivity only between the gateways, and it will use a single port; it will be simple to configure the firewall.
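A sketch of the single shared gateway being proposed, using the TLS passthrough mode available in the v1alpha3 API of this era; the port follows the proposal, while the selector and host pattern are illustrative:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: cluster-aware-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway      # illustrative workload selector
  servers:
  - port:
      number: 443              # the single port proposed here
      name: tls
      protocol: TLS
    tls:
      mode: AUTO_PASSTHROUGH   # route on SNI, pass mTLS through untouched
    hosts:
    - "*.local"                # in-mesh services
```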
D: In 1.1 it is split into ports. I mean, I was going to propose the same thing for access to the kube API server — let's agree on that, that would be wonderful, and then I'd be very happy to see it. On 443: I think we may have had this at some point, but for the same reason — firewall configuration — we still need a way to configure a different port. I don't...
D: ...the default is 443, but we definitely need to test with a different port, because some people want 443 to be used only for normal ingress from the outside world, and that gives protection — or something on top of it could say you need to have a different port. I would personally choose the default to be a different port and allow easy configuration. I'm not against some people having 443 for everything, but I'm just saying it cannot be the only choice.
D: One problem with the current multicluster solution is that it relies on the API server being accessible on the internet, so that the pilot of the central plane — and the pilots — can query the API server to get remote endpoints. With this we can solve the case of customers where the API server is only accessible on an internal IP, not publicly, because in many clusters they do not want to make the API server publicly available.
D
Different
port
is
rated
with
exposing
ZK
server
through
the
Gateway.
It
was
a
negated
because
which
I
don't
expose
our
services,
but
API
server
is
not
exposed
properly
because
it
has
a
special
MPLS
and
other
stuff.
If
it's
about,
we
knew
about
it
its
route,
but
in
both
port
or
perfectly
fine.
It's
the
same
thing
so.
I: Okay, so just to clarify: there are three types of communication here — between the services, between sidecars and the remote control plane, and between Pilot and the kube API server. Okay, so I propose for communication between the services by default to use port 443, and for the two other types to use what is there today, and to also allow port 443.
D: On this topic, there is another port that exists in the mesh config, which is secure DNS, because it is also possible to expose kube-dns via DNS over TLS. It isn't hard-coded; it is a well-defined port that needs to be specified — if you can leave it in place, that would be perfect, okay. And that works with CoreDNS and other DNS servers that can answer queries over DNS over TLS, mm-hmm.
A: Now I have a comment related to how we name our features. So this is tagged as multicluster, but it is more related to the gateway-connectivity feature, right? We used to call it zero-VPN; then we all agreed to call it gateway connectivity, because this is about in-mesh service-to-service communication across multiple clusters, right — it's traffic that goes via the ingress gateway for the in-mesh traffic, correct?
D: The confusion has to do with the idea that we have three multicluster implementations. We have one multicluster implementation that has multiple deployment models targeted at different categories of users, and they use different technologies to communicate, but in the end it's all cross-cluster communication.
A: So we actually had that document — it was called the Istio taxonomy — which talked about the two models, right, single and federated; but I see our documentation doesn't really map to that. At the very top there, it doesn't actually say what it is: it's a multiple-control-plane topology.
A: So here I'm just thinking: the federation model seems to tie in also with policies and administrative domains, right. So I don't know who can take an action item to try to organize this so that it becomes more clear for users — the first one, you know, maybe just rename the menu entry.
D
I
think
I
mean
I
want
to
say
that
you
know
federated
is
probably
not
the
right
term.
I
mean
it's
it's
and
it's
not
entirely
clear
that
you
have
to
have
pirate
or
not
have
to
have
pilot
and
again
you
may
have
pilot
cash.
Is
there
all
kind
of
technologies
that
are
going
to
make
these
difference
a
bit
less
visible
to
users
and
I'm?
K: Costin, it's a spectrum — there's a whole spectrum from one extreme to another. You could do different things with Mixer, different things with Pilot. So it's really — that's part of our problem: we're trying to bundle this into two or three very specific deployments, but the problem is there's a spectrum, and depending on how you install it, you can get anywhere on the spectrum, and it's not a problem easily solved by one thing. I mean, it's a wonderful thing.
A: We just need a short summary, right, in the doc where it says multicluster installation — you know, just a short overview or something talking about the fact that this is a spectrum, and maybe showing the various possible deployments, and then saying we will give two examples of these: one would be, with a link, the gateway connectivity, and then the multiple-control-plane-per-cluster one — something like that picture.
A: So, you know, with this part of the decomposition we're trying to remove Kubernetes dependencies in Pilot, and right now we have a chicken-and-egg problem, because Pilot pushes listeners down to Envoy only when the endpoints — the Kubernetes endpoints — are up, okay. And the Kubernetes endpoints are up only when the pod is up, and the pod is up...
A: ...only when the readiness probe passes, and the readiness probe kind of waits for the listeners. For the injection of endpoints via MCP, we need to decouple Pilot from this readiness and Kubernetes dependency, to the point where, when it receives endpoints from an MCP source, it can really push down the listener for that respective endpoint.
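The cycle described above is: Pilot gates listeners on Kubernetes Endpoints, Endpoints wait for the pod to be Ready, and the pod's readiness probe waits for Envoy to get listeners from Pilot. A sketch of the probe side of that loop; the path and port follow the pilot-agent status endpoint of this era, but treat the exact values as illustrative:

```yaml
# Sidecar readiness probe (sketch). The pod is not marked Ready -
# and its Endpoints entry is not created - until Envoy has received
# configuration, which Pilot in turn gates on Endpoints existing.
readinessProbe:
  httpGet:
    path: /healthz/ready     # pilot-agent status endpoint
    port: 15020              # illustrative status port
  initialDelaySeconds: 1
  periodSeconds: 2
```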
D: The problem is, at start-up I would have many cases where the application starts first and then cannot get traffic, or it's in a bad state, because it cannot be configured — because you don't yet have all the replication from the service controller to Pilot and back to Envoy. That's half a second, probably less, but it matters; sometimes you have problems. And on the same note, if I can finish this, I...
D
Just
wanted
to
put
the
link
there
if
people
want
to
try
it
so
I,
don't
know
if
you
know
is
the
console
adapter
in
pilot
has
been
polling,
is
doing
repeated,
puddings
and-
and
it's
not
scaling
very
well
and
also
doesn't
help
incrementally
dear
support
and
I'm
trying
to
build
a
prototype.
Is
that
link
you
have
some
half
working
gamos
proof-of-concept
or
coastal
adapter,
implementing
MCT
protocol
and
using
listeners,
and
so
it
would
work
with
we
decrement
our
IDs,
every
genuine
CV.
That's
if
people
are
interested
in
counselor
or
want
to
help.
D: There was also a proposal where the Consul code in Pilot was modified to watch endpoints instead of polling, and modified to generate synthetic services; it has the same readiness problem as the other one, and I'm not sure it can do both. But Consul has its own readiness probe, so it's not perfect either. Yeah.