From YouTube: Istio Networking WG meeting - 2019-09-26
Description
- EndpointSlice
- Istio failover
- Config distribution status
A: The first one is a proposal to achieve Istio failover. I will probably pass it to Yan to discuss his proposal and debate it a bit, because there have been some discussions already. Second, John has a very interesting proposal, again about EDS, called EndpointSlice. From what I have read and from discussing with John, it actually matches the first proposal, so maybe we can merge these two and give them adequate coverage. Later we will also discuss config distribution status.
D: But this one is simple and provides context for the second, so it probably won't take too long. Basically, Kubernetes added a new API in the 1.16 release, as an alpha, called EndpointSlice, and the long-term goal is that it will replace the Endpoints object. Basically, it is Endpoints with a few more fields added, and it's designed to be extensible. It's actually not even part of the core API group anymore, because it's designed for things such as Istio to utilize it in their own ways.
D: So they added a couple of things, like topology, which has first-class support for locality, and the ability to attach arbitrary metadata. They also added different address types, so we could probably replace our current resolution model, where we have some hostnames that are resolved via DNS and then some IP addresses. There are some fields missing that we have right now, like the weight, the L7 protocol, and exportTo, but I think a lot of those we could add.
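For reference, a minimal sketch of what a 1.16-era alpha EndpointSlice looks like; all names and values here are illustrative, not from the meeting:

```yaml
# Alpha EndpointSlice as introduced in Kubernetes 1.16 (discovery.k8s.io/v1alpha1).
# The hostname/zone keys under "topology" are what gives first-class locality support.
apiVersion: discovery.k8s.io/v1alpha1
kind: EndpointSlice
metadata:
  name: example-abc123                    # hypothetical name
  labels:
    kubernetes.io/service-name: example   # ties the slice back to its Service
addressType: IP
ports:
- name: http
  protocol: TCP
  port: 80
endpoints:
- addresses:
  - "10.1.2.3"
  conditions:
    ready: true
  topology:
    kubernetes.io/hostname: node-1
    topology.kubernetes.io/zone: us-west2-a
```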
C: This is because Endpoints, the API used before, wouldn't scale to a larger number of hosts. We have exactly the same problem: EDS essentially doesn't scale to large numbers. If we get to a thousand, two thousand, ten thousand endpoints, mesh expansion, for example, becomes almost impossible, and the size becomes too large to send in protos and so forth. So that's how Kubernetes is going to scale, and it was actually inspired by Istio's API as a formal requirement.
C: It is sharding the endpoints into multiple objects, which doesn't mean things become incremental in any way, but anyway — that's the scalability win so far. There is no other alternative seen, from any point of view, because if you go to ten thousand endpoints, what we have today doesn't work.
D: So this doc was just giving a higher-level overview of how we can embrace this new API. The first step is just support in Pilot. We need to do this because otherwise, once people start using EndpointSlice, there are no more Endpoints objects created, and Pilot would not work at all. So we simply have to do this. It's probably not that hard, and this is in Kubernetes 1.16, so hopefully we could get it into Istio 1.4, but it is an alpha right now.
C: First, things keep working for our users if they use Kubernetes 1.16. Second, they will be able to use mesh expansion or other features via EndpointSlice and no longer have to use ServiceEntries: they will use a first-class Kubernetes API and, as a side effect, it will automatically be integrated and visible, so Prometheus would be able to scrape it, and so forth.
D: Yeah, that was my next point: once we get support, we can also potentially completely replace ServiceEntry with EndpointSlice. It serves the same functionality, but it simplifies a lot of things. We don't have our own CRD that we have to maintain, teach users about, document, and so on. Instead, we get first-class community support — especially things like Prometheus will work out of the box — and they're really almost the same thing, so it doesn't make much sense for us to keep both.
D: Here I give some examples of ServiceEntries and their equivalents as EndpointSlices. It's kind of hard to see on one screen, but if you look at them, they're effectively the same thing. So here's one for a DNS ServiceEntry, and here's what it would look like as an EndpointSlice — basically the same thing.
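The on-screen examples aren't captured in the transcript. Purely as a sketch of the kind of pairing being discussed (hostnames and names are illustrative), a DNS-resolved ServiceEntry and an EndpointSlice carrying roughly the same information might look like:

```yaml
# Today's Istio API: a ServiceEntry with DNS resolution.
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-svc
spec:
  hosts:
  - api.example.com
  ports:
  - number: 443
    name: https
    protocol: TLS
  resolution: DNS
---
# Roughly the same data expressed as an EndpointSlice. The FQDN address type
# is speculative: the 1.16 alpha API only supports IP addresses.
apiVersion: discovery.k8s.io/v1alpha1
kind: EndpointSlice
metadata:
  name: external-svc
addressType: FQDN   # hypothetical; not in the alpha API
ports:
- name: https
  protocol: TCP
  port: 443
endpoints:
- addresses:
  - "api.example.com"
```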
E: This one is more complicated; I imagine it would require some changes.

D: Yeah. So right now the addressType only supports IP, but it's still alpha, and I'm pretty sure we can get them to allow a hostname, so we can do the DNS stuff as well. I talked to some of the people on the community side, and they seemed open to that.
D: Basically, if you're going from a thousand endpoints to ten thousand, you're going to send on the order of hundreds of billions of endpoint entries in aggregate — it's just a basic n-squared problem. So we should come up with a solution in the near term to do some sort of incremental or batched EDS. One potential solution for this is what Yan is going to present.
D
If
you
could
have
some
sort
of
aggregate
cluster
that
has
so
many
sub
clusters,
so
we
have
like
her
HTTP
bin
cluster
and
then
that
would
be
composed
of
a
few
different
sub
clusters
which
have
their
own
independent
endpoint
sets.
So
that
way,
if
an
endpoint
gets
updated,
we
don't
have
to
send
all
10,000
endpoints.
We
can
just
send
one
chunk.
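In present-day Envoy, this idea maps roughly onto the aggregate cluster extension; a hedged sketch, with hypothetical cluster names, of a logical cluster fanning out to independently updated sub-clusters:

```yaml
# Sketch of the "aggregate cluster" idea: one logical cluster composed of
# sub-clusters, each with its own independent EDS endpoint set, so updating
# one endpoint only requires re-sending its own slice.
# (Shape based on Envoy's aggregate cluster extension; names are illustrative.)
clusters:
- name: httpbin
  connect_timeout: 1s
  lb_policy: CLUSTER_PROVIDED        # delegate balancing to the sub-clusters
  cluster_type:
    name: envoy.clusters.aggregate
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.clusters.aggregate.v3.ClusterConfig
      clusters:                      # ordered list of sub-clusters
      - httpbin-slice-1
      - httpbin-slice-2
- name: httpbin-slice-1
  type: EDS
  eds_cluster_config:
    eds_config: { ads: {} }
- name: httpbin-slice-2
  type: EDS
  eds_cluster_config:
    eds_config: { ads: {} }
```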
D: Another option would be first-class support for this in EDS itself. I think there have been some efforts around that, like delta xDS, but...
C: But it doesn't solve the problem, because it's only a sort of illusion of an incremental update: with one extra endpoint, the initial send is still the full list, and it's still probably larger than the gRPC buffer size. So it's a palliative, but it's not going to solve the problem.
I: EndpointSlice support for the Kubernetes adapter — that's fine, I don't think there's any debate there. But to replace ServiceEntry — I mean, are we deprecating ServiceEntry, except maybe for pod/VM mesh expansion and so on? ServiceEntry represents an actual service: it's an object that declares "this is the service, with such-and-such parameters and properties", while EndpointSlice represents a group of endpoints that belong to one or more services. So overloading that API to represent services, as shown in the examples here, is pretty wrong, especially because of how you can add additional external entries. And if there are actual use cases — if people come and say you have a ServiceEntry with 10,000 endpoints and you need some sort of sharding — you can always create additional referenced endpoint sets or whatever CRDs. But I don't think we've come across a case where somebody has come in and said: "I have 10,000 endpoints in one single ServiceEntry, and adding and removing them is creating an issue."
I: But as such — for this proposal, I mean we should definitely support EndpointSlice on Kubernetes. But even if we support it, I think it's of limited value until we update Pilot to take advantage of it incrementally downstream, because that's where the pain point in terms of the differential is.
A: Not fully, but I think it will reduce some of the Pilot CPU, processing, and pushes. And we can do it incrementally in terms of effort and the people working on it, because it's a different set of people working on Pilot versus Galley, which is the one working on MCP. So that would also need to change in Galley, to create the synthetic ServiceEntries from slices instead of from the actual Services and Endpoints in Kubernetes.
I: It's just internal work that we have to add anyway — yet another way of reading endpoints from Kubernetes, that's it. I mean, if they deprecate one API and add another, handling that is part of us maintaining the interfaces with Kubernetes, the same way we do it for other systems. Whether we use this in Galley and all the other things downstream — that's a separate discussion. But yes, we still need to add support for this.
A: Good point, but just like ServiceEntry and other things, the spec for EndpointSlice is not necessarily Kubernetes-exclusive. I mean, it can be used in any environment; it's a schema. So that's what we're discussing with MCP: if it is sent over MCP, it can be translated from any source.
A: So this is basically the endpoint handling, which is right now implemented in two places: in the Kubernetes plugin running directly in Pilot, and also in Galley. And what Shannon is saying is that instead of keeping on updating that code in Pilot, we get rid of it and go with Galley and MCP, and MCP can send the ServiceEntry with the mergeable slices of endpoints, which are the Kubernetes slices.
D: I don't think our goal as Istio is to add a duplicate API of a Kubernetes one, right? We have APIs like VirtualService because you can't express the same thing in Kubernetes. But now they have an API that's almost identical — EndpointSlice is literally inspired by ServiceEntry — so I don't think it makes sense to keep an API that's essentially the same thing as EndpointSlice and make our users go and learn another API that's subtly different and doesn't work with the rest of the ecosystem.
K: Remember, the service has two parts, right? It has the "collection of endpoints" part, associated with a name, and then it has the "how do I talk to it" part. The reason they're doing endpoint slicing in Kubernetes is that change velocity within large groups of endpoints is a scaling limitation. We're actually subject to the same scaling limitation, particularly when endpoints have labels: duplicating labels on every endpoint, fully denormalized, is very expensive for us, and it is only going to become more so as we try to model more.
K: You know, the reason they're doing EndpointSlice is to support a Kubernetes cluster that has thousands of nodes and hundreds of thousands of pods. We have the same physical limit, so it exists for a good reason and we share that reason. And decoupling — allowing for a functional decoupling of the "how do I talk to it" part of a service from the "what endpoints does it have" part — is probably a good compositional change. Now, we can model it differently from Kubernetes; we have an abstraction layer where we do that.
K: That's what's happening, right? There's the usability thing of "don't make me update two things if I'm doing something really simple", which is why we allow inline endpoints in ServiceEntry. But we need to support a model where that can be decoupled for scaling reasons.
K: I mean, Envoy has the same model as well, right? A cluster can have inline endpoints — direct resolution of its own endpoints — or it delegates to EDS, an endpoint service, which basically provides what it needs. This is the same logical composition model.
M: So what we are proposing is to extend the ServiceEntry to have some load-balancing hosts, which contain both the internal hostname and the external hostname, and to provide priorities to denote the preferred failover sequence. For example, in this example, the traffic will go to internal.foobar.com first, and if those hosts all fail, then the traffic will go to foobar.com, and we can even aggregate this.
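The proposal's exact API shape isn't shown in the transcript. Purely as a hypothetical illustration of the idea — this field does not exist in Istio — it might look like:

```yaml
# HYPOTHETICAL sketch of the proposed ServiceEntry extension. The
# "loadBalancingHosts" field is invented here only to illustrate the
# priority-ordered failover idea from the discussion.
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: foobar
spec:
  hosts:
  - foobar.com
  ports:
  - number: 443
    name: https
    protocol: TLS
  resolution: DNS
  loadBalancingHosts:          # hypothetical field
  - host: internal.foobar.com  # preferred: internal backends
    priority: 0
  - host: foobar.com           # fallback when the internal hosts fail
    priority: 1
```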
M: Why not do it in Envoy? For example, when Pilot is far from Envoy: Envoy receives an IP address, and that IP address may not be reachable or resolvable by Envoy itself, so it may need to do DNS resolution. So first, the implementation may be similar to the aggregate cluster, but the difference is in the way the EDS response is handled: the EDS response will not be applied immediately, because Envoy needs to do some resolution first.
C: And in general, the problem that we are trying to solve is that we have services that are implemented in local clusters and can be resolved via EDS, load-balanced, and so forth. But the exact same service can also be implemented by one or more DNS entries, where we do not have direct access to the endpoints and need to go through DNS resolution. And besides fallback, we also have the concept of locality-aware load balancing.
I: The other thing to note is that today you can just ship a strict-DNS cluster with IPs and a DNS name, and, you know, the IP addresses will just be a no-op for DNS resolution while the DNS names will get resolved as usual. That is a solution that requires just a small fix in Pilot, such that I would be able to say: hey, for this entry with the same host as the Kubernetes service name, I can add a bunch of other endpoints.
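As a sketch of that trick, with illustrative addresses: in a STRICT_DNS Envoy cluster, the claim above is that literal IPs pass through resolution unchanged while hostnames are resolved periodically, so one cluster can mix both:

```yaml
# STRICT_DNS cluster mixing literal IPs (resolution is effectively a no-op
# for them) with a DNS name that is re-resolved as usual.
# Addresses and names are illustrative.
clusters:
- name: mixed-backend
  type: STRICT_DNS
  connect_timeout: 1s
  lb_policy: ROUND_ROBIN
  load_assignment:
    cluster_name: mixed-backend
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            socket_address: { address: 10.0.0.5, port_value: 8080 }        # pod IP
      - endpoint:
          address:
            socket_address: { address: vm.example.com, port_value: 8080 }  # DNS name
```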
I: That merging logic doesn't exist in Pilot yet, but we have to do it anyway, and we can add it. And once we have that merging logic, it becomes a very simple thing of shipping a simple strict-DNS cluster, which has the pod IPs and the hostname for fallback.
I: I talked to the Walmart guys. This is not a large-scale scenario, nor is it one where they have thousands of endpoints. This is core stuff that's happening at Walmart — like the stores and so on — where they have a few pods running the service on Kubernetes, plus one or two VMs actually serving the same thing, and they simply want to be able to fail over from the pods onto the VMs when the pods are unavailable.
I: If you design by scale, you create a more complex solution for what is not the common case, and then it goes down a completely different path. Until we hit that scale, we shouldn't over-complicate the solution. And if the common case doesn't require a change — this is a case where we just don't even need that scale — then why add one?
I: A small fix in Pilot to allow merging ServiceEntries should be able to satisfy this use case today, with zero API changes and zero changes in Envoy or anything else. And once DNS entries in EDS are achievable, you can start shipping that same cluster as an EDS cluster rather than a strict-DNS cluster.
I: If you just wanted to do this whole sharding business and so on — yes, sure, that's also equally the case. If we make the EDS side have this incremental property, then we don't necessarily need it. But the point is that none of these options would require another cognitive effort for people to understand what this new type of resolution is, or to figure out: if I declare such-and-such here and there, how do I actually reconcile them?
I: Adding another type to this whole thing spirals into its own set of complexities. Worst case, if somebody really wanted this, they can always use the EnvoyFilter API to create the special clusters — the aggregate cluster — for those services where they want this hybrid kind of load balancer, which also eliminates the need to complicate the API with this kind of one-off scenario.
D: For the aggregate-cluster proposal, we could still merge them the usual way. Right now, today, if you have two ServiceEntries, one with DNS resolution and one without, Pilot will actually send the hostname in the EDS response, which completely breaks — Envoy rejects it. We can change that.
I: Locality-based failover — the idea is that you could declare a ServiceEntry where you specify the locality for your internal pods and a separate locality for your VM-based DNS endpoints, and then you just define a policy that says: fail over from the pods to the VMs when they all fail. That's it, and that way it works today. People are doing the same locality-based failovers and load balancing today, and the exact same principle applies.
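In today's Istio API this pattern looks roughly like the following sketch, assuming locality load balancing is enabled in the mesh config; hostnames and localities are illustrative. The VM endpoints are pinned to a distinct locality, and outlier detection is required for failover to trigger:

```yaml
# Sketch of locality-based failover. Pod endpoints come from Kubernetes with
# their node's locality; the VM endpoint below carries a separate "fallback"
# locality so it is only used when the pod locality fails.
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: httpbin-vms
spec:
  hosts:
  - httpbin.internal
  ports:
  - number: 80
    name: http
    protocol: HTTP
  resolution: DNS
  endpoints:
  - address: vm1.example.com
    locality: us-west/vm-zone        # distinct fallback locality
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: httpbin-failover
spec:
  host: httpbin.internal
  trafficPolicy:
    outlierDetection:                # needed for locality failover to kick in
      consecutive5xxErrors: 5
      interval: 1s
      baseEjectionTime: 30s
```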
I: There would be a total of three clusters that need to be declared, and nobody is talking about those three clusters. We have a bunch of properties on clusters — from protocols to TLS context to subsets and whatnot — and we would now have to start resolving the interplay between all of these properties across three different clusters: what happens when you define something on one but not on the others? For example, if I just aggregate a bunch of them together?
I: You can ask the Walmart folks: they wanted this functionality for extending the mesh, but this solution is much simpler than going with the aggregate cluster, unless there is a real, you know, showstopper failure issue, which has not come up simply from talking to them.
K: Listen, you're the one who made the argument that we should align with the existing Envoy pattern of relying on labeling, right? Because that's what's going to be used to select other features onto these endpoints. So if we are beholden to that requirement, then the only way to achieve that effect is DNS in EDS.
C: You just merge them — the cluster merges them, yeah, they're equivalent. Either you send a larger EDS response with more labels, or you send slices — which is what makes it a slice — and then you put them back together. The only things sliced are endpoints with labels, yeah, but you can slice it further if you choose to; that's kind of the nice part, and I think that's why...
K: And don't get me wrong, by the way — I actually think, if I were to choose, the aggregate cluster with merging is probably the better option, as long as the aggregate cluster can select the child clusters with more flexibility, because it probably scales better.
O: It was fairly simple, so I think what I propose is that you look at the document offline if you're interested. At a very high level, the 30-second pitch is: I'm proposing we begin hashing config versions in Pilot, using those hashes as nonces in xDS, and leveraging that data to help our customers understand when a config change has propagated out to the proxies. The details are all in the document and, I think, relatively boring, but I do address comments as they come in. If there are enough comments, I'd be happy to come back.
I: Let's do it, because this has been on my mind for a long time. Maybe we want to use something semantic — something in the version that actually allows us to track whether a config change propagated all the way to the data plane or not. The problem is that the nonces, as they stand, are per xDS type: in EDS there is one single nonce for the entire chunk, right, and one for the listeners, yeah.
I: The problem here is this, right: even if it's, let's say, LDS, it's a composition of multiple resources. So it's a question of at what granularity you want to track the update propagation. If you want to track the propagation of this VirtualService all the way down to the sidecar, that is a nonce in RDS and sometimes possibly in LDS as well.
I: I guess a hash is a decent way to do that, sure. But the point is that instead of hashing the entire context — which is not going to be of that much use, basically because, due to eventual consistency, some Pilots may have all the config and some Pilots may not, and each one is going to have a different cache — from an end user's perspective...
I: ...they will not see the hash. Instead, what would be a solution for this is if Pilots created the concept of epochs, which is like: at this epoch, this is all the configuration that this Pilot has. And we could have a linear ordering across all the Pilots — not necessarily capturing everything eventually — that the end user should be able to see, like: okay, this Envoy only has configuration from epoch one.
I: If we had this box where we could actually tell them, "these Envoys are on epoch one, which only has this configuration", that would be great. But instead of that, this idea would need a central repository of what the epochs are, and some form of linearization of moving from one epoch to the next.
A: I have a comment on this proposal — this is getting into very detailed discussions, and I'm sure not everybody has had the chance to read the proposal. There is one thing which I may have missed, which is the utility — what is this good for — and I actually found it in the document, right, so...