From YouTube: 06.17.2020 - Service Mesh Hub Community Meeting
Description
Meeting Details
https://www.solo.io/blog/service-mesh-hub-community-meetings/
A
This is the first community meeting. We've been getting a lot of questions and discussion in the Slack channel since we got started, so we thought it would be best to host this as an open meeting. That way we can also record the conversation, share it, and let people discuss, especially anything related to a specific feature request or Q&A, and also use this opportunity to share what people are doing with it. Yeah.
A
A lot of familiar faces and names from the Slack. I apologize if I don't remember where everybody works. Michael, where do you work?
A
So we've got a couple of folks, let me see, Cisco and then Roku, but anyone else, as you join, please feel free to type your name in there. So first of all, thank you for being so active in the community Slack and for being interested in the project. In an effort to make this more community driven, be more transparent, and engage everyone in the conversation, we're kicking off the first ever community meeting. The goal is to have this roughly every other week, but we can see how that goes.
A
We can see how that goes based on whether there are a lot of topics or no topics; then we can say, hey, do we skip a week, what have you. But part of this is to, one, do kind of an open Q&A and have the agenda topics really driven by folks in the community. For example, we've got questions here from folks at AWS around App Mesh support and supporting the other AWS services.
A
So that's a great discussion point, and then just other general questions, as well as specific things that Solo is possibly thinking about as it relates to the roadmap. It might be something that we have thought about internally, or maybe we heard through a private channel from a customer or end user, and we want to use this forum to discuss with the larger group and get more feedback. So that's really...
A
...the purpose of these. I'll make a point of making sure that we post this into the community Slack on a regular basis, so that the agenda is driven by everyone. But I'd like to, one, see who's attending, and then go through these topic points. And so this is fantastic. Now that the welcome is done...
A
We can start with the first topic, Alex's question on plans for extending App Mesh support. I know that's something we've also talked about internally, so we can have one of our engineers talk about how we're looking at it and what we've thought about there, and then also open it up to the folks on the AWS team for more conversation.
B
Thank you, sure. So I can give an overview of how we're supporting App Mesh right now. The extent of the support is the following: once a user gives Service Mesh Hub their AWS credentials, and assuming the permissions are there, Service Mesh Hub can go and discover any existing App Mesh instances. It can also discover any existing EKS instances, and the full set of discovery functionality, figuring out what the workloads and services are, is all working.
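The discovery flow described here (credentials in, mesh instances out) can be sketched roughly like this. This is an illustrative sketch only; the names (`MeshInstance`, `discover`, the dictionary shapes) are hypothetical and not the actual Service Mesh Hub API.

```python
# Hypothetical sketch of credential-driven mesh discovery; names are
# illustrative, not the real Service Mesh Hub types.
from dataclasses import dataclass, field

@dataclass
class MeshInstance:
    kind: str      # e.g. "appmesh", "istio"
    name: str
    region: str = ""
    workloads: list = field(default_factory=list)

def discover(accounts):
    """Given registered accounts (credentials assumed valid), list mesh instances."""
    found = []
    for account in accounts:
        for mesh in account.get("meshes", []):
            found.append(MeshInstance(kind=mesh["kind"],
                                      name=mesh["name"],
                                      region=mesh.get("region", "")))
    return found

accounts = [{"meshes": [{"kind": "appmesh", "name": "demo-mesh", "region": "us-east-1"}]}]
print([m.name for m in discover(accounts)])  # ['demo-mesh']
```

The real system would also walk each discovered mesh to enumerate its workloads and services, as described above.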
G
That's awesome, and I was really excited to see that updated version, 0.5 I believe, that was done lately. So today we don't have GA support for things like virtual gateways and virtual routers, but it's available in the preview channel and we expect to move it to GA within the next few weeks. Do you have any plans to add support for those?
G
Also good. Right now the App Mesh support is limited to discovery and configuration of the mesh itself, which basically gives you a potentially single pane of glass kind of view over your mesh. But in order to get service-to-service communication across different meshes, egress would be crucial. Yeah.
B
Definitely, that's really exciting to hear. We've mentioned having App Mesh to App Mesh support. Just one question: are there any regional constraints there? Can an App Mesh instance in one region talk to an instance in a different region? Yeah, there should be.
C
Let's see, I had a question on that. Maybe it's the next topic down, but is there a roadmap posted somewhere for Service Mesh Hub and how some of these things are on the roadmap?
A
So part of that is this meeting; it's kind of starting that public process. We wanted to use this to have a live discussion and start here, and then what we can do is you can start filing issues, with regards to, hey, I'm interested in this. We want some of that discussion to happen here as well as move it into the repo itself, but this is kind of, you know...
A
What we can do is also more formally say, hey, let's follow this process, and we should actually put that into the repo itself, so that we're always doing it together as a group. But I think listing it in here is one place, and then also in the repo itself, as an issue.
D
Internally, or externally rather, we have design docs in the public repo, and you can basically do something we do internally but anybody can do: open up a pull request with a design doc that goes in that directory. Then we can discuss everything there, and the result will usually lead to a pull request afterwards.
B
If there's a substantial feature, yeah, I think raising an issue. I'll take an action item to create an issue template to help structure some of that, but I think an issue is a good first step. Then, if there's initial consensus and we know we need a complex new feature, we can make a call for a design doc. But as the very first step, a design doc is maybe a little bit too heavyweight, and we can scope it from there.
A
No, it's here, and thanks Harvey for saying he'll take up that action. So we can use, like you said, the issue template, and we could have some parameters on it. This is a joint process with everyone, so we'll start there, and then we can define what would require a doc, what doesn't, and how we want to go through the discussion.
B
I think if a particular issue evolves into a milestone that we want to put more investment into, we can maybe designate one of the owners of the repo to work with the community to push that effort across. I think we can form those groups as needed; right now we don't have any formal distinctions.
A
Okay, anything else on this? The other thing is, if you think of something after this call, feel free to add it here and then just tag one of us at Solo in there, so that we know you've added something. This is not a write-once, set-in-place-forever document; it's not one of those where once it's typed, that's it and we can't add to it. We don't always all think of stuff.
E
Cool, so this is not process related at all; this is now more product related. I had a few questions about the metamodel, or the conceptual foundation, of Service Mesh Hub. I have been digging into Service Mesh Hub for a bit, and please do correct my statement or understanding where I didn't fully get it. It looks to me that, on a high level, Service Mesh Hub is a multi-cluster, multi-mesh solution, and Service Mesh Hub has a strong notion of what a cluster is.
E
One of the first things to do in Service Mesh Hub is to register a cluster, and from there on Service Mesh Hub will identify an instance of a service mesh on that cluster. So I'd be interested in some of the intersections. When we look at, for example, single-cluster multi-mesh, you have a single cluster with multiple Istio installations on that cluster, in a sense of soft multi-tenancy. I'd also be interested in talking about what...
E
...how far apart, when we look at Istio and App Mesh. The discrepancies that I see right now: a cluster, right? Service Mesh Hub deals in clusters; App Mesh does not. Service Mesh Hub deals in workloads, which basically boils down to pods; App Mesh does not. Nodes, as far as I can tell in App Mesh, and I'm not an App Mesh expert, but a virtual node in App Mesh may also represent, for example, a Kubernetes service, right?
B
I think that's a great question, and I think there's a lot to discuss here. As you pointed out, the initial open source release of Service Mesh Hub was very much tied to the notion of a Kubernetes cluster. But since then, parallel to the work on integrating App Mesh, we have expanded the abstractions to be more Kubernetes-cluster independent. What we want to represent is what we call a compute target or a compute platform, something that houses an atomic unit of compute. And I might be interested to ask you about the notion of workload.
E
Yes, that is true. So then the workload is whatever consumes the mesh, and I would argue that workloads are sources and targets of network traffic. Then, on the atomic unit in App Mesh, I would think, but maybe somebody from the AWS team can correct me if I'm wrong on that one, the virtual node is not that: the virtual node, for example, can also represent a Kubernetes service, which in itself is then made up of workloads.
D
One thing I just want to say: we're aware of these discrepancies and we've been planning around them. In the initial version of our API it's all been very Kubernetes specific, working with those assumptions, which I think makes sense for v1alpha1, but we definitely have on our roadmap the ability to introduce more, like Harvey was saying, for different compute platforms. There are common concerns, I mean...
D
Ultimately, we know that we're dealing with an Envoy injected sidecar, which may be running wherever; it may be hosted in a VM. But we know that there is a source workload somewhere whose traffic is going through an originating proxy and then arriving at either an external service or another thing in the mesh. So I do think that these abstractions are scalable. In the end, our API definitions boil down to our protobufs.
D
So if you look at the protos and see how we could introduce things, there are certain areas where fields may be turned into a oneof, where we'll be able to select between different options, or we may introduce additional abstractions into the API in order to represent, you know, what exactly is a workload. Maybe we can have more types, or a single abstraction point, but...
D
Our workload discovery today is based on the presence of the proxy sidecar, yeah, and that's also how we know which control plane it's actually configured by. So if a user tells Service Mesh Hub, I want to configure workload A with some configuration, we know which namespaces each control plane is watching, and therefore we know which control plane to put the configuration to.
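The heuristic just described, treating a pod as a mesh workload when it carries a proxy sidecar and inferring the owning control plane from the namespaces each control plane watches, might be sketched like this. It is purely illustrative: the image prefixes, function names, and data shapes are assumptions, not the real implementation.

```python
# Illustrative sidecar-detection heuristic; not the real Service Mesh Hub code.
SIDECAR_IMAGES = ("istio/proxyv2", "linkerd/proxy", "envoyproxy/envoy")

def is_mesh_workload(pod):
    """A pod counts as a mesh workload if any container looks like a proxy sidecar."""
    return any(c["image"].startswith(SIDECAR_IMAGES) for c in pod["containers"])

def owning_control_plane(pod, control_planes):
    """control_planes: mapping of control-plane name -> set of watched namespaces."""
    for name, namespaces in control_planes.items():
        if pod["namespace"] in namespaces:
            return name
    return None

pod = {"namespace": "default",
       "containers": [{"image": "istio/proxyv2:1.5.0"}, {"image": "app:v1"}]}
planes = {"istiod-cluster-1": {"default", "bookinfo"}}
print(is_mesh_workload(pod), owning_control_plane(pod, planes))
# True istiod-cluster-1
```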
B
And as for the discrepancies between workloads running on different compute platforms, part of Service Mesh Hub's whole job is to handle those discrepancies. So from the user perspective, you can select workloads of heterogeneous types; maybe they're running in different places and have different data attached to them, but Service Mesh Hub, if it's implemented correctly, should be able to handle the discrepancies when it comes to actually doing the configuration management.
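The idea above, that users select workloads of heterogeneous types with a single selector while the platform absorbs the per-type differences, can be sketched as follows. The `Workload` type and `select` function are hypothetical illustrations, not Service Mesh Hub API.

```python
# Sketch: one label selector spans workloads on different compute platforms.
from dataclasses import dataclass

@dataclass
class Workload:
    platform: str          # "kubernetes", "ecs", "vm", ...
    labels: dict
    name: str

def select(workloads, label_selector):
    """Return every workload, on any platform, whose labels match the selector."""
    return [w for w in workloads
            if all(w.labels.get(k) == v for k, v in label_selector.items())]

fleet = [Workload("kubernetes", {"app": "payments"}, "payments-v1"),
         Workload("ecs", {"app": "payments"}, "payments-task"),
         Workload("kubernetes", {"app": "cart"}, "cart-v1")]
print([w.name for w in select(fleet, {"app": "payments"})])
# ['payments-v1', 'payments-task']
```

The per-platform differences (how labels are attached, how config is applied) would live behind this uniform selection layer.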
E
If you do not mind, I'll jump into the next question, since that is somewhat related. I do think that Service Mesh Hub may run into an edge case, basically more on the fringes, around service discovery and workload drift. As far as I can tell, Service Mesh Hub discovers services using its own logic: so, for example, for Istio you're looking for the sidecars, right, and for Linkerd you're looking for the sidecars.
E
So when we look at a simple example: Linkerd, for example, just iterates over the pods, and if for whatever reason they encounter an error in their iteration, they just do not consider this pod to be part of a service, right? Then Service Mesh Hub would not do the same, since it has its own logic. When we look at Istio, I think Istio has fairly complex possibilities of including and excluding services and pods, or workloads, for these services.
E
This is all stuff that would need to be chased forever if you wanted a perfectly congruent view of the system. So my question is: is there also a notion, or an idea, or a plan to integrate with Istio and Linkerd directly, similar to App Mesh, and ask the Istio or Linkerd instance for the services it knows and for the workloads it knows for these services?
D
I think that when you use a platform like Service Mesh Hub, you understand that there are some trade-offs in terms of it being opinionated. There's definitely an approach where we could go directly to Pilot, let's say, and query the API there and get a deterministic, definitive output of everything it sees. Or we can simplify our own logic, or make it closer to the corresponding mesh, by actually importing exported functions from their libraries.
D
Yeah, I think it's on a case-by-case basis, but it's natural to assume that there will be certain edge cases that will not be picked up, or will not be regarded as valid, and maybe over time as the project grows we'll address more of those. But I think it's just a natural part of the trade-off that you have when you're building on top of something.
B
Oh sorry, go ahead. Yeah, I was just going to say that it is our goal to minimize the drift whenever possible. So we try to take the source of truth that we feel is the most robust, and in the case of Kubernetes clusters, we feel that looking directly at the pods for the presence of the Envoy sidecar is the most robust source of truth.
D
So
it's
just
a
matter
of
trying
to
calibrate
those
heuristics
and
that's
why
I
getting
community
feedback
is
so
valuable
because
they're
always
going
to
be.
You
know
a
larger
number
of
existing
use
cases
in
edge
cases.
Then
we
can
come
up
with
in
our
own.
So
that's
just
why
it's
very
valuable
for
us,
but
I
think
is
where
we're
tackling
the
you
know
the
92
99%
of
use
cases,
ideally
where
we're
at
a
happy
spot.
D
Well, Dominic had just mentioned before that he had some follow-up questions about App Mesh; maybe we can circle back to that at a certain point, I don't want you to lose them. Oh yeah, the other thing I wanted to address, and I actually wanted to ask the users, or you know, our community...
D
Maybe we can wait till the end. I put some notes in that I want to ask exactly about this question, about preferences between one control plane per cluster, multiple control planes in a cluster, or multiple clusters per control plane. In those cases we'll have to do the heuristic for resolving ownership, and these ownership hierarchies, what is in a mesh, which one belongs to which mesh, become even more complicated. There, I think, there's even more concern about what we might potentially miss.
D
So
it's
very
important
to
us
I
think
to
hear
about
the
different
deployment
scenarios
that
are
being
considered
out
there
like
how
are
people
actually
architecting
their
control
planes
today?
What's
the
desired
architecture?
So
can
we
capture
all
those
use
cases,
ideally
with
the
the
smallest
amount
of
complexity
in
implementation,
but
understandably
we
may
have
to
write.
You
know
separate
modules
to
consider
separate
use
cases,
but
we
can
come
back
if
you
guys
have
some
thoughts
on.
D
You
know
what
the
future
of
your
multi
cluster
control
plane
strategy
is,
and
you
know,
maybe
you're
using
solution
that
sort
of
natively
multi
cluster
like
at
mesh
or
you're
planning,
to
do
your
own
with
linker
deras.
Do
just
be
very
helpful
to
hear
how
you
know.
Oh
you
guys
see
it
working
or
you're
going
to
use
a
flat
network.
How
are
your
proxies
going
to
contract
connect
to
a
remote
control
plane
things
like
that.
C
So to echo what Tim said, I think we're not great people to be able to indicate what people want. We do some of our own customer research and we have some internal customers, so we're very interested in seeing what other people want. But I must extend this question to also talk about sort of the trust domain: where are the trust domain boundaries? I saw that Service Mesh Hub has something that they deem limited trust, but I'm not sure of the scope of the implementation there.
D
So the challenge there is imposed by the mesh, because right now we really only have, you know, the smallest boundary. But I think ideally the smallest boundary would be to allow a user to define source and destination selectors, like we currently have for traffic and access policies, and define that as a trust domain. That way we could have something very concrete, where basically it's up to the user, it's flexible, to determine what those boundaries are. But there are limitations as far as different meshes and how they allow you to configure them, like...
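The idea floated here, defining a trust domain as a pair of source and destination selectors (in the style of the traffic and access policies mentioned) and checking whether a given connection falls inside it, might look like this. The shape of the `domain` object and both function names are entirely illustrative assumptions.

```python
# Hedged sketch: a trust domain as a source/destination selector pair.
def matches(selector, workload_labels):
    """True if the workload's labels satisfy every key/value in the selector."""
    return all(workload_labels.get(k) == v for k, v in selector.items())

def in_trust_domain(domain, source_labels, dest_labels):
    """A connection is inside the trust domain only if both ends match."""
    return (matches(domain["source_selector"], source_labels)
            and matches(domain["dest_selector"], dest_labels))

domain = {"source_selector": {"team": "payments"},
          "dest_selector": {"team": "payments"}}
print(in_trust_domain(domain, {"team": "payments"}, {"team": "payments"}))  # True
print(in_trust_domain(domain, {"team": "cart"}, {"team": "payments"}))      # False
```

As the discussion notes, the underlying meshes constrain how far this user-defined boundary could actually be enforced.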
D
In Istio there's like three lines of C++ that disallow you from doing that. That's a feature; I think they're thinking that the trust domain should essentially be the boundary of any given, quote unquote, mesh, or in our language, virtual mesh. So any time you have direct mTLS between workloads, they need to share the same trust domain.
C
Yeah, I mean, I understand that, and that's probably a limit if Service Mesh Hub consumes, you know, basically the underlying mesh restrictions. That's true, but Envoy itself has a decent set of capabilities here, right? So again, my question wasn't as much about what is currently possible in the code base; it was more about what people want. Because in my opinion, when we start talking about what Service Mesh Hub is trying to do, you know, federate different meshes, this...
I
Ya know, I think, like you said, it's a really interesting question, and as the space evolves we'll learn more and more about people's best practices and preferences. If you read, based on just strictly the Istio documentation and other mesh documentation out there, the limited trust scenarios seem... you know, it's always a trade-off, right? In the limited trust scenarios you lose the identity of the individual workloads, but you gain...
B
Sure, so one thing we've been curious about at Solo is the interest in the community for integrating Service Mesh Hub with compute platforms other than Kubernetes. It's something that our abstraction can currently support; we just have to go and do the work to implement additional compute platforms. Does anybody here have such an interest, like, for instance, integrating with ECS?
G
Well, being one who works for Amazon, I can say definitely yes, we see a lot of customer interest in ECS. I mean, our customers run significant workloads on it, so App Mesh ECS support will be crucial for the customers that specifically use that platform, right.
B
I don't know if compute platform is represented as a first-class entity; we don't have that just yet. Workload is, however, a first-class entity, and I guess the answer is just that we need an abstraction to represent an atomic compute unit, like a unit of the data plane. Is that for reporting purposes specifically?
D
And so I think the way that we've modeled it, the way I picture it, is this: your source is a workload; a source is a client, which we call a workload. A workload doesn't necessarily have a service to it; it's just making requests out to something in the cluster. So we're thinking of it as a client, and the client is a pod, and I think Istio actually has the same.
E
But yeah, we can definitely take that offline and talk about it. Generally, I do like the abstraction of the service mesh instance and the service, because if you only look at that level, that would immediately include single-cluster multi-mesh and also multi-cluster single mesh, because the compute platform is not represented anymore, the cluster is not represented anymore, right? And then also, if you work on that level, the AWS App Mesh virtual node applies...
E
...it's basically that the problem disappears, that's one. And then you can probably take that even further and talk about how Service Mesh Hub currently composes multiple meshes and multiple clusters. If you take it to that level, you can probably also have a fairly consistent API that lets you then, on top of that, compose multiple Service Mesh Hubs, because that would be the next question in the recursive structure, right?
I
So I think this is actually something we've had a lot of discussion about too, especially when we were initially designing, and there are a couple of things you mentioned. I think there is sort of language drift within the word service, right: what is a service? In the case of the payment service, is it the group of workloads which together do this thing, right?
I
What group of workloads, right? So I think part of it is just drift, or a sort of misunderstanding, but a general and big ambiguity of the word service in this space. For us, I can only speak to Service Mesh Hub, but we chose the service to be the target of traffic for a given set of workloads. So you have your payments workloads, which are fronted by this...
I
...let's say, in this case, a Kubernetes service, which then forwards the traffic, whatever it could be: it could be Istio, it could be Kubernetes, it could be anything, which then forwards the traffic to these workloads. And so that theoretical thing which is accepting traffic, the service front, the target of the incoming traffic which then gets doled out to a bunch of workloads: that is what we are thinking of as the service. It's theoretical.
I
Okay, where was I going: it's this theoretical thing that is fronting a bunch of workloads, and so that's why our quote-unquote workload is actually not a single workload; it is a group of workloads. So if you look at an App Mesh workload, the primary App Mesh workload is a deployment, which actually isn't a workload; it's technically more of a payments service, because really it's a whole bunch of them, potentially, right?
D
The way that we've modeled it is that a workload is definitively the source of traffic, and a service is definitively the destination of traffic. Then, with that knowledge, we may use the fact that a workload is actually backing a service, because it's an implementation detail in a sense, but it's not an implicit assumption by the system. And so, using that model, I still think we have a requirement to model our sources, unless we decide that we're not going to have source-level policy, but I...
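The model just described, a workload is definitively a traffic source, a service is definitively a destination, and "backs" is a separate, non-implicit relationship, can be sketched like this. The type names and the `policy` shape are illustrative assumptions, not the real API.

```python
# Sketch: workload = source, service = destination; backing is optional detail.
from dataclasses import dataclass, field

@dataclass
class Workload:
    name: str

@dataclass
class Service:
    name: str
    backing_workloads: list = field(default_factory=list)  # may be empty

payments = Service("payments",
                   backing_workloads=[Workload("payments-v1"),
                                      Workload("payments-v2")])
cart = Workload("cart")  # a pure client: backs no service at all

# A policy always points from workloads (sources) to services (destinations).
policy = {"sources": [cart.name], "destination": payments.name}
print(policy)  # {'sources': ['cart'], 'destination': 'payments'}
```

Note that nothing forces a workload to back a service; that relationship is recorded when discovered but never assumed, matching the point above.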
D
One is a server; one is a client. The fact that they are both running in the same pod is abstracted from us; that's a common pattern, but just like the service itself, a Kubernetes service may or may not resolve to pods. So I think we've been working mostly with the simpler case where they are backed by pods, but it's definitely our intention to support...
A
Just want to do a quick time check, because we're almost at an hour. I will say, when we first kicked this off, I wasn't sure how many people were going to join the first meeting and whether we were going to have a lot of discussion, but this has been fantastic, and I think some folks here did mention that they may want to follow up on some of these topics and discuss further. There are a few things that we didn't get to talk about.
A
One of the specific ones here: we also were interested in getting some feedback on Service Mesh Hub integrating with existing service meshes that may already be installed in an environment, and we can take that up at the next meeting. And then, if anything else comes up, please do put it in the notes for the next meeting, and we will hopefully see y'all in two weeks.