From YouTube: 07.29.2020 Service Mesh Hub Community Meeting
Description: No description was provided for this meeting.
B: So I've been looking over the links you sent me on Slack, but I see that you have a huge refactoring going on.
A: Yeah, we are. Scott Weiss can talk more in depth about it, but basically we're taking everything we learned in the last few months and simplifying the abstractions in the code base so that, hopefully, it's much easier to extend from the open source community. Right now it's kind of difficult to locate where the different features live in the code.
A: We're also looking to write up some documentation about the architecture of the system, and some other resources to help folks who are interested in contributing to the open source project, to make that easier.
A: Yeah, I think we're waiting on a few folks to join.
F: All right, let's start with the first item. We've got a few things here: discussing additional capabilities with respect to workload discovery, where there's a high-level proposal and an in-depth proposal; then a question from Mihai on Istio to App Mesh federation; and then the question we didn't get to last time from Dominic. So let's go to the first one, workload discovery. John, you made that proposal; would you like to walk us through it?
I: Let me see if I can share; if not, you can pull it up. I think this will be a little bit of both Pavan and me talking through it, and we can figure out how we go. We introduced this discussion on Slack, and I think Christian specifically suggested that we bring it up here, so that's what we're doing. I'm going to give a little bit of background. So this is what I have; you're able to see what I'm sharing, I assume.
I: A lot of this comes out of some of our work on Network Service Mesh, or what we'll refer to more commonly as NSM. It's a sandbox project in the CNCF, something that Cisco and a few other contributors work on; you can see some of the other contributors on the slide, VMware, Doc.ai, and some others. We've been doing some work with NSM and service meshes, and that's where some of the justification or desire for this stems from. So again, I'm linking publicly available material here. The first link, and probably the most important one to understand if you're not familiar with Network Service Mesh, and the one that spawned some of the thinking about how we inject other types of services, other than pure Kubernetes services, into Service Mesh Hub, is that Network Service Mesh has the capability of injecting multiple interfaces into a pod, and it typically does that for most of its services and use cases.
I: So there will be a CNI interface and an NSM-managed interface, as I'm showing in this slide. When you're talking to Envoy and the like, that presents a need to be able to expose to Envoy the fact that some of the services sit behind the CNI interface and some sit behind the NSM-managed interface. For the most part Envoy doesn't really care: as long as it's presented in its xDS data, it's happy to just send the packet, based on the xDS rules, out to the pod's network stack, which then effectively does the normal thing with routing and iptables to send it where it needs to go. So I don't want to get too much into the network details; the main point is this.
I: There are use cases where we'd like to talk with you about how Service Mesh Hub can do service discovery, and maybe specifically workload discovery (that's a second-order discussion), without having it be tightly bound to Kubernetes. I know you're dealing with this problem anyway, because you're talking about VMs and Consul and other types of things, but we wanted to make sure we convey that the use case might be even broader than you're currently thinking. Okay, so that's the justification; I don't want to spend too much time on it.
I: We can let Pavan talk a little bit about some of the ideas we had, which we could explore with you as to whether they make sense or not, or whether you want more content or details on them. My other links here are to the NSM interdomain material, taken directly from the Network Service Mesh repository, which talks about how NSM deals with interdomain and floating interdomain to interconnect things cross-cluster; that's where the tie-in is with some of the work you have in Service Mesh Hub. There is also some more detail in a presentation that Tim and I gave at MSM Con last year. I think at a general level you can probably get the gist of what we're thinking about without actually going to these links, but they're there if you need them.
I: Actually, I added this; it was just added to the meeting. So, are there any questions on the background and the use case before we talk about some of the possibilities we were exploring and get some feedback on them? Okay, so Pavan, do you want to go through some of your ideas?
C: Okay, great. So yeah, as John was saying, given some of our projects with NSM there's a whole variety of networking capabilities added to some of the pods, whether it's a new interface or a direct tunnel from one pod to the other, like a VPN for example. So in general we've been discussing how we could do service discovery for those. We had our initial chat on Slack, and I know you've already been thinking about some of these possibilities, so I just wanted to go through what we were thinking and get your feedback.
C: The short-term workaround we had was this: if we had any sort of service that wasn't a Kubernetes service but was something like a ServiceEntry in Istio, for example, and we wanted SMH to pick it up, we could write a simple controller which would manually push a MeshService object into the cluster whenever a new service is added. This is basically short-term, and it doesn't require any of the SMH code to be modified. It seemed okay: until there's a native SMH capability that does this, we could just try it out, see how it works, and get some feedback. So this is what we're currently trying out as a POC, and then there are the other two proposals which we had in mind.
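To illustrate the workaround Pavan describes, here is a minimal sketch of such a controller: a loop that mirrors Istio ServiceEntry objects into a MeshService-style custom resource for SMH to pick up. The MeshService group, version, kind, label, and spec layout used below are hypothetical placeholders for illustration, not the actual Service Mesh Hub API.

```go
// Sketch only: mirror Istio ServiceEntries into a hypothetical MeshService CRD
// so that Service Mesh Hub discovery could pick them up without code changes.
package main

import (
	"context"
	"log"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

var (
	// Real Istio ServiceEntry resource.
	serviceEntryGVR = schema.GroupVersionResource{Group: "networking.istio.io", Version: "v1beta1", Resource: "serviceentries"}
	// Hypothetical target resource; substitute the real Service Mesh Hub CRD here.
	meshServiceGVR = schema.GroupVersionResource{Group: "discovery.example.io", Version: "v1alpha1", Resource: "meshservices"}
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client := dynamic.NewForConfigOrDie(cfg)

	for {
		entries, err := client.Resource(serviceEntryGVR).List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			log.Printf("listing ServiceEntries: %v", err)
		} else {
			for _, se := range entries.Items {
				// Copy the hostnames that back the external service.
				hosts, _, _ := unstructured.NestedStringSlice(se.Object, "spec", "hosts")

				ms := &unstructured.Unstructured{}
				ms.SetAPIVersion("discovery.example.io/v1alpha1")
				ms.SetKind("MeshService")
				ms.SetName(se.GetName())
				ms.SetNamespace(se.GetNamespace())
				ms.SetLabels(map[string]string{"origin": "serviceentry-mirror"})
				_ = unstructured.SetNestedStringSlice(ms.Object, hosts, "spec", "hosts")

				_, err := client.Resource(meshServiceGVR).Namespace(se.GetNamespace()).Create(context.TODO(), ms, metav1.CreateOptions{})
				if err != nil && !apierrors.IsAlreadyExists(err) {
					log.Printf("creating MeshService for %s: %v", se.GetName(), err)
				}
			}
		}
		time.Sleep(30 * time.Second) // simple polling; a real controller would use a watch/informer
	}
}
```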
I: Before you go on, I just had one question for the Solo people on the call, or anybody else: do you believe this would work, or is there some tighter coupling needed between the service objects and the workload objects that we'd have to manage, meaning we'd have to change something within the SMH code itself?
C: Yeah, so it has a few different custom fields, which we're okay just adding as labels, but we're trying to make sure that whatever manifest we create ties in well with whatever you expect, and doesn't push anything that SMH currently doesn't accept. So it's basically generic. I might have something to share in the GitHub document after we stop the recording. Sorry, go on.
H: Under the hood, all the mesh services have to map to either a Kubernetes service or, in the case of Istio, a ServiceEntry we'll need to create for them. So it just requires us to know things like the hostnames and IP addresses that back the service; there isn't really much more that we need. It's a question of where we open that up, and we haven't opened it up yet to anything other than a Kubernetes service.
H: So there are definitely some changes that need to happen there, but I think we have a clear idea of how to represent anything that's going to be a ServiceEntry. I think the only question is: do you want to manually provide those IPs or DNS names in the mesh service, or do you want Service Mesh Hub to resolve them by allowing Service Mesh Hub to read from your service registry directly?
H: In its current form today, Service Mesh Hub only supports Kubernetes services. However, we have plans to expand it to support a generic IP or a static DNS name, and from there you'd be able to support this. Then there's the question of whether you'd want to go one step further and allow it to be more dynamic: rather than providing a static hostname, you'd provide a pointer to the service entry in your registry that Service Mesh Hub could fetch directly.
I: Yeah, that's perfect. We didn't really want to do it this way; we were just trying to prove it out and find any skeletons, and it sounds like there are some. So Pavan, continue, because Pavan is getting exactly to what you're suggesting, Scott.
C: Yeah, that's a great segue. Initially we thought that for us to step out of POC mode and have a proper service registry client, if you will, in SMH itself, what we could do is write a gRPC client which would push the information in without SMH actually needing to poll, for example. All of this would obviously be some sort of service-entry-type object which would get registered in SMH, and SMH would be able to resolve it. So that's one option, a client interface plug-in: if SMH had the capability of hosting plugins, which would sit in the repo or somewhere else and tie in with your current service discovery, this could work. But we had another idea.
C: As you suggested, Scott: if you had a service registry client implemented in SMH, it could pull from any number of external registries out there, whether etcd or Consul or anything like that, and constantly keep track of whatever service entries exist, which could be Kubernetes, external, VMs, anything the cluster wants to keep track of. The client would get updated whenever a new entry is added in one of the clusters and would create a service object, so that SMH could discover it and use it like a regular Kubernetes service. This, I think, is more general, whereas the first approach I just shared is sort of Cisco-specific.
C: With this general approach, we feel it doesn't really tie into NSM-specific needs; rather, it would act as a general service registry client that works with basically anything. I'm sure you might have some of this already in the works; I wanted to hear your thoughts.
H: Yeah, so it's not totally clear to me: the service registry client, I assume, would be specific to the type? That is, we would have a different service registry client depending on the service registry you're using?
C: I mean, Istio is a bit of an outlier in this case because it doesn't rely on or use a general-purpose service registry like Consul, which Linkerd and App Mesh can hook into. So I guess for Istio you might need a custom entry, but with everything else you could just say: we support Consul, for example, and for any service registry entries you want discovered, give us your Consul access and we'll discover them automatically.
H: The way I would put it now, and I don't want to nail us to a particular implementation because we're not up to it yet, is that I think it's flexible. Based on what we've already done in Gloo, what I would say is we'll probably have different types of mesh service; so we have one that's backed by Kubernetes.
H: You can consider Kubernetes a service registry, so we've got a Kubernetes one, we could have a Consul one, and we can pull from other sources. Essentially the mesh service will be the abstraction, the unified abstraction, on top of whatever service registry you pulled from, and we should have something where it's easy to plug in or extend Service Mesh Hub with additional types of registries so that we can pull from those as well. There's also kind of a separate conversation there, because Consul is both a registry and a mesh, and then with Linkerd and others I'd be interested to see what's possible, but it depends on the mesh.
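As a rough illustration of that plug-in idea, this is what a minimal registry-client interface might look like, with a Consul-backed implementation using the official Consul Go API client. The ServiceRegistry interface and DiscoveredService type are made up for this sketch; they are not part of Service Mesh Hub or Gloo.

```go
// Sketch of a pluggable service-registry client: each backing registry
// (Kubernetes, Consul, ...) would implement ServiceRegistry, and discovery
// would turn the results into mesh services.
package registry

import (
	consulapi "github.com/hashicorp/consul/api"
)

// DiscoveredService is a registry-agnostic view of a service: a name plus the
// addresses and port that back it (roughly what a mesh service needs).
type DiscoveredService struct {
	Name      string
	Addresses []string
	Port      int
}

// ServiceRegistry is the hypothetical plug-in point.
type ServiceRegistry interface {
	ListServices() ([]DiscoveredService, error)
}

// consulRegistry lists services from a Consul catalog.
type consulRegistry struct {
	client *consulapi.Client
}

func NewConsulRegistry(addr string) (ServiceRegistry, error) {
	cfg := consulapi.DefaultConfig()
	cfg.Address = addr
	c, err := consulapi.NewClient(cfg)
	if err != nil {
		return nil, err
	}
	return &consulRegistry{client: c}, nil
}

func (r *consulRegistry) ListServices() ([]DiscoveredService, error) {
	// Catalog().Services returns a map of service name -> tags.
	names, _, err := r.client.Catalog().Services(nil)
	if err != nil {
		return nil, err
	}
	var out []DiscoveredService
	for name := range names {
		// Catalog().Service returns the registered instances for one service.
		instances, _, err := r.client.Catalog().Service(name, "", nil)
		if err != nil {
			return nil, err
		}
		svc := DiscoveredService{Name: name}
		for _, inst := range instances {
			addr := inst.ServiceAddress
			if addr == "" {
				addr = inst.Address // fall back to the node address
			}
			svc.Addresses = append(svc.Addresses, addr)
			svc.Port = inst.ServicePort
		}
		out = append(out, svc)
	}
	return out, nil
}
```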
I: Yeah, Scott, regarding that: I know it has come up a few times on Slack and in these meetings that some of those extensions are in the works. Is there a way to see what that means? You've mentioned Gloo a few times; should we just look at what you do in Gloo and assume that the current plan of record is to follow that rough model, or is there other information or pointers you have on how you're going to pull that in here?
H: We have that data available to us; it's just a matter of representing it in a way that's convenient for users and that fits the model inside Service Mesh Hub. I do think Gloo has already addressed those cases, in particular with Consul: Gloo can use Consul as a service registry. It can also treat EC2 as a service registry, so it can look at EC2 VMs and, using user-provided labels, do service discovery for those VMs.
H: It's really a flexible model. I think the point where we want to learn from Gloo is that it's an extensible model: we have something like the mesh service where we can plug in different service registries on the back end, but once they get funneled into the system they all get treated the same.
C: I think I saw somewhere that Linkerd supports Consul out of the box. I might be wrong, but it seems like a lot of the service meshes are trying to use Consul for service discovery. I know Consul Connect is its own service mesh, but Consul seems to be one of the most widely used external registries out there.
H: Yeah, we can definitely use Consul as well. I think that should be no problem.
I: So I guess the question at this point is how we want to proceed. Do we want to provide a more complete proposal? Do we want to create an issue? Do you want to digest the pointers we have for a day or two and respond on Slack? How do we address this general need? Is it something you've already got on a roadmap? You could have any number of thoughts on how we should proceed.
H: The only thing I would say from the engineering side is that it would be great to get a very specific set of requirements down, and maybe a user story or two: as a user, here's what I would like to be able to do.
H: And let's pick a concrete registry to go with it; it sounds like Consul might be the one that fits the largest set of use cases. Then we can come back with a design proposal of what that would look like in the API and the user experience, and take it from there.
I: Yeah, that sounds okay; I think that's probably a good first step. At the end of the day we have to find some way to marry this with some of our Network Service Mesh work, but that doesn't necessarily mean it has to come in directly and be exposed to Service Mesh Hub; there could be intermediaries like Consul. That's one of the things we want to figure out eventually. So I think addressing, in that response, a little bit of how you'd do this with something like Consul (and when I say "you" I mean the community) might be a good way to see how this plays out.
I: Pavan, what do you think?

C: Yep, sounds good. We were actually discussing, John, Mihai, and some of us, that if Solo wants us to collaborate and work together to implement this, we would be happy to do it. We just wanted to see whether there would be a chance for Cisco to get involved, or something like that.
G: Yeah, I definitely think that's a good idea. Idit is on the phone here and she can weigh in as well. We would need to coordinate, yeah.
J: Yeah, I guess so. Actually, I am talking to Vijay tomorrow or the day after, so we'll be able to set up some more time for us to work together. I think it makes sense to start doing a weekly meeting or something like that so we can actually sync up, help add you to the repos, and see how we can contribute closely together.
H: I would say, too, that we're finishing the current refactor, and then we'll be ready to open Service Mesh Hub up to more contribution and to scaling it horizontally in terms of more features, platforms, and meshes supported. So being able to collaborate with you would be excellent for us, as far as finding what the desired integration points are and how we can make the code and the contribution workflow easier and smoother.
G: Cool. Looking at the community notes, do we want to go back and address the question about AWS App Mesh?
J: Yeah, we definitely own that. The only thing is that, because we're re-architecting the code, we wanted to finish the re-architecture first; we felt that if we put a lot of stuff on an API that is changing, it would be problematic.
J: I think right now we have everything in place, so we can actually extend it to federation between meshes, and we have a prototype working; the remaining work is mainly about making it cleaner and nicer on the outside. Can you be more specific about what the question is? Are they just interested in us adding the support, or is there a more specific question?
B: Yeah, so I'm with John and Pavan and everybody else on the team, and my business case is essentially the same one they were presenting: we have these workloads in different Kubernetes pods and clusters, and maybe they are in different meshes. We want to make use of the mesh capabilities to connect these pods, and I'm trying to figure out what's already there for possible Istio to App Mesh connectivity, given the scenario that John already presented.
I: Let me also add, on Mihai's behalf: Idit, I know you've talked to Vijay in the past, so this is all related to some of the discussions that Vijay has been driving with us, about what's most important to work on and what he's most interested in, and I believe you've had some discussions with him on how our relationship with AWS affects some of those things. So this is all in that vein, and we know you're a small team doing a lot of work. Part of this is also trying to understand the current state of things, what's planned and what's not planned, and maybe figure out ways we could help, both to make sure our business is represented in the shared open source output and to drive some of the priorities we've talked about with AWS. Does that make sense?
J: Yeah, that totally makes sense. I'm talking to Alex and I'm talking to Vijay, so I'm in the loop and I know what we're working on. First of all, I will say that on my team we actually have Joe Kelly here, and he's going to run this team as a lead. As you can see, there are more people from Solo on this call right now, mainly because there are going to be way more resources going into this project; we are growing dramatically right now, and a lot of that is going into this project. So what I wanted to say is that this is next on our list. We already had support for App Mesh; I think the RV is here.
J: So we can talk about it a little bit more now. But, as you probably know, App Mesh is not as mature as Istio yet. So that will be the next thing we're going to add, but to be fair it's going to be a bit of a challenge, mainly because they're not supporting all the features. Scott, maybe you can talk about it a little bit more, because you looked into it.
H: For regular TLS, it says it encrypts communication between the Envoy proxies.
I: From discussions... well, actually, I'd better not say, because I'm not sure what's covered by NDA. So never mind; I'll take back what I thought I was going to say.
H: For doing it, there's something we refer to as shared trust versus limited trust. In order to have sidecars from different meshes communicating, we need to either establish shared trust, which means a shared root certificate as well as a shared concept of identity between the meshes, so that policy can be applied.
H: However, there's another approach called limited trust, which is a bit more challenging to implement but is on our roadmap. It essentially involves using a gateway as the intermediary: we can configure a gateway on each side which will negotiate the switch in trust, meaning the switch in trust domains, the switch between the mTLS, the common shared root, as well as the common concept of identity across meshes. So even if we can't get a shared trust model working for App Mesh, because of the limitations on what they support, we should still be able to connect it through the limited trust approach.
H: So there's a fair amount of work involved, but it's definitely doable, and the current system takes into account that this is where we want to extend to.
I: Yes. We've talked before about how we're very, very interested in the limited trust model and in seeing that happen; that's something we've brought up in the past. But I wasn't going to hit on it again here, because I know it's coming and it's work in progress.
B: No, no, I was just wondering where the status was with App Mesh and what still needs to be done to provide that full integration, so we can get a better idea of what's going to happen and maybe plan out whether we can help with that integration.
J: Yeah, that would be very, very useful. As I said, maybe we should take it offline and talk a little bit about how we can collaborate and split the work, or something like that. Is that something that would make sense?
I: Yeah, I'll try to... I didn't realize Vijay was going to be talking to you so soon. I'll try to talk to him a little bit about this call and what we discussed, to prep him a bit. So that would be perfect.
F: We've got the question from Dominic from last time that we didn't get time to discuss. Dominic, go right ahead. Hi.
D: Yeah, cool. Actually, you will see that the question goes very much in the direction of the initial topic that John brought up at the very beginning, but to keep it open-ended:
D: My question was about the fact that the current implementation of Service Mesh Hub uses a pull model: there are components deployed on the management cluster that reach into the other clusters in order to discover the meshes, services, and workloads. I was wondering why you made the decision to have a pull-based model, where components in the central SMH cluster reach out to the other clusters for discovery, and not a push-based model, where you have components in the other clusters that push to the SMH cluster.
H: There are a number of benefits to doing it this way. I think the foremost is that there are just fewer components to deploy and fewer potential things that can fail. We know, for example, when we can't reach a cluster due to network connectivity issues, and we have a single source of truth: a single master cluster we can look at in order to say, these clusters are reachable and these ones aren't.
H: In general, we've found that for discovery a pull model tends to work better. There are also things like permissions: it's easier for us, since we already need write access anyway.
D: Understood, yeah. The reason for the question is that I'm thinking about extensibility. Right now Service Mesh Hub is, let's say, fairly intimately aware of the service mesh implementations, like Istio and Linkerd. I was wondering if there is at least the possibility of a plug-in and an API.
D: Whether the component that discovers the services runs on the satellite clusters or on the central management cluster would then be a point of flexibility, but it would at least talk to the API to tell SMH about the discovered services. That would be a natural extension point, right? If you have different registries, or entirely different setups, you have a component next to that setup that just pushes the information via a standard protocol into SMH and basically tells SMH about the services.
H: I don't think it should actually matter too much whether the data is being pulled or pushed. What matters for this use case is simply that we have a generic API that can be used to discover arbitrary resources. So if you wanted to place some generic CRD that represents "my custom mesh" into that cluster, Service Mesh Hub can still discover it. It's not so much a question of which direction the data is flowing.
D: Yeah, and you could even mix and match, right? You have default things like Linkerd and Istio, and that just runs on the SMH hub because, technically, it's a plug-in, but it's set up so that it's part of the core solution. Whereas if you have some more fringe meshes, then you can run that component on your own cluster.
D: However, the one thing I want to stress, and it's a little bit off topic but not completely, is that if we open that up, the API, so that any mesh can just create services and workloads and all of that, there should be a very well communicated and very well defined notion of what a service actually is. I know John is just about to strangle me again, but I'm not getting tired of repeating that.
H: Within Service Mesh Hub we try to take the approach that a service is an endpoint that receives traffic, so it's an HTTP server or gRPC, maybe UDP or whatever, but it's a server, it's the recipient. We ultimately map mesh services to a hostname; that's really the key, the hostname being the recipient of traffic, and then a workload would be the client.
H: So I think we do have the basis for a pretty concrete abstraction, and then, depending on the type of mesh service, whatever the actual concrete implementation is, we've got the ability in our protobuf schema to add concepts for it. It definitely sounds like Consul will be the next one we add.
H: We'll have a mesh service that can be backed by a Consul service entry, and we'll be able to fetch that data from Consul, or from static endpoints, or from another source; that's what it sounds like to me.
D: Are there then additional guarantees around that? For example, say I have two clusters plus the management cluster: cluster A can reach the management cluster and cluster B can reach the management cluster. Is it the case that, as long as A and B can reach the management cluster, A and B can call all of each other's services, if they have the permission, and they have guaranteed reachability?
H: Yeah. The way it works, the way you would do it today, is you register cluster A and cluster B with Service Mesh Hub, and Service Mesh Hub goes and discovers all your services there. Then you create what's called a virtual mesh; the virtual mesh is a group of meshes treated as one. You would create a virtual mesh that encompasses both cluster A and cluster B, and you'd set a policy there that says allow federation, allow services to talk back to each other. Service Mesh Hub will configure the underlying mesh resources so that requests can reach each other via a globally available DNS name that we publish.
H: The naming convention is: when you register a cluster, you register it using a Kubernetes-compatible name, which is a DNS name, and we create a suffix from it. So it's going to be the service name, dot the service namespace, dot the cluster name, and you should be able to reach that service, to call it, from any cluster that's in the virtual mesh.
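As a concrete example of that convention: a service named reviews in namespace default, on a cluster registered as cluster-2, would be addressed as reviews.default.cluster-2 from any cluster in the virtual mesh. A trivial sketch of the rule (the exact suffix Service Mesh Hub publishes may differ):

```go
package federation

import "fmt"

// FederatedHostname follows the convention described above:
// <service name>.<service namespace>.<registered cluster name>.
func FederatedHostname(service, namespace, cluster string) string {
	return fmt.Sprintf("%s.%s.%s", service, namespace, cluster)
}

// FederatedHostname("reviews", "default", "cluster-2") == "reviews.default.cluster-2"
```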
D: Okay, yeah. I would suggest that all of these attributes that contribute to the service semantics be documented well, because the industry will never agree on what a service actually is, but at least in the context of Service Mesh Hub we probably should.
H: I agree. One other thing I wanted to say, going back to your comment about the pull and push model, just to be clear: discovery happens via a pull model, but from your cluster, or any client you want, you can manually create those discovered resources; they don't have to be generated by discovery. So if you want to build a client that pushes discovery resources into your master cluster, that's entirely possible to do.
D: That is actually cool. If you allow me one follow-up: would you suggest that that goes via the standard Kubernetes API, basically create, update, and delete on custom resources, or should it be different, encapsulated in something that in spirit is closer to Envoy's xDS, where you have a gRPC protocol that exposes these objects and operations?
H: Personally, I really prefer CRDs because of the tooling around them. You get everything from API versioning to admission control to being able to use kubectl for debugging. We have subresources, so you can look at the spec and status as independent fields, and resource versions to prevent write conflicts.
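To make the CRD point concrete, this is roughly what a discovered-service type looks like when defined with kubebuilder/controller-gen markers: the spec/status split gives the status subresource, and versioning, admission control, kubectl, and resourceVersion-based conflict detection come from the standard Kubernetes machinery. The type and field names here are illustrative, not the actual Service Mesh Hub schema.

```go
// Sketch of a CRD-backed discovery type; DeepCopy methods would be generated
// by controller-gen.
package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// MeshServiceSpec describes the discovered service: the hostnames that reach
// it and the cluster it was discovered on.
type MeshServiceSpec struct {
	Hosts   []string `json:"hosts,omitempty"`
	Cluster string   `json:"cluster,omitempty"`
}

// MeshServiceStatus is written through the status subresource, so clients
// updating spec never race with the controller updating status.
type MeshServiceStatus struct {
	Accepted bool   `json:"accepted"`
	Reason   string `json:"reason,omitempty"`
}

// +kubebuilder:object:root=true
// +kubebuilder:subresource:status

// MeshService is the discovered-service resource stored as a CRD.
type MeshService struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   MeshServiceSpec   `json:"spec,omitempty"`
	Status MeshServiceStatus `json:"status,omitempty"`
}
```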
D: Yeah, I have no argument there, same here. Just one more question, a super detailed one, sorry for that. You were talking about reachability, and reachability is basically expressed from the observation point of the master server. So if the master server can't reach cluster B, is there a guarantee that cluster A will then not be able to reach cluster B either? Will that be propagated?
H: That's a good question. If the master cluster can't reach one of the satellite clusters, then we won't be able to configure that cluster to receive global traffic. I think what we're best suited for is intermittent connectivity: if we drop connectivity to that cluster, but the satellite clusters still have connectivity to each other, then as long as they've already been configured and nothing has changed, you will have persistent connectivity. But as soon as we need to change something and we need to update configuration on a satellite cluster we can't connect to, then there will be a loss of connectivity for that period.
H: There are strategies for mitigating that; we haven't gone too deep into it, but one of the nice things about the Service Mesh Hub design is that anything can technically be considered a master cluster. So at a certain point, if we decide we want to distribute Service Mesh Hub instances across each satellite cluster, in order to ensure we never have a problem with a single master cluster losing connectivity, we can do that.
H: Part of the approach in the design is that Service Mesh Hub should run the same regardless of where it's running; you can even run it locally. What you really need is a common bucket of CRDs, all the CRDs that Service Mesh Hub uses to represent the state of the world. As long as that data is available, whether it's replicated across clusters or centralized in a master cluster, Service Mesh Hub should produce the same result.
J: Thank you very much, yeah. And Dominic, I just wanted to add that we're also going to improve the docs dramatically. We're going to have someone dedicated to writing docs, so I think you will see the docs getting way better, and we'll be able to write more about the implementation.
C: I had one question. Can you hear me?
C: Yeah, so when I was going through the code where SMH does service discovery, I found that you check whether a service is actually backed by a workload, and you reject the service if it doesn't have an actual workload. I just wanted to see if that's the right approach, because with Kubernetes services, some have selectors and some don't; services backed only by Endpoints are a good example. So I wanted to know.
H: It's still a requirement for subset selection. If you want to route to a subset, if you want to do a traffic shift, the subsets are based on those backing workloads; that's how we partition a service. But we can support ExternalName services and headless services, and that'll be part of the benefit of switching to the next version. We'll probably be doing a pre-release based on the refactor soon.
J: Thanks. One more thing we're going to do, and I don't know if it would be specifically interesting for you, is that we're also going to release a UI. I think that may be something you'd be interested in: mainly a read-only UI that will give you a good way to see if something's wrong, kind of like a dashboard.
J: Because the way our resources always work, there's a notification of whether each one is accepted or denied, whether there is any problem.
J: So all of this is going to feed into a nice UI showing all the upstreams, all the policies, and everything that is not in a good state, so you'll immediately be able to go and debug it and understand whether the problem is in the mesh itself or in the federation.
F: All right, if there are no other questions, we're just a few minutes before the end of the hour, so we can wrap up. Anything else that comes up we can add onto the list for next time, and we'll see you all in two weeks.