From YouTube: 08.12.2020 Service Mesh Hub Community Meeting
C: Awesome, all right. Sorry, let's see. John, thank you for all the agenda topics.
C: Right, all right. Do we want to start off? We've got a few topics in here for the agenda. John, you've got the first three; do you want to go ahead and pose your questions?
E: Two weeks back we had proposed, or talked about, some ways to get additional discovery of workloads and services (mostly more workloads and services) into Service Mesh Hub. As I conveyed, and provided some links for last time, that was related to some of our interaction with NSM, the open source project called Network Service Mesh, which does things a little bit outside of the straight CNI Kubernetes service landscape.
E: So that was the sort of impetus for that suggestion, and I just wanted to continue that a little bit and see if you've had any further discussion on it, any further thought on how to proceed here. We have been, in the meantime, exploring an option to just try to do this via the creation of a CRD. It's a little clunky, and there are some issues we might run into; that's probably a question for later on.
E: But it's really just more of a continuation of that topic from last week, to see if you've digested that request at all, or what you think, or how we should proceed. Does that make sense?
F: Yes, it does. So Scott is not here right now, and I think that he had a lot of the knowledge. Harvey, can you try to take this?
E: Yeah, yeah, at the highest level that's what it is. And there are a number of ways to skin this cat too. Your CRD offers one way. If there was a more flexible way to discover the services that are created by the underlying mesh via config, that might be another way. Or the most flexible and extensible way: if there was some kind of API offered, to be able to programmatically create them.
E: Maybe the CRD is that API; maybe that's what you'd argue, which is okay. I think what we're really trying to figure out is where things fit, what's the best way, and how we actually proceed on an option that the broader community can buy in on and support longer term. Does that make sense?
F: Yeah, sure. So if I understand correctly, basically you're looking for some SDK or something like that, some API that will help you to create those CRDs, right? In a nutshell, you...
E: No. If we're going to necessarily use the CRD, if the decision is just to create CRDs, okay: if that's the API that we want, where Service Mesh Hub itself just uses the existence of a CRD to convey that workload and service mapping, then we could probably work with that. It's not perfect, but we can probably work with that. I don't think we need an SDK to do that.
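As a concrete illustration of the "CRD as the API" option being discussed, here is a minimal sketch, in Python, of what a manually created mesh-service resource could look like. The API group, kind, label key, and field names are hypothetical placeholders, not Service Mesh Hub's actual schema.

```python
# Sketch of the idea: an external system (e.g. an NSM integration) conveys a
# workload/service mapping to Service Mesh Hub simply by creating a custom
# resource. Group, kind, label key, and spec fields are all hypothetical.

def make_mesh_service(name: str, cluster: str, workload: str, hosts: list) -> dict:
    """Build a manifest for a manually declared mesh service."""
    return {
        "apiVersion": "discovery.example.io/v1alpha1",  # hypothetical group
        "kind": "MeshService",
        "metadata": {
            "name": name,
            "labels": {
                # Mark the resource as externally managed, so normal
                # discovery reconciliation can leave it alone.
                "discovery.example.io/managed-by": "external",
            },
        },
        "spec": {
            "cluster": cluster,
            "workloadRef": workload,
            "hosts": hosts,
        },
    }

svc = make_mesh_service("nsm-payments", "cluster-a", "payments-v1",
                        ["payments.example.internal"])
```

Applying a manifest like this would be the whole "API": the existence of the object conveys the mapping.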
E: We can figure out how to create that CRD. But a better way might be offering an API, as Pavin described in some of the proofs and links that we provided last week, so that we can do a straight API interface into your mesh discovery and create those services and workloads that way. So we're not necessarily stuck on a particular way, but we want to get a dialogue going and decide what approach can be supported, so we can proceed with some of our work.
H: So let me ask you this, by the way: when you say an API, would that API create a discovered resource? Because the question is where it eventually needs to get persisted. So by "that API", do you mean that part of it is persistence, or persistence in a CRD?
I: If I may take this: I think on the last call we briefly spoke about having additional discovery capabilities baked into SMH. As to how it works today: it recognizes any of the cluster services, and if there is a workload backing one, it registers it as a mesh service and tries to federate them across the clusters.
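The discovery behavior just described can be modeled roughly as a selector-matching rule: a cluster service becomes a mesh service only when some workload's labels satisfy the service's selector. This is a simplified sketch, not SMH's actual code.

```python
# Miniature model of the pairing rule: a service is registered as a mesh
# service only when a workload's labels satisfy the service's selector.

def selector_matches(selector: dict, labels: dict) -> bool:
    """True when every selector key/value appears in the workload labels."""
    return all(labels.get(k) == v for k, v in selector.items())

def discover_mesh_services(services: list, workloads: list) -> list:
    """Return (service, workload) pairs eligible to become mesh services."""
    pairs = []
    for svc in services:
        for wl in workloads:
            if selector_matches(svc["selector"], wl["labels"]):
                pairs.append((svc["name"], wl["name"]))
    return pairs

pairs = discover_mesh_services(
    [{"name": "reviews", "selector": {"app": "reviews"}}],
    [{"name": "reviews-v1", "labels": {"app": "reviews", "version": "v1"}},
     {"name": "ratings-v1", "labels": {"app": "ratings"}}],
)
# → [("reviews", "reviews-v1")]
```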
I: What we and Scott discussed last time was having extra capabilities, through either an external service registry (like etcd or Consul or something else), or SMH trying to read the Istio service registry, which Istio exposes through service entries, for example; just trying to expand the discovery capabilities in SMH beyond the normal Kubernetes services it handles today.
E: It is maybe a slightly different question versus how you convey the object from somewhere else. In our particular case, we will persist these; we'll have a persistent store for the workload and...
E: ...service mappings that are associated with Network Service Mesh. But, as you said, that doesn't really help on its own; you will need to persist that in Service Mesh Hub as well. How that's done, I guess we didn't really offer any suggestions on the best way to do that. We kind of thought we'd interact with you more on it. It's more: what is the means to convey that mapping over to Service Mesh Hub? Must it be a CRD?
H: Yeah, currently everything is CRDs, right? So Service Mesh Hub assumes, or rather the API that Service Mesh Hub uses internally is, the CRD. So in a sense, to make it work with, for example, an Istio ServiceEntry, we have to create an internal abstraction that would convert one to the other internally. And Harvey, I'll defer to you: do you think there's a good place to do that in the code, or how would you go about this?
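The internal abstraction being described, converting SMH's internal record into an Istio ServiceEntry, could look roughly like this sketch. The ServiceEntry fields (hosts, location, resolution, endpoints) are real Istio API fields; the shape of the input record is a hypothetical stand-in for the internal CRD.

```python
# Sketch of a one-way translation: internal mesh-service record (shape is
# hypothetical) into an Istio ServiceEntry-shaped manifest (real API fields).

def to_service_entry(mesh_service: dict) -> dict:
    """Convert an internal mesh-service record to an Istio ServiceEntry."""
    return {
        "apiVersion": "networking.istio.io/v1beta1",
        "kind": "ServiceEntry",
        "metadata": {"name": mesh_service["name"]},
        "spec": {
            "hosts": mesh_service["hosts"],
            "location": "MESH_INTERNAL",   # the service is part of the mesh
            "resolution": "STATIC",        # endpoints are listed explicitly
            "endpoints": [{"address": ip} for ip in mesh_service["endpoints"]],
        },
    }

se = to_service_entry({"name": "payments", "hosts": ["payments.global"],
                       "endpoints": ["10.0.0.12"]})
```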
E: Yeah, well, let me stop you a little bit. I think what you're saying is that no matter what kind of API you offer, it's going to end up mapping back to the CRD, right? And us creating a CRD is a fairly trivial exercise. So I think maybe what we ought to do here, at least for the next two weeks, is proceed down that path of using the CRD.
E: I think what we've got to talk about, then, is how to ensure the CRD validation is relaxed, because right now it's tied very tightly (or at least, according to Scott last week, tied very tightly) to the existence of those workloads in Kubernetes, so that...
G: That constraint is only for mesh services that are managed by Service Mesh Hub. If you end up creating the CRD yourself, there is no such constraint.
G: So I think these are some design points that are worth talking about before we move forward with any one approach.
G: I think what we found when we did this testing is: let's say I create a CRD manually, and in the background one of the Kubernetes services is deleted. What Service Mesh Hub then does is go through the reconcile loop: okay, something was deleted, I need to update my services. It sees the mesh service I created manually, thinks it shouldn't be there, and tries to delete it.
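That failure mode, and one design point worth discussing (an "externally managed" marker the reconciler honors), can be sketched as follows. The label key is hypothetical.

```python
# Miniature of the reconcile problem: a loop that deletes any mesh service it
# did not derive from discovery will clobber manually created ones, unless it
# honors an "externally managed" marker. The label key below is hypothetical.

EXTERNAL = "discovery.example.io/externally-managed"

def reconcile(desired: set, existing: list) -> list:
    """Return the names of resources the reconciler would delete.

    `desired` is what discovery currently derives from Kubernetes;
    `existing` is what is stored as mesh-service resources.
    """
    doomed = []
    for res in existing:
        if res["name"] in desired:
            continue  # still backed by a discovered service
        if res.get("labels", {}).get(EXTERNAL) == "true":
            continue  # manually created; leave it alone
        doomed.append(res["name"])
    return doomed

deletes = reconcile(
    desired={"reviews"},
    existing=[{"name": "reviews"},
              {"name": "nsm-payments", "labels": {EXTERNAL: "true"}},
              {"name": "stale-svc"}],
)
# → ["stale-svc"]
```

Without the marker check, "nsm-payments" would be deleted too, which is exactly the behavior observed in the testing described above.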
I: The service could be backed by a proper workload and everything; I don't want SMH to delete it just because it wasn't discovered through the normal discovery process.
G: We are just shoring up the testing (the unit testing, end-to-end testing), porting over the meshctl commands, and we'll have a decent amount of work to update the documentation and make sure that's a good source of truth. Is that on master? It's not on master yet; I think we are a few days away from being able to merge to master and do a pre-release with full Istio support, but lacking App Mesh/EKS support.
E: Okay, that's great. Actually, that was one of the other questions we had. So the expectation is that, on the order of days, we should see it hit master, be able to work through it, see what changed, and try out what you mentioned, Harvey, right?
F: Yeah, so the only thing that is left, as we mentioned, is that he's working on the end-to-end testing, more robust end-to-end testing. We also had subsets merged just yesterday; that's actual functionality, like multi-cluster subsets. So that's something that wasn't on the release from master, another feature that we just merged yesterday. And then docs: that's the only thing, and we're already working with our docs people. So I believe that we will probably be in a very good position by the end of this week.
E: Okay, great. So based on this, I think what we should probably do is just focus on the CRD creation at the current time, see how that meets (or doesn't meet) our needs, and from there we can figure out whether we want some other kind of API, some other kind of abstraction that you all mentioned, or not. And then talk in two weeks about what we found, what we think, and any input you have on direction.
F: Totally, totally. So maybe I should just give you a little bit more of an update. As we mentioned, App Mesh is not in the new refactor, but that's the first thing that we're going to work on immediately after, so that should come very soon. And Eitan is here on the call (say hello, Eitan); he's actually working right now on adding OSM, Open Service Mesh, integration. Eitan, you can give a status if you want.
K: I'd say end of week. I am (oops, sorry) making my way through dependencies right now. So let's say end of week, yeah.
F: So that's another one that we are focusing on: App Mesh, as well as OSM, Open Service Mesh from Microsoft. Those will be the next ones; we're working on them in parallel right now. And then there is some more stuff that we wanted to add that is kind of on the way. One thing is what we're calling routing based on locality. That will come next: we did the failover, but we want to be more specific on this, so that will come right after it. And VM support; that's the stuff we are focusing on right after.
E: I'm just curious, and this maybe also goes to my question on the limited trust: are there any design documents or descriptions about how some of these things are coming in? How do you generally manage that? Because it would be interesting, as these features are being put together, to be able to look at them, comment on them, or even think about how they influence some of the work we're doing.
F: Yeah, so I don't think we have a doc. Yuval is here; you can probably talk a little bit about that. We know how that's going to look and what we're going to do about it, but it was an internal discussion more than an actually written one. I mean, we can probably put something on a page. What do you want: kind of a high-level share on limited trust?
H: For big features, we've done that in the past: we created design docs. I believe there's a folder for those in the repo, for features like OSM.
L: Oh, okay. Is that what you're looking for? Just to make sure.
E: Well, I'm not trying to be specific here, or ask for an exact thing. I guess it's just that, as we're trying to keep up with what's happening and some of the work that's going on, it's always good to be able to look at things at a personal level, if you will, and contrast them with some of the things that your company, or you personally, are interested in. So it isn't really a specific request. And if you've been working under a different modus operandi, that's fine; I just wanted to make sure there's nothing missing that we don't see, that kind of thing.
F: Right now, as I said, we are not actively working on this, but we have a PoC already done and we basically know exactly how it works. We just need to finish the refactor first; once that's done, we will put people to work on it. Does that make sense?
J: So I heard two different things there, John. One is: what are the things that we're working on, and what is the heads-up that you can get before it lands in the code base, so that you all are aware as well; in part so that we don't duplicate efforts, if you were to look at these same things too. So that's one, I would say, up until now.
J: We've loosely been using Asana internally right now, but I think there's probably room for discussion about what's best for that. The second thing is more specifically around the design docs. We have done design docs in the past for the purpose that you started hinting at, where (especially in the community here) others might have an interest in what that implementation looks like and can bring other perspectives.
J: That would be useful to include in that implementation. And I don't want to speak for engineering; like you just said, Joe is going to be leading this, but I think we're open to figuring out a good solution for both of those.
M: Yeah, absolutely. Go ahead, sorry.
M: Hi, I'm Joe. I'm going to be assuming some project management responsibility on the Service Mesh Hub project. I'm ramping up technically with the current state of the new refactor, and also getting a grasp on the way that we have managed the various work streams going into Service Mesh Hub. It's going to become my responsibility to make sure that we're communicating out to all of our users and other stakeholders...
M: ...what exactly we're working on and what our plans are. And like Christian said, I think that's going to contribute a lot to our technical direction as well, making sure that we get as much input as possible from folks like you on this call and anybody else in the GitHub and larger community. Those processes are going to be evolving, and any input you have as we go would be more than welcome.
E: Okay, yeah, that's great. It sounds like you already anticipated this question and are working on a plan to help make it clearer. So let me try to be a little more specific on the request that both Lehigh and I were interested in. The limited trust: we talked about that before, and you said that there had been some plans and some design talks about it. That's a model that we're very interested in as well.
M: Yeah, for sure. Today, the way that you could contribute to these conversations is, unfortunately, literally in conversation in this meeting. And that's absolutely a note taken on our side: we need to be better about having public-facing documentation and design docs, and even just holding discussions in GitHub issues about the plans that we have and how we're going about implementing them.
M: For this specific case, I think that Yuval would be able to describe the PoCs that we've been running internally and the conversations that we've been having. But as far as where that conversation is happening right this moment, it's in calls like this; that is something we'll be improving.
E: Yeah, he mentioned that, but I hadn't heard of anything since. All right, well, let me talk with Vijoy and figure out how we move that along. We can try to help push that along as well.
A: So if I understand this correctly, a request would come, say, to cluster B with an identity from cluster A, and there would be something in the cluster that can transform the identity of A to B, or...
H: It will not transform the identity, but the request will come to B with the identity of the gateway, because the gateway is in cluster B. The gateway will also make a decision about cluster A, because the gateway is the only piece here that knows what's cluster A and what's cluster B, and it can make the decision whether service A from cluster A is allowed to talk to service B in cluster B.
H: We can't really translate an identity without re-encrypting, right? So that led us to the conclusion that the gateway needs to make the auth decision if the two meshes have two different identity mechanisms.
H: I think we had a PoC a few iterations back, not with the current version of Service Mesh Hub; we used just Gloo to do a PoC.
K: Oh yeah; I mean, this is a bit of a digression, but we could easily show you how to do that with Gloo, and we do that with Gloo all the time. I do that for some of our bigger new features, like failover and all this stuff. This is something that you can accomplish in Gloo very easily.
A: If you can point to that documentation in this chat, or in the Solo Slack channel for SMH, that would be great.

K: Yeah, let me go find it.

A: Thank you.
E: So have you explored Envoy's ability to associate SDS on a per-cluster basis? Has that been something you've explored at all for this, or are there any advantages here that you've thought about?
E: Envoy supports the ability, per Envoy cluster, to specify an SDS server (or an SDS source, I guess I should say): where it's going to run its secret discovery service, or who it connects to for secret discovery service. That doesn't necessarily solve exactly the identity issue, but it does allow an easy way to get certificates for different clusters from different places.
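For reference, the per-cluster SDS wiring being described looks roughly like this in Envoy's v3 config, sketched here as a Python dict builder. The field names follow Envoy's API; the cluster name and SDS source name are made up for illustration.

```python
# Sketch: each upstream Envoy cluster can carry its own TLS transport socket
# whose client certificate comes from a distinct SDS source. Field names
# follow Envoy's v3 API; names like "sds-cluster-b" are illustrative.

def cluster_with_sds(name: str, sds_cluster: str, secret_name: str) -> dict:
    """Build an Envoy cluster whose TLS cert is fetched from `sds_cluster`."""
    return {
        "name": name,
        "transport_socket": {
            "name": "envoy.transport_sockets.tls",
            "typed_config": {
                "@type": ("type.googleapis.com/envoy.extensions."
                          "transport_sockets.tls.v3.UpstreamTlsContext"),
                "common_tls_context": {
                    "tls_certificate_sds_secret_configs": [{
                        "name": secret_name,
                        "sds_config": {
                            "api_config_source": {
                                "api_type": "GRPC",
                                "grpc_services": [{"envoy_grpc": {
                                    "cluster_name": sds_cluster}}],
                            },
                        },
                    }],
                },
            },
        },
    }

# Certificates for traffic toward cluster-b come from cluster-b's own SDS.
c = cluster_with_sds("outbound|cluster-b", "sds-cluster-b", "cluster-b-cert")
```

As noted in the discussion, this fetches per-cluster certificates; it does not by itself translate identities.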
N: I think what John's trying to describe is the mechanism where, if SMH had an almost-third source of identity, or its own umbrella of identity for services, then for the actual connection between services you could potentially have the Envoy instance that's acting as a gateway use something in SMH as an SDS source for specific cluster targets in the Envoy config.
J: So you could theoretically keep a central source of the certificate chains for all of the clusters that you manage. But I think that's a little bit separate from translating the identities between two different...
N: Yeah, yeah. I think that's where we're trying to understand the options on being able to have that federated identity model, and what we could make use of at the Envoy level to accomplish that. It would be akin, I guess, to having something in each target cluster that trusts the source from the other cluster; because we know who's allowed to talk to whom at that level.
J: Maybe more generically, not SPIFFE-specific: how do we solve it at that level?
A: You could define the trust domain such that a workload in cluster A can talk with only a certain resource in cluster B, because those two have established trust between the two of them, not between the two clusters, right? Because if we're talking about limited trust, that's the most limited you can get.
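The workload-level limited-trust model being described can be sketched as an allow-list over SPIFFE identities. The `spiffe://<trust-domain>/<path>` format is the standard SPIFFE ID shape; the trust domains and paths here are illustrative.

```python
# Sketch of "the most limited you can get": rather than trusting whole
# clusters, the gateway authorizes individual workload-to-workload pairs by
# SPIFFE identity. Domains and paths below are illustrative only.

def spiffe_id(trust_domain: str, ns: str, sa: str) -> str:
    """Build a SPIFFE ID in the spiffe://<trust-domain>/<path> format."""
    return f"spiffe://{trust_domain}/ns/{ns}/sa/{sa}"

# Only this one workload pair is trusted across the cluster boundary.
ALLOWED = {
    (spiffe_id("cluster-a.local", "payments", "frontend"),
     spiffe_id("cluster-b.local", "payments", "backend")),
}

def gateway_allows(source_id: str, target_id: str) -> bool:
    """The auth decision the gateway makes for a cross-cluster request."""
    return (source_id, target_id) in ALLOWED

ok = gateway_allows(spiffe_id("cluster-a.local", "payments", "frontend"),
                    spiffe_id("cluster-b.local", "payments", "backend"))
# → True
```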
J: Yeah, that makes sense. And then, like Yuval was saying, the gateway would live in the target cluster and would be configured and instructed by, let's say, Service Mesh Hub to have the capability to do that identity translation: to make the service in that target cluster part of that trust boundary.
E: Okay, so I don't know if anybody else from our side has more questions, but I think at a general level we understand what you're doing. I guess maybe the question would be whether there are specific areas that you're looking for help with; maybe we could take that offline via that other meeting, Idit. But I think we would like to try to get some people more directly involved and contributing to the code base.
E: Yeah, maybe. But I'm a technical guy, not the guy that controls the resources either. So we'll figure it out; I'll talk about it after this meeting and figure something out.
F: That sounds good. As I said, in the meantime, I think you guys know what we're working on next, right? The limited trust, bringing back App Mesh, OSM, and then also the routing based on locality. That's the stuff that we're focusing on right now.
E: So I hate to bash anybody on this call, but OSM is fairly new, right? Fairly green, yeah.
F: So the reason we're doing it is very, very simple. I believe that, in a nutshell, the question of whether a mesh will succeed or not is, to me, based on: does it have an audience to use it? Is there already a purpose for someone to use it? And OSM will eventually be used, because it's being used in Azure, right? That's basically what they're going to do. All the clouds, the three big clouds, are going to have a mesh, and they're going to use their own mesh, and therefore I believe that it's going to be something that will catch on eventually.
F: Exactly; but there are a lot more service meshes that I'm less optimistic about. I think that they will not survive, or that they will disappear in the end. I don't think that OSM and App Mesh are among those; I think that Istio, App Mesh, and OSM will be here to stay.
F: I believe that HashiCorp's will be something like Nomad, in my opinion: an alternative that people will use, but it's not going to be what most people use. It's exactly like Nomad, right? People use it, but it's not Kubernetes. So this is the way I'm looking at this when I need to target what the future of this project is; that's what I'm attempting to do.
E: Yeah, no, I would agree with you. On the public cloud side, they're all going with their own specific meshes, and that's likely to continue; I don't see that changing. If it was going to change, Istio probably had the biggest lead, so there would have been a gravitation toward that, but that's not happening. I guess the part that has a very keen interest for us is: as the on-prem type data centers and cloud environments start building meshes, do they use Linkerd? Do they use Istio?
J: I think everything that we've talked about in terms of supporting Istio and App Mesh and OSM is dead on. I also want to point out that, by supporting these multiple meshes, and further refining the internal APIs that we use to be able to support them in Service Mesh Hub, and trying to get parity with some of the features that might exist (or, in some cases, not exist) in these different meshes, it'll make it a lot easier.
J: Let's say Istio doesn't continue to be the dominant self-managed service mesh; it would be pretty straightforward at that point to add, for example, Consul, if it starts to gain market share, or whatever. So I think right now we're focusing on what we see as the right ones, but we're always happy to have feedback from you all, or anyone in the community, that says: actually, what you're doing with Service Mesh Hub should apply to these ones over here as well.
N: Yeah, I was just going to add: I think the drivers that we've seen, at least with the customers that Cisco talks to, have not been which cloud they live in. It's been more their skill set with the API: how much investment they've already made in integrating with an API.

N: So that's where we see Istio way out ahead. Maybe, because the concepts are so similar across these meshes, that's not as big of a deal; but from our point of view there seems to be a significant investment already in integrating with the Istio API and operationalizing around Istio.
So
I
can't
see
like
just
because
I
went
to
app
mesh.
I
mean
I
went
to
aws,
I'm
going
to
jump
into
app
mesh
full.
You
know
fully
right
like
or
I
went
to
gke
or
you
know
that
that's
kind
of
how
it
seems
to
us,
maybe
for
greenfield
where
you
haven't,
started,
investing
in
a
integrating
with
a
mesh.
N: No, I said that if they've already invested in integrating with Istio, that's what we've seen be the driver: they've stuck with that. They haven't really looked around, I guess, even at the public cloud APIs that are native to that environment.
E: I think another way to say what Tim is saying is that, in our experience, the customers that we see as early adopters of service mesh, or the customers we've talked to who are early adopters and also use Amazon, don't show a lot of interest in App Mesh, because they've already, you know, been...
F: I believe that if you run an app on AWS, App Mesh will give you functionality that you will not be able to achieve with Istio; for instance, serverless.
J: In the field we do see people who are on Istio today, and who have been on Amazon, and they have been reluctant; they haven't moved or put anything on App Mesh, because App Mesh didn't have the features that they needed. But they are anticipating (and one in particular is anticipating, by the end of this year) that App Mesh will have the features that they want, and they will actually move off of Istio. So we are seeing some of that, but we...
J: The folks who, like you mentioned, go to Amazon; like Idit was saying, they do want native, somehow better, integration with the service meshes they use on Amazon. There are integrations that Amazon provides where Istio will never get there, but App Mesh could, potentially, and that would be a reason then to use both: not starting off thinking that you're going to use two service meshes, but using the networking technology that best suits the problem space you're trying to solve. That could be...
F: ...SMH, right? So you really don't care, because the API that you're going to use will be the SMH one, and then for you it doesn't really matter whether you're using Istio, App Mesh, OSM, or anything else. That's just implementation details, and you can theoretically swap.
N: Yeah, yeah; that's the attractiveness of SMH for us, that driver we think we see. Given the complexity around using a service mesh API, if you're going to invest in that, then why not an API that works across multiple service mesh types, right?
F: And the way I see this: service mesh is interesting for sure, giving you a lot of functionality on one cluster, but I think that multi-cluster will be far more interesting. So a lot of the features that we can do on top of it, like failover and, as I said, routing based on locality.

F: That will be the interesting stuff. And then, what are you going to do if you have an App Mesh in AWS and an Istio somewhere else? Is that going to work? I feel that the fact that we're putting in this abstraction means exactly that: we'll be able to put interesting stuff on top of it, and we will not care about the implementation underneath.

F: A lot of stuff that I can see, like, for instance, the Gloo project that we did before, uses a lot of networking underneath; so a lot of interesting stuff, chaos monkey and so on, can come on top of it. It would be very useful if they all spoke the same language, and then it's an implementation detail, no matter what.
B: Yeah, I'm also curious, and it's not very related, but I haven't seen any discussion about Traffic Director, and the Google Cloud seems pretty big. I don't know, are there any plans, or are there some problems with integrating with Traffic Director?
J: So Traffic Director today is very simplistic. It is a managed control plane, obviously, but it doesn't support the Istio APIs yet. As far as I know, they keep saying that it will at some point; when it does, then obviously we will be in a position, since we support the Istio APIs in Service Mesh Hub, to integrate with Traffic Director and any additional configs that you might need for that. But right now it's just a very simple config store, or xDS server, for onboarding.
J: I had one last question, I think. When we create the service CRD manually, is there any way to influence the Istio ServiceEntry creation?
K: We basically look at the service to find out: is it a LoadBalancer, and can we use one of those IPs? Is it a NodePort, and can we find the IP of the node? And so on. But potentially a very simple option could be a hard-coded IP, or some mechanism that we put on the mesh service so that it doesn't run our discovery mechanism for the IP.
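The resolution order just described, with a hard-coded override short-circuiting discovery, can be sketched as follows. The override annotation key is hypothetical.

```python
# Sketch of the address-resolution order described above: an explicit
# override wins, then a LoadBalancer ingress IP, then a node IP for
# NodePort services. The annotation key below is hypothetical.
from typing import Optional

OVERRIDE = "discovery.example.io/address"

def resolve_address(service: dict, node_ip: Optional[str] = None) -> Optional[str]:
    """Pick the address a ServiceEntry-style record would use for a service."""
    if OVERRIDE in service.get("annotations", {}):
        return service["annotations"][OVERRIDE]  # hard-coded IP wins
    if service["type"] == "LoadBalancer" and service.get("ingress_ips"):
        return service["ingress_ips"][0]
    if service["type"] == "NodePort" and node_ip:
        return node_ip
    return None  # nothing routable found

addr = resolve_address({"type": "ClusterIP",
                        "annotations": {OVERRIDE: "203.0.113.7"}})
# → "203.0.113.7"
```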
J: And I would say that's how it's working today, and we can extend it; but the question is actually a little bit broader in my mind, because at a higher level what you're asking is: what if we want to be explicit, or create our own entries for services, that can then be discovered across multiple different clusters, right?

J: So I think that's the more general form of the question; what you asked specifically is how Istio does it. But in my mind, going back to what Idit said: when you have this management plane, and even the API to support it, you can build things like failover and routing locality, like she said, but also service discovery, identity federation, and all of these things on top of Service Mesh Hub.
F
Thanks
fantastic
awesome,
so
I
mean
we
will
try
to
put
some
stuff
in
writing
to
make
sure
you
know
kind
of
like
put
a
little
bit
more
structure
around
the
community
and
again
any
requests
or
a
you
know.
F: ...feedback will be very, very useful here. If there is a way that you prefer, that would be welcome as well. But I think this is our homework and our action item. And John, I don't know if you are the point person; you're the only one I see, because you're the only one with video, so I'm picking on you. But if we wanted to set up some time offline, to kind of spread the word, that would be very... yeah.
E: I think we need to, as we said in the last couple of calls, try to get more involved, and try to actually make some specific contributions. So we need to figure out that process. We've been a little bit slow, both on our side and in trying to figure out the community structure and how to do that. So yeah.
E: Okay, great; okay, sounds good. I will try to figure that out within the next day or so, and contact you.