From YouTube: 07.15.2020 - Service Mesh Hub Community Meeting
A
So this one, and Harvey, we can have Harvey and you all can do this one. We'll give it one more minute, but then what we will start with is some of the recent things that were merged into master. I think that'd be a good starting point, and then we've got a couple of items from the community. So, Scott or Harvey, do you want to kick off with any recent items that may have been merged?
B
Yeah, we can talk a bit about the 0.6.0 release. So let me include a link.
B
This is only supported on Istio right now, and we use an EnvoyFilter to sort of create an aggregate cluster, in Envoy terms. That's how we achieve this.
B
Aside from that, there are just a few bug fixes. We have one breaking change: we renamed zephyr to smh in our API group names. So it's a minor thing.
B
Yeah, I'd have to look into the details specifically. Maybe you don't need to do a fresh install; you can probably keep some of your CRDs. But I'd need to look into it exactly.
A
Got it. Even just a few tips might be good, and I'll take it as an action item that maybe we could get a little blog out on this, so that people are aware.
C
Yeah, sure. It would actually be great, once we're a little bit further along and we've finalized the architecture and the APIs, to publish something for the community, because we don't have anything like that yet. But I can give a rough overview. For the most part it's an internal refactor, and it shouldn't affect things like the user experience.
C
Some of the APIs will be changing, but things should mostly just be moved around; everything that's possible today will still be possible. Whenever we release this, we'll do a bump of our minor version, because we're still pre-1.0, and all the breaking changes will be noted there. We'll make sure to do everything we need to to support migrating from the older APIs.
C
There are certain things I can cover at a high level. For example, the statuses on CRDs are going to contain a lot of information about what the controller is actually doing, so bugs and things like that should be a lot easier to track down. And we're simplifying some of the internals: the way Service Mesh Hub interacts with different clusters, or how its interface works with multiple clusters.
C
Yeah, again, it's not a huge user-facing change, but for anyone in our community who's interested in getting into the internals of Service Mesh Hub, there's a significant refactor going on, and maybe it makes sense to either publish a document or have us set aside some time to go over it in more detail.
C
I think that within one or two weeks, at least within a week I would say, we'll have the skeleton down for how everything's going to look. What I would like to do, if possible, is show something like what we had in some of our older projects at Solo. We had some guides there: if you want to extend the system, here are the files you want to edit, and here's how to do it, with a little bit of a workflow.
C
For what we call adding plugins, or adding additional extensions and support. So if we have a place where those interfaces are defined, we can go through that. I would say two weeks to be on the safe side, but possibly as early as one week, for the community. I'm curious whether any of our community members have looked into the code itself, and are maybe thinking about contributing some features.
C
To help guide what exactly we want to showcase from the refactor: what is it that people would be interested in seeing? I would like to hear that.
A
Could it be framed in terms of what kinds of functionality areas the refactor impacts, for people who may want to contribute things or build onto Service Mesh Hub?
C
Yeah, sure. I don't know if I want to jump into it right now, but basically there's a new architecture we're approaching. Service Mesh Hub, from our point of view, can be understood as a translation engine. Basically, everything that Service Mesh Hub does is taken from higher-level CRDs. For example, a user defines a failover CRD, and we take that and translate it into resources for Istio and other meshes, which are presumably based on a declarative API of some kind.
C
There's some configuration for the mesh that we know how to operate, and then we have this abstraction on top of it. So it's a translation engine, and basically what the refactor attempts to do is separate out the parts of the translation that the user, or the use case, is concerned with.
C
So, for example, if I want to set up failover, what I want to be able to do is have my failover CRD map to all of the existing objects it needs to: for example, an existing DestinationRule, or existing VirtualServices.
C
Anything that's already defined in your system, whether it's defined by Service Mesh Hub itself or preexisting. Potentially, and I'm not saying this is supported right now, but eventually we would be able to support a model where Service Mesh Hub shares configuration with what already exists in the cluster.
C
What we basically want to do is allow users to come and build a plugin. Let's say they want to add a new CRD, or add a field to an existing CRD: they can do that, modify our protobuf files, re-run the code generation, and then just add in a plugin that adheres to an interface and adds the necessary config, say an Istio ServiceEntry or a VirtualService. So the plugins define the domain, but you don't have to do any wiring yourself; you don't have to go and create any Istio objects.
A
So then, what we'll do is, I've made a note in here. Since this meeting is every other week, we can give an update if you want to talk more.
C
Okay, so just to be clear: this is a work in progress, so a lot of stuff is stubbed or unfinished, but I can show you here. If we look on the left side, I have the list of plugins that are currently implemented in this refactor. Say I have an mTLS plugin: the mTLS plugin is going to make decisions about how all the mTLS is going to be set up in your network. Right now it's very basic; it just enables mTLS for everything.
C
And then, based on something in that TrafficPolicy, it'll determine how to set up the TLS. So basically what you have is a hook into the data, the input and output data that Service Mesh Hub is working with, almost like a filter, like an Envoy filter. You can act on the output object, and make decisions based on the input object, in order to add functionality and behavior.
C
I can make sure that I propagate this through, and when I process my DestinationRule, I can look at the value of that and put it on there.
E
Hey, sorry for the interruption, but could you show the retries as well? That's easy to wrap your head around. Multiple examples, yes, cool. Thank you.
C
Here's how the retry plugin works, and it's a bit more mature than the mTLS one, so this is a good example. The interface here for a traffic policy plugin is that you get a reference to the TrafficPolicy, because the API for retries is defined in traffic policies.
E
Is the plugin target service-mesh-specific? Because I do see Istio in the code.

C
Yes.
C
Whether it's one plugin that implements multiple translations for different mesh types, or you have separate plugins for each mesh, the interface would change slightly. So this function here, ProcessTrafficPolicy: instead of the output being an HTTP route, the output would be, let's say, an App Mesh virtual router, just for comparison, and then we can do the same thing.
C
Yeah, sorry, I hope I didn't take up too much time with that.
D
Maybe the idea is to explain why we're doing what we're doing. I mean, we worked a lot with customers, and what we're discovering is that everybody has their own needs, their own requests. What's beautiful about the system we're building is that you can do this by having the right hooks in Envoy or in the service mesh, something we're using a lot, like the Envoy proxy filter, the EnvoyFilter in Istio, and basically we'll be able to extend it.
D
So, because we want to make sure that people will be able to extend it for their own use case, and we're rewriting a lot of this for the people who are interested and for our customers, it was very important to us that it be as simple and fast as we can make it. That's basically what the system is trying to do. First of all, it's cleaning up a lot of stuff.
F
Would it be accurate to say that it speeds up the use case where, for a single service mesh type, you want to add functionality without smh requiring you to have some form of that functionality in the other service mesh types? Is there no required stubbing out of the functionality in service mesh types that don't support the corresponding feature, or is this more just a pure...
F
...API mechanism for leveraging the specific API interfaces that Service Mesh Hub defines, and just adding functionality underneath the specific service mesh type? Sorry, I'm getting too many service meshes in mind.
C
You'll be able to do that: the piping under the system will automatically handle which meshes are supported by a plugin or policy. So this definitely simplifies that.
D
They don't have all that functionality, and AWS is adding a lot of functionality, so we want to make sure we can keep up as fast as they add it. From our experience doing this in different projects, this model works so well that we could do a feature in a matter of days, and it will be very solid, because the system itself is very solid right now. It's very, very robust, because you don't need to worry about that.
D
So then you can go, without making sense of the whole system, add a very simple feature, and it's just going to work. First of all, at least for us inside Solo, it's taking the velocity insanely high; I mean, we can actually ship way faster, and I think that's important. That also helps us with the question of what will happen next.
D
If Istio is going to come out with 1.6 and the API will change, or if they add a new feature in 1.6, you can basically go very, very fast and make sure we're keeping up to speed with upstream on every mesh. And then, as you said, generally the system is built like this: we are only supporting the features of the service meshes that the service meshes themselves support. So, for instance, if Kuma is not, whatever...
D
If App Mesh is not supporting circuit breaking, we are not going to do it, right? We are not going to support that.
B
I think one helpful way to think about it is that the Service Mesh Hub interface is a superset of all of the possible features that exist in the service meshes we support. However, I think you asked a question about lockstep: we don't actually have to implement our mesh-specific features in lockstep with each other.
F
I think I'm still trying to understand exactly how Service Mesh Hub works in that way, where you have the intent of what you want to do, exposure of services and traffic-policy-wise, and then carry that out in the individual service mesh types. To some degree it looks like the implementation is very targeted at: if you have a service mesh like Istio, which has a ton more capability than, say, a Linkerd service mesh, and you just want to have multiple Istio service meshes under your smh umbrella...
F
...we have a ton of features for you. That seems like more the use case that you guys are going for, rather than a normalized API that will somehow give you least-common-denominator support if you do bring in a Linkerd API. And I think some of this is just me not understanding it completely, so I'm just trying to think it through.
D
It's what people want, and what we see in the field, right? If everybody is using Istio, we will give way more love to Istio, because that's what everybody is using, not because, oh, it's richer, so we'll be able to do this. It's basically driven by the market itself and not by what we want per se, if that makes sense. But indeed, the vision itself is definitely not to do the lowest common denominator. That's one thing I really don't think we should do.
D
We should go at the pace of the fastest mesh; we're all about innovating at the pace of the fastest mesh. We should make sure that we're basically getting all the features, that's number one. And number two is we're going to add a lot more meshes: Kuma is one that we're going to do, App Mesh is already supported, and also Linkerd; Consul is the next one.
D
So the idea is that we totally believe that, in the future, potentially you will want to run something on-prem and then want to go, for instance, to App Mesh, and it will make a lot of sense for you to use that because it's natively going to run there. And therefore we want to make sure that we are grouping them together into a virtual mesh.
D
You can actually act as one big mesh, so that's kind of the big one. But of course, if we're doing all of this, and we're basically coming with this API on top of it, so that all the meshes are talking, and given the fact that we have all the virtual meshes now, what will be interesting is how we can leverage this.
D
What can we do on top of it? I think there is tons we can do at the multi-cluster level, and I think that would be very interesting stuff: failover, or routing based on locality or based on user. That's the stuff we're hearing that people are interested in, and again, we would love to hear what you guys are interested in.
F
Yeah, that intersects more with what we are interested in at Cisco, I think. We are interested in, I guess, sort of the agnostic service gateway kind of model, and we want to add value underneath it a little bit, so we're probably going to have a proposal, maybe in time for the next meeting, about how we could integrate.
F
The very first step, I think, would be to integrate an alternate service discovery mechanism into Service Mesh Hub, and it may be exactly the use case that you guys are talking about for incorporating new service mesh types. What we're talking about is not really a service mesh type so much as a sort of connectivity layer that has awareness of services. So we're probably going to have some proposals soon on getting in a sort of experimental feature.
F
I think we want to work through the process with the community, trying to get that exposed to the community as early as possible, and just work on the implementation as we work on the overall solution as well. So I don't have all the details for that right now; that's just a stay-tuned thing from our side.
F
Yeah, exactly, that's sort of the approach. I don't know if you've heard of the Network Service Mesh project; it's a sandbox or incubation project in the CNCF, and Cisco is one of the main sponsors. The work we've been doing, my team and John Joyce's and Dominic's teams, is a solution based on that project.
F
The initial thing is just some multi-cluster service interconnect, with varying...
F
We have varying network function types that will provide that interconnect, and some features on top of that, and then we want to have the integration with Service Mesh Hub to give us that tie-in with the service meshes that are actually sort of locally owning those services. It's the same kind of gateway model, but with essentially our network functions as the actual traffic path. That's the high level of what we're talking about.
C
The mesh service has the potential to support all of those, as well as external services. So if you want to bring a service registry to Service Mesh Hub, it should support that, and if you want to use Service Mesh Hub as your service registry directly, by creating your mesh services manually, that should also work.
F
Yeah, I think that's what we were thinking. I think our initial proposal would probably be to create mesh service objects via a client of the interface that we have. But prior to this work, we were also looking at a tighter integration with Istio.
F
Like, we were thinking of maybe an MCP server implementation. Those were the avenues we'd been exploring a little bit. But yeah, so you're recommending going to the mesh service interface, or...?
C
Yeah, so we can get more into what the implementation would look like, but essentially I could see a plugin for the discovery mechanism which basically takes arbitrary data from an endpoint that Service Mesh Hub doesn't know about, but your plugin does. Your plugin knows how to connect to and read from that, and then produce mesh services. Then, in the discovery component, what you'll be able to do is provide a credential, or whatever it needs, in order to connect to that backend.
F
I think the thing I'm not sure we've figured out is the traffic policy. Our thing is not actually a service mesh, right? It's more of just L3 connectivity.
F
You could think of it as almost like an L3 subnet that's reachable for specific workloads, or between specific workloads. So from the point of view of a single workload that's in that service that was discovered, the other services are ones it would either be a client of, or be providing services to.
F
That's the spectrum of services that would be coming from our service discovery component. So then, how that maps to reachability, how smh maps reachability from the other service mesh types, is something we would probably have either questions on, or proposals to possibly change.
F
I had a question, or maybe it's in line with some of the issues. It's more about how, operationally, to use Service Mesh Hub: in the cluster registration, it doesn't really...
F
It doesn't really solve any of the problems that we have with having access to a Kubernetes cluster in a generic way, or, I guess, a multi-Kubernetes-cluster type of config.
F
There's a project called Admiralty that I referenced in the notes that describes the problem pretty well. So I was wondering if you guys had looked at any of that stuff, or whether that's something that somebody needs to tackle, or whether you want to punt on it, not have the Service Mesh Hub project solve it; it would just be a client of something else that would solve it.
F
Yeah, so for example, the mesh or cluster registration requires adding a kubeconfig for access to the API server of a specific cluster, right? And none of the Service Mesh Hub documentation indicates what the requirements for that kubeconfig are.
F
So a user of, say, EKS wouldn't be able to just take the EKS kubeconfig they use on their Mac and drop it into Service Mesh Hub and have it work, because the IAM integration portion is not installed, and the identity actually associated with the IAM utility that the kubeconfig, or kubectl, would trigger isn't in the pod where that Service Mesh Hub mesh or cluster registration discovery thing is running, right? So...
F
I don't think I was clear on that from the documentation. Okay, cool, so meshctl is actually required in that path, right, to trigger the service account creation? Okay.
C
One thing I would say is, if we want to make it something declarative that doesn't involve meshctl, that's doable. Another option is a Job: you could kubectl-apply a Job that would do the necessary work for us, or even just have the objects pre-created, because I think there's an understanding that we don't necessarily want to force an imperative flow. If you have a cluster set up, it sets up.
F
Yeah, I know, so that's good. I'm glad there was some level of solution I wasn't understanding. Okay, so I think just clearing up the documentation, or at least having a mechanism for drilling down, right? Even a pointer to something that's more code-like would be fine.
F
Yeah, yeah. I think this is the type of thing where, as we're trying to build up some contributors to smh, this is the kind of thing where we could carry water or chop wood. It was just something I thought of when I was going through the examples and demo stuff, and we ended up... at the time, it was a while ago, probably a couple of months ago.
A
I don't know if any of you all have talked to him on Slack.
C
So my guess is that this would be to allow us to wire that up. We have a config right now, on the VirtualMesh, that allows you to specify the certificate authority for your mesh, and currently we just allow you to reference a secret. Theoretically, we could extend that to support Vault as well, as a source of secure certificates.
A
Let's file an issue on that so we can at least track it. And Dominic, you've got the question here: this started with identity management, identity per cluster instance, and then you've...
E
Oh yeah, so I was curious about how cluster instances are identified, because in my early experiments with it, I was able to register a cluster twice. So then, of course, the workload instances and mesh instances were discovered twice. So I was curious: what is the anchor of identity for a cluster, and then, therefore, for the services?
C
It's a very good question. Currently we don't have one, and, as you know, Kubernetes itself doesn't have any notion of identity for clusters. One of the changes that we have coming for...
C
...cluster registration is that when you register a cluster, there's a CRD that corresponds with the cluster, and the CRD represents the identity of the cluster, and we can verify that. The thing is, at registration time right now, we require some kind of string identifier in order to differentiate the clusters, because there's really no other way, unless we use some kind of heuristic.
C
One of the things I would say is, maybe we can use the IP address of the API server, and say: if it's the same API server, it's the same cluster.
E
It's not that I have an answer to the question, but I would not necessarily go with the API server. I know it's typically done that the external address of the API server is the identity anchor, but you can have multiple API servers in a high-availability scenario, right?
C
We're tracking the work that the SIG Multicluster group is doing in Kubernetes, and this is something they're focusing on addressing as well: cluster identity. So if the community can reach a larger consensus on how to do that, I think that would be the most preferable path forward, as long as it's a good solution. So far, I think the work that's come out of SIG Multicluster has been at least a good guideline.
F
I was just going to say, on the SIG Multicluster point, I haven't been following the KEPs super closely, but I did come across one, for a multi-cluster Services API, that had exactly this problem as well. It's a proposal for CRDs, essentially, a model for having a multi-cluster service definition, with service export and import sort of semantics across multiple Kubernetes clusters.
F
So it separates the service concept from the cluster level, and it did have a cluster identity sort of problem inside of there.
E
From my point of view, I have to say, one of the nicest things Service Mesh Hub does is give a service mesh instance an identity, because service mesh instances don't have identities either; they're basically implicit. So with the mesh object, or mesh CRD, you're actually giving it an identity, which is pretty cool. But that hinges on the cluster identity, and that is still fragile.
E
I understand that it's a common problem, but then that fragility basically translates all the way to the mesh identity in Service Mesh Hub.
C
Yeah, I could think of one approach that might be useful: once a cluster has been registered, we can assign it a unique ID, and further attempts to register the cluster will cause an error, so that we can ensure every cluster has only been registered once.
C
I mean, we already do that, right? We create the service account, we create the namespace for the service account, so I think we can use either one of those as a global identifier for the cluster, and then just choose to error: when we try to create it, if we see that it already exists, we just error out, right?
E
So eventually we have to ask the question anyway: what is "the same" cluster? That's a philosophical question: am I still the same person after I cut my hair, or can I jump into the same river twice? So eventually there will be the question: is this still the same cluster?
E
But if you stick to the service account, taking the UID, then you at least cannot double-register at the same point in time. You may make the mistake, over time, of thinking that, quote-unquote, the same cluster is a different cluster, but you cannot double-register. And after the cluster is gone, you can definitely clean up the service mesh instances and workload instances that point to the old UID, and basically just rediscover them under the new UID.
C
Yeah. The bundle we use to connect to the cluster is a kubeconfig that we generate from the service account, and I'm trying to think if there's some way we could actually use the bytes, like a hash of the bytes; maybe not a hash of the whole thing, because that can change. But yeah, the service account is probably the best thing.
B
Another interesting question this raises is that internally, in Service Mesh Hub, the cluster is just identified with a string, and I can see advantages and disadvantages to that. Say you have a bunch of configuration that targets a cluster with a certain name. That name is like a loose identifier; maybe under the hood you want to swap that cluster out for a different cluster, and you don't want to rename all your CRDs to the new cluster name. So you could keep the same string identifier, but it's now pointing at a new cluster.
C
The identity of the cluster right now is its name together with the secret we use to discover the cluster. But with the introduction of the KubernetesCluster CRD, which will be our official one, the cluster name can be populated right in there, and then if you want to change the cluster name, or swap them, you would theoretically be able to just edit that object. But that's not supported today.
C
So I think we could do it. I think the cluster actually does make sense to have as a string, because, from the point of view of Service Mesh Hub, a cluster is like a namespace. Namespaces are a string identifier; a cluster is a bucket of namespaces, and a namespace is a bucket of resources.
A
So then we can start with the rabbit hole in two weeks, at the next meeting; we'll do that earlier on, so we have lots of time to discuss. Thanks, all, for joining today. The links are in there for the new release and such that Harvey talked about, Scott will also do an update on the refactoring, and then we can take any other questions that come up.