From YouTube: MeshSync Weekly Meeting (July 14, 2020)
Description
Kubernetes sharedInformers, custom controllers, and fingerprinting service meshes.
A: Okay, good. All right, good. Man — boy, I was talking with — well, even her first name is... but I was talking with Shruti, and she popped my bubble. She told me that half to most of the names that I'm enunciating aren't quite there. And then she said, "Well, technically, her name is more like..." — and the "r" is... let's see.
A: Nice. Well, Abhishek is with us, and Vinayak, whose name I can announce with confidence now. As with Abhishek and Vinayak, we were able to meet ad hoc for a little while yesterday to review some initial progress that Adeep had made, and then kind of a second crack at that that Nitish has done.
A: Essentially, I think the team has arrived at a skeleton — well, the initial structure — of what not just MeshSync probably looks like. Nitish, maybe you want to describe it?
F: Yeah, so essentially it's around the code structuring and how we set up the Meshery operator repository, so that we can basically treat it like a monorepo for all the components of this Meshery Kubernetes deployment — which includes the MeshSync piece, some policy engines, and a whole bunch of other components that we'll be building out. The idea being that we can share the code across all of these projects.
F: So we've started with the skeleton for MeshSync, and Adeep is going to fill out some more bits for syncing Istio. But yeah, we're building on that; there are going to be certain patterns to use in terms of identifying a mesh. It's still alpha, so it'll keep changing. That's essentially what we have — you can take a look at the link Lee sent.
A: Makes sense. I'm going to try to move some of our conversation to the bottom of the doc. All right — I'm going to try to help clean up our docs concepts.
A: Yeah, and to the extent that we end up using something like OPA as a component — or maybe the component — for the policy engine: OPA would fit inside there, right? Because it is deployable in a couple of different ways. It's been about two years since I earnestly looked at using it, but we had treated it as a library — as a package — the last time we went to use it.
A: So, Vinayak, you had a couple of research items that you were looking into, and I think you made a comment recently.
C: It was around shared informers. A shared informer tries to reduce the number of API calls to the Kubernetes API server. That was our main concern, right — are we going to overload the Kubernetes master API server with a lot of queries? So the job of a shared informer: at the end of the day, it's going to be a single watch on the Kubernetes API server for the resource updates, and multiple controllers — along with our own controller — will get the update from that.
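The fan-out Vinayak describes — one watch against the API server, many controllers receiving each update — can be sketched without client-go. The type and method names below loosely mirror client-go's `cache` package, but this is a simplified, hypothetical stand-in, not the real API:

```go
package main

import "fmt"

// Event is a simplified stand-in for a watch event from the API server.
type Event struct {
	Kind, Name string
}

// SharedInformer fans a single upstream watch out to many handlers,
// so N controllers cost one API-server watch instead of N.
type SharedInformer struct {
	handlers []func(Event)
}

// AddEventHandler registers one controller's callback.
func (s *SharedInformer) AddEventHandler(h func(Event)) {
	s.handlers = append(s.handlers, h)
}

// Dispatch simulates one event arriving on the single watch:
// every registered handler sees the same event.
func (s *SharedInformer) Dispatch(e Event) {
	for _, h := range s.handlers {
		h(e)
	}
}

func main() {
	inf := &SharedInformer{}
	inf.AddEventHandler(func(e Event) { fmt.Println("meshsync saw", e.Kind, e.Name) })
	inf.AddEventHandler(func(e Event) { fmt.Println("policy engine saw", e.Kind, e.Name) })
	inf.Dispatch(Event{Kind: "Deployment", Name: "istiod"})
}
```

In the real client-go, the cache and resync machinery sit between the watch and the handlers, but the load characteristic is the same: adding handlers does not add API-server queries.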
F: Yeah, I think that's what we've settled on. Adeep is going to do that for listing CRDs right now — like what `kubectl get crds` would get — so we're going to start there, so that we can look for istio.io CRDs to signal that Istio is running on the cluster. But yeah, we will stick to our shared index informers.
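The CRD check Nitish describes reduces to scanning CRD names for the `istio.io` API group — the same signal a human gets from `kubectl get crds`. A minimal sketch, with the CRD list passed in directly (in MeshSync it would come from an informer's cache); the function name is illustrative:

```go
package main

import (
	"fmt"
	"strings"
)

// detectIstioByCRDs reports whether any CRD name belongs to an istio.io
// API group, e.g. "virtualservices.networking.istio.io".
func detectIstioByCRDs(crdNames []string) bool {
	for _, name := range crdNames {
		if strings.HasSuffix(name, ".istio.io") {
			return true
		}
	}
	return false
}

func main() {
	crds := []string{
		"virtualservices.networking.istio.io",
		"certificates.cert-manager.io",
	}
	fmt.Println(detectIstioByCRDs(crds)) // an istio.io CRD is present
}
```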
A
That
makes
sense,
and
so
that's
that's
essentially
the
design
of
kubernetes
itself
that
and
that's
the
design
of
a
shared
informer
that
I
mean
yeah.
There's.
A: There were maybe two parts to that conversation. I think one was: hey, how do we efficiently —
A: There are two ways in which I think it could be an issue. One is when you think about it in the context of a piece of management software. So we were just trying to be on guard about something like that. Now, granted, that was a polling system — this is more eventing, and so the system itself, Kubernetes, is a bit more in control of how quickly it's going to respond.
A
If
there
are
you
know,
if
there
are
thousands
of
objects,
we
may
not
be
interested
in
all
of
them,
but
if
we're
trying,
but
if
we're
also
trying
to
design
a
system
in
which
the
the
human,
the
user
ends
up
feeling
like
they're,
getting
pretty
much
a
real-time
perspective
of
what's
transpiring
in
the
when
we're
sourcing,
you
know
from
here
for
sourcing
some,
some
metrics
from
prometheus,
maybe
some
logs
from
somewhere
else
and
then
just
updates
about
objects
and
their
status
and
deployments
and
things
that
yeah,
I
don't
know
yeah
it's
it's
a
good.
F: Yeah, but I still think shared informers would be fine for the use case, because in this use case — I'm just talking about the MeshSync controller — its job is just to be a scanner.
F
So
so,
even
if
there's
a
delay
in
the
shared
and
former
like
a
plum
milliseconds,
it
shouldn't
be
a
big
deal.
Yes,.
A
Good
deal
yeah,
these
were,
and
these
were
questions
that
we'd
raised
up.
I
think
just
before
on
a
call
and
one
of
our
early
calls,
I
think,
would
be
before
you'd
we're
on,
and
it
was
just
us
acknowledging
like
okay,
hey.
How
do
we
and-
and
I
guess
in
some
respects,
to
to
what
vinaya
could
just
how?
How
you
were
just
describing,
how
shared
informers,
work
or
well
is
a
shared
informer
with
a
shared
index.
Is
that
the
right
or
is
it
sort
of
one
of
the
same
shared
informers?
Have.
A
That,
even
if
there
is
like,
let's
say
that
the
delay
goes
up
to
10,
milliseconds
or,
like
you
know,
we
get
a
we're
seeing
worse
performance
than
we,
otherwise
were
that's
not
likely
and
or
that
really
actually
shouldn't
be
gonna
say
it
shouldn't,
be
an
artifact
necessarily
of
mesh
sync,
because
that's
a
job
that
kubernetes
is
having
to
do
anyway.
A
Now
that
said,
hey
if
no
other
inf,
now
I'm
going
to
use
the
wrong
terminology,
but
if
no
one,
if
there's
no
need
for
measuring
or
kubernetes
to
be
running
in
former
or
running
informers,
for
a
bunch
of
objects
that
it
otherwise
wouldn't
be
informing
on
that,
then
I
can
see
us.
You
know
adding
a
small
overhead.
A: It sounds like it's about as good as it probably gets anyway. Kubernetes itself has been designed in consideration that there are other systems that need to know what's going on inside of these control loops, and it's designed to inform many others. In some respects, if there are anywhere from one to ten thousand subscribers for a particular update on an object, the overhead there is negligible between one and ten thousand, because it's going to send out the same event either way.
A
And
so
yeah,
so
I
mean
I
I'm
not
sure
if
there's
yeah
other
than
like
shared
memory
like
so
another
question
lee
manik.
F: What do you think the mechanisms are for identifying a mesh? We recognize CRDs as one. The other two are not concrete, but we were thinking of searching for deployments, and then searching for images to figure out what version we're running.
F: So if there's a common pattern to these things, then we can build out a fingerprinting package and an interface that we can reuse across all the meshes. But I do want to abstract these things out, and you can use the builder pattern to say: the first step is fingerprint with CRDs, the next step is fingerprint with deployments, and the third step is fingerprint with images — and that fills out the final object model saying that we know it's present.
A
Is
the
builder
pla
when
the
builder
pattern?
Is
that
a
go
thing,
or
is
that
you
just
are
you
just
mean
like
in
general,
like
it's.
A
You
don't
have
to
that
makes
sense.
I
was
just
wondering
if
it
was
like
a
specific.
F: Yeah, so it's not — I mean, it is used in Go; Kubernetes does this a lot. The alternative is the options pattern, but I feel with options it's not going to be that clear. Options is great for setting up config values or some structure values — in addition to the defaults, or overriding defaults.
F
The
builder
pattern
is
essentially
where
we're
saying
fingerprint
on
crds.
Whatever
result
you
get
fingerprint
on
on
on
deployments
images,
and
we
can
just
chain
all
of
these
in
in
order.
A
Well,
okay,
so
let's
think
about
that
for
a
minute.
So
what
makes
one
mesh
different
from
another,
so
I
mean
what
there's
different
deployment
models,
some
of
them
sidecar
proxies,
and
so,
if
you're,
looking
at
a
service,
you
might
find
a
you
know
a
proxy
in
the
same
pod.
All
right!
That's
interesting!
That
doesn't
I
mean
granted
if
the
proxy's
image
name
is
istio
proxy
right.
It's
probably
you.
A
Is
yeah
totally
and
that
specific
example
that
I
was
giving
is?
A: It is a good thing to examine, so that you can answer a different question, which is more or less the thing you just said: hey, what services are present, and of those that are present, which ones are on the mesh and which are off the mesh? And part of what you were articulating earlier about tiered discovery is that that's one of those subsequent questions you'd ask later. In some respects it actually does matter eventually — if there's no control plane and yet there are some proxies that are clearly Istio proxies, sidecarred, then it's like, hey...
F: So let's focus on Istio. I think CRDs is going to be one of them, because for all the meshes that use custom resources, we would just do a `kubectl get crds` and search for istio.io — say, for Istio. Once we have identified that the Istio control plane is running, we can integrate the istioctl code — basically what istioctl does with `version`, that could be used. Actually, we could start with that too.
A: That's exactly what I was going to say when you were done talking about it. Yeah — unless they're doing something really stupid or something, but even at that, obviously it works. And the thing is, even if it is — well, they're going to guarantee that they sustain that implementation; they're going to make sure it works from version to version. So, yep.
F
I
think
yeah
we,
it
might
be
a
good
idea
to
just
go
with
that.
Instead
of
the
but
yeah,
we
should
still
like
what
adip
is
doing.
We
we
need
to.
We
need
to
expose
that
function
so
that
we
can
use
it
for
more
types
other
than
crds
or
whatever.
So
we
we
have.
We
have
this
module
that
we
can
use
to
create
shared
informers
when
we
need
it,
it
might
not
be
urgent.
F
I
think
I
think
the
first
step
would
be
now
that
we
think
about
is
is
using
using
the
istio
ctl
approach,
so
we
can
import
their
package
and
just
basically
invoke
that
version
command.
Yeah.
A
It
is,
it
is
yeah,
and
then
we
will
find
like
in
istio's
case
specifically
because
they
are
putting
so
much
off
behind
istio
ctl,
that
it
is
the
case
that
there
may
be
some
other
or
or
there
are
a
number
of
other
functions.
You
know
commands
that
they've
included
that
yeah
they'll
be
good
leverage.
Now
the
linker
d-
hey
not
dissimilar
in
that
the
linker
d
utility,
the
command
line
does
quite
a
bit
like
they
expect
that
you're
installing
using
that
command
line
as
well.
A
I
think
the
issue
with
both
of
these
is,
I
think
that
there's
an
ist
I
mean-
and
I
get
well
so
so
there's
an
istio
ctl
go
client,
oh,
no,
that's
the
wrong
way!
There's
a
yeah!
There's
a
gold
package
for
for
the.
F: API. It's probably a REST call to Pilot, and Pilot responds with the version. Linkerd is going to be the same — the CLI is talking to the Linkerd control plane, I forget what it is, the API server that they have, and that's responding with the version. So I think that might be a good way: for the things that we know, we leverage their own packages; where we do not know, we invent our own.
F: So if they don't, we open a bug, because that's a breaking feature — unless they go to Istio 2.0 and then things change. But as long as they're in the 1.x versions, we can assume that everything will be supported.
F
We
could
we
could
use
the
master
branch
or
the
last
table
release
and
keep
upgrading
if
we
have
to,
but
I
think
I
think
we
can
assume
that
it
is
backwards
compatible.
They
don't
get
rid
of,
let's
say
objects
like
I
can't
remember,
I
think,
authentication
or
whatever
they
had.
A
Like
or
like
mixing.
A: Yeah, well, and that's a good — yes, I like that answer. It does put a certain faith in them. And let's say that they don't — okay, like you're saying, we can go file an issue, we can go ask them to do something. To the extent that they're never going to, and it was a major issue for us and we said we're not going to use it — one thing to do is fall back to more generic patterns, like grabbing all the CRDs and looking for something, and then —
C: I think — because, you know, in general, Go packages themselves maintain the version in their path, right? So it will be slash v1 slash — like `github.com/<org>/<pkg>/v1` — and then v2, v3, so that the same package in the different versions can be imported. But that's up to the maintainer of the package, how they're doing that.
F: Here, use this item — this is curious. No, that's just — it's going to do your `go get` for that release version, and then, basically, in your go.sum it's going to add the checksum — or actually, yeah, it's going to add the version that it's using. Sometimes it uses the commit hash or the checksums.
F
So
since
we
have
to
support
both
it'll
pick
1.2
because
of
the
same
versioning,
it's
it,
it's
assumed
that
it's
backward
compatible.
So
they
would
pick
whatever
I
mean
it
would
pick
the
minimum
that's
supported
across
both
of
them
gotcha.
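Go's module resolution ("minimal version selection") chooses, for each module, the lowest version that satisfies every requirement — which works out to the highest of the requested minimums. A toy sketch of that pick, with versions simplified to "major.minor" strings (the real algorithm works on full semver and a requirement graph):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// pickVersion returns the minimum version that satisfies all
// requirements, i.e. the highest of the requested minimums.
func pickVersion(required []string) string {
	best := ""
	for _, v := range required {
		if newer(v, best) {
			best = v
		}
	}
	return best
}

// newer compares two "major.minor" versions numerically.
func newer(a, b string) bool {
	if b == "" {
		return true
	}
	pa, pb := strings.SplitN(a, ".", 2), strings.SplitN(b, ".", 2)
	ma, _ := strconv.Atoi(pa[0])
	mb, _ := strconv.Atoi(pb[0])
	if ma != mb {
		return ma > mb
	}
	na, _ := strconv.Atoi(pa[1])
	nb, _ := strconv.Atoi(pb[1])
	return na > nb
}

func main() {
	// Two dependents require at least 1.2 and 1.4 of the same module:
	// 1.4 is the minimum version that satisfies both.
	fmt.Println(pickVersion([]string{"1.2", "1.4"})) // 1.4
}
```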
A
Okay,
is
this
the
last
time
that
someone
had
looked
at
this?
It
was
either
that,
like
there
wasn't
enough
support
for
things
that
we
were
trying
to
do
or,
and
so
I
guess
I'm
wondering
with
some
of
the
dates
on.
I
wonder.
F
Yeah,
they
probably
don't
the
only
thing.
They're
gonna
change
is
yeah
that
apis
okay.
So
so
previously
they
used
to
use
product
protobufs,
and
that
was
becoming
a
problem.
So
aspen
mesh
created
the
client
go
that
nera
just
had
set
up,
and
I
remember
contributing
to
it
and
then
so
that
we
can
support
it
with
the
operator
framework
or
like
building
our
own
operators.
F
So
so
then
they
they
brought
in
the
aspen
mesh
client
go
and
made
it
part
of
istio
client
go
so
now
they
actually
support
more
native
types
that
have
that
basically
implement
all
the
methods
that
make
it
qualify
as
a
custom
resource
okay,
so
they
might
not
have
every
type,
but
I
don't
think
that's
a
concern
initially
for
us,
okay,
but
I
think
they
have
everything.
Now
I
mean
they
need
to
because
of
the
whole
operator
pattern
that
they
have.
A
So
educate
me
if
you
would
so
this
we're
looking
at
a
client
go
package
for
for
interfacing
with
istio
in
here
the
under
the
apis.
F: So I think it's better to query the API server using whatever istioctl uses under the hood — I mean, ping Pilot and ask Pilot: what version are you running? Are you up and running? Whatever information it gives back to us, we massage that information and convert it to our abstract model — the object that we want to use across meshes.
A
So
sounds
sounds
good
or
perfect.
Actually,
my
my
question
here
is
more
just
one
of
one
of
learning
like
oh
okay,
just
being
genuine
like
hey,
hey,
we're,
hey,
I'm
just
learning
about
shared
informers
and
we're,
and
I'm
I'm
learning
more
about
the
anyway.
It's
just
more
of
like
what
what
or
what
are.
F
Do
they've
just
created
a
package
for
it
so
that
they
can
reuse
it
across
different
places.
This
is
probably
for
the
side
cars
where
the
sidecars
could
be
watching
for
resources.
I
don't
know
whatever
this
is,
but
yeah
we're
going
to
take
a
similar
model
where,
where
we
were
thinking
that
you
would
have
pkg
in
this
case,
they
have
informers,
we
would
we
would
have
a
similar
directory
and
blah
blah
blah.
F
I
mean
this
is
just
they
made
it
modular
so
that
they
can
follow
the
drive
principle
and
they
don't
have
to
basically
reinvent
everything.
F
So,
nice,
okay,
but
if
you're,
if
you're
interested
in
learning
about
shared
information,
there's
a
lot
of
articles
around
it
and
I'm
sure
when
I
knows
some
where
they
they
give
you
the
the
diagram
like
it's
illustrated
on
how
things
are
watched
and
added
to
the
cache
and
how
events
are
created.
F
Yeah
or
any
other
operator
that
depends
on
sto
like
when,
when
I
wrote
an
operator
at
a
firm
that
wanted
to
basically
list
virtual
services,
we
used
client
go
because
client
go
had
the
virtual
service,
dot
list
or
dot
watch
or
dot
get
option
gotcha,
because
it
implemented
that
regular
interface.
A: Okay, there are a couple of things — two things that might end up going by the wayside, and one that I would think would be the first to go. There are two things in Meshery today that kind of achieve similar synchronization with Kubernetes, but the architecture is not good. And this actually probably needs to be turned into a user story, to see if MeshSync can just achieve it for us, or see how it achieves it.
A
And
that's
the
notion
that
every
time
you
go
to
run
a
performance
test
from
measuring
that
there'll
be
a
there's
the
need
to
to
snapshot
what
the
environment,
what
the
infrastructure
looks
like
at
that
point
in
time,
so
that
that
can
be
snapshotted
and
persisted
along
with
the
results
of
the
performance
test,
and
so
there's
a
a
small
set
of
go
that
will
go
over
and
use
the
kubernetes
client
to
you
know.
Take
it
do
an
assessment
of
the
environment.
A
Use
it
nice
and
then
yeah,
then
you,
you
just
highlighted
the
second
area
that
will
hopefully
be
positively
impacted
by
mesh
sync
and
that
being
the
architecture
of
the
atta.
Well
one
the
architecture
of
the
adapters
which
which
maybe
the
architectures
that
they
they
still
are.
A
Vinayak
and
abhishek
here,
because
I
think
of
and
our
abhishek
had
to
go
but
natasha.
The
thing
is
that
that
I
think
the
path
that
arip
is
on
and
and
that
you're
kind
of
tying
off
right
now
does
make
a
lot
of
sense.
What
what
asks
do
we
have
of
vanayak
is
what
I'm
trying
to
explore.
A
Near
as
we
can
tell,
I
think,
asuko's
been
trying
to
dig
into
using
to
leveraging
just
that
and
has
been
coming
up
dry
so
leveraging.
F: No, no — we don't need a client-go. Client-go is only meant for solutions that use custom resources; Linkerd does not use custom resources. The thought is that we basically want to dissect their CLI tool's code and see what kind of API requests it makes. I'm pretty sure they must have abstracted a lot of that.
F
So
it's
basically
diving
into
what
the
code
flow
is
for
linker
d,
cli
and
everything's
written
in
go
other
than
the
data
plane,
so
it
should
be
pretty
easy
to
follow
through
and
say.
This
is
how
we
can
import
their
package
and
make
the
api
call
to
the
the
api
gateway
or
server
for
linker
d,
and
I
can
try
drawing
some
diagrams
around
that,
but
for
for
vinayak
I'm
I'm
I'm
curious.
If
we
can
start
looking
at
something
like
kumar
or
con
or
console,
is
it
is
it
a
similar
approach?
F
Do
we
have
clies
for
them?
Are
they
written
in
go,
but
we
linker
d
and
sdo
are
the
common
ones
and
there's
there's
a
lot
known
about
these
guys,
but
what
about
the
the
ones
like
kumai
and
mesh
so
just
curious
like
what
their
repositories
look
like?
Is
there
any
code
that
we
could
import
and
use
and
what
is
their
way
of
identifying
like
their
their
version
command
if
they
have
a
cli
and
what
it
does
so
so
that
way?
F
Maybe
we'll
have
some
tiered
approaches
where
we
say:
look
for
crds
or
look
for
images
or
look
for
deployments,
but
that's
not
our
our
primary
method
of
fingerprinting
a
mesh.
The
primary
should
be
that
hey.
Can
I
talk
to
the
control
plane?
Is
there
a
control
plane
up
and
running
if
the
the
response
is
like
timed
out,
there's
no
control
plane.
If
there's
a
response
coming
back,
then
there
is
a
control
plane.
A: It makes a bunch of sense to me. Is this the right excuse for you to put your hands on Linkerd and Consul?
C: Yeah — so Consul is one thing; what should the second one be, Kuma, or which one?
F
So
when
I
let's
do
this,
let's
tackle
linker
d
and
istio
at
the
same
time,
so
me
or
adeep
could
start
tackling
istio.
You
could
start
tackling
linker
d
and
look
at
the
code
on
how
we
can
do.
We
can
query
the
their
control
plane
and
ask
for
the
version
or
get
any
other
details
we
need
about
that
mesh.
F
So
that
way
we
can
divide
and
conquer,
and
do
these
two
primary
service
meshes
that
we
want
to
support
and
then,
if
someone
else
jumps
in
or
we
have
time
we'll
start
looking
at
puma
and
mesh
and
console
if
there's
a
third
person,
maybe
a
deeper
me,
jumps
into
console
and
starts
looking
at
that
as
well,
so
we're
basically
tackling
each
of
these
meshes
in
parallel,
so
I'll
try
to
get
the
abstractions
and
the
interfaces
ready
so
that
all
we
have
to
do
is
focus
on
the
business
logic
and
then
just
implement
those
interfaces.
C: Sure, sure — so I'm just reiterating my understanding. We need to look at how these individual mesh CLIs are able to identify, or talk and connect with, their own control planes, and whether we can leverage that same logic in our MeshSync to fingerprint the meshes, right?
F
Yep
yep,
it
would
be
ideal
that
we
can
import
their
go
packages
like
go
get
them.
If
we
don't
have
that
we'll
have
to
basically
mimic
the
api
calls
that
they
make.
F
I
don't
know
if
there
are
certificates
and
everything.
I
highly
doubt
that
these
cli
tools
have
too
many
certificates
that
they
exchange
but
yeah.
We
can
see
what
happens
so
yeah,
it's
it's
it's
research
work
and
then
we
can
basically
see
how
we
want
to
do
this.
F
Yeah
yeah,
I
think
we
have.
We
have
things
to
think
about
so
I'll.
So
I
I
guess
the
I
I
don't
know
I'm
trying
to
think
out
loud
how
we
want
to
divide
responsibility.
So
we
discussed
when
I
takes
on
linker
d
adi
probably
could
take
on
istio,
and
I
can.
I
can
initially
do
the
architecture
of
like
or
just
the
layout
the
skeleton
code,
the
interfaces
and
how
you
want
to
structure
repo
and
then
jump
on
to
a
third
mesh.
F
If,
if
console,
is
the
higher
priority,
we'll
jump
on
console
and
I'll
start
doing
the
same
thing
that
deep
and
maniac
are
doing
so
that
we
can
just
move
from
mesh
to
mesh
and
discussion,
we
will
need
to
have,
is
what's
our
what's
the
mesh
sync
model
object
or
struct
that
we
want
to
use
to
the
abstraction
that
that
tells
us
about
the
mesh,
so
it
just
needs
to
tell
us.
Is
it
up
and
running
what
version?
Is
it
running
any
other
information
that
we
need
to
know
which
is
common
across
meshes?
F
Some
some
configuration
elements
that
we
need
to
know
about
the
control
plane
and
then
maybe
the
like
in
esther's
case,
is
the
sidecar
config
map.
So
we
want
to
figure
out
what
that
what
the
sidecar
envoy,
bootstrap
config
looks
like
or
the
commands
that
they're
on
or
whatever.
A
Nice,
okay,
oh
this
is
good.
Did
I
get
any
other
thoughts.
C
Oh,
I
think
you
should
go
with
this
path.
A
As
you,
if
you
think
about
it,
if
you
can
I'll
message
you
his
his
name,
asuko
in
will
be
interested
in
some
of
your
findings.
I
think
on
linker
d
in
part,
because
you
were
trying
to
find
a
more
intelligent
way
of
provisioning,
linker
d,
which
would
also
have
to
do
with
like
leveraging
the
package
inside
of
its
cli
to
provision
it
so.
C
Okay,
so
can
you
just
select
me
his
name,
so
I
need
to
refer
to
him
or.
A
Nice
all
right
natish
same
time
next
week.