From YouTube: Layer5 Community Meeting (January 14th, 2022)
Description
Layer5 Community Meeting - January 14th, 2022
Agenda:
- Badges: https://layer5.io/community/members
- Discuss Forum: Accepted Solutions
- Weekly Newsletters are now active!
A call for your blog posts.
- Extending SMP to CNCF Community Infrastructure Lab
- Analyzing MeshSync’s efficiency: Kubernetes SharedInformers
Join the community at https://layer5.io/community
Find Layer5 on:
GitHub: https://github.com/layer5io
Twitter: https://twitter.com/layer5
LinkedIn: https://www.linkedin.com/company/layer5
Docker Hub: https://hub.docker.com/u/layer5/
A
It's always nice to be able to say good morning to someone when it's actually morning. That's nice. Of course, Jared's got us all beat — Jared, is it the wee hours of, like, 8 a.m. right now? Is that right?
C
So, do we have any newcomers this week? Even if you have introduced yourself in any of the other calls which happened this week, feel free to introduce yourselves again on this call. Do you want to start?
A
Yeah, his — Superdip... Superdip.
D
I'm very much interested in open source contribution, but an absolute beginner in this open source world, and I'm just exploring Layer5 and Meshery, and I hope I will do my best for Meshery and Layer5.
C
Thank you. Okay — so, anyone else who would like to go next?
E
Oh hey, hi, I'm Harsh. I recently graduated in 2021, and I work as a software engineer for Flipkart, which is an Indian e-commerce company. I'm currently on the cloud and platforms team, which actually got me interested in, you know, DevOps and CNCF-related projects and organizations, which in turn led me to Layer5. So I'm glad to be on this call.
C
Welcome to the Layer5 community. Okay, so who else do we have to go next?
F
Yeah, hello — can you all hear me? Yeah. So, I'm Current, and I'm a third-year engineering undergrad, and, yeah, I have very limited open source experience.
F
I contributed to an organization that had basically a similar tech stack, but I'm looking forward to expanding my experience in this area, and that's what led me to Layer5 and Meshery and the other repos of this organization. And, yeah, I'm looking forward to learning with all of you.
A
It's kind of a silly question — it follows exactly what you're supposed to say. Okay, good. Yeah — hey, great to have you. What was I gonna even say? Thank you so much, yeah. No, it's really — oh yeah, you'd recognize that there's more than one repo here, yeah.
F
Yeah, I'm not really sure of how it goes, but, yeah, two different repos that I explored were Meshery and, I think, layer5.io. So.
A
Yeah, totally. Yeah — if it's okay, I want to just interrupt for just a moment, because I think it's an important note that Current is making. One of the MeshMates, Nikhil, had made a —
A
There's a lot going on, a lot of tech being used, a lot of people. Generally pretty friendly here, pretty warm and welcoming. Everybody's learning something somewhere in there; everybody's challenging someone else. And generally, I think, for the most part, all I've —
A
All I see are people helping each other. Occasionally I see somebody has a question and it doesn't get answered, and most of the time I know that's because the other person is either out sick, or they're having an exam, or they have a project at work. It's not because they don't want to. So, anyway.
A
And to have that understanding — so you can go take a look at what's in there and get a quick overview. There's a doc, a Google doc. If you've filled in your community member form, you'll have access to a bunch of docs, including this one, which just gives a high-level overview of what the individual repos are for and what tech is being used in them.
A
There's this more or less the same repository overview — so I'll just paste this link into the chat — but it's kind of nice, because it quickly shows one way of looking at what's going on: by repo, by tech involved, and sort of by project — a repo's purpose.
A
So, as you digest that, if you're paying attention you'll note that Meshery, by and large, is probably the biggest project, but its tentacles sort of touch into many other satellite projects. That's one way of thinking about it. But yeah — so anyway, just a quick word of welcome.
C
Okay, so I think with that we can begin with the agenda today. On the Layer5 site itself, if you go to the members page, you'll see that we have this dropdown which has a list of all these badges. Now, what these badges do, basically: they are sort of a recognition of your area of interest, of your area of expertise, across the various roles and the various projects that we have here in the community.
C
All these community member profiles that you have show which badges you have. So you have the community badge; some people have the Meshery badge. We have a list of contributors here with us who have contributed a lot — to Meshery, to the landscape — so we would like to announce that they are getting the recognition they deserve.
C
We are adding these badges to their profiles. So, yeah — one good first issue that we can create, and I'll create eventually, is that we are missing the patterns badge here. We have another badge which needs to be added here — the Service Mesh Patterns badge — which some community members already have; they have been working on it, I think. So, for all the newcomers: if you're looking to get started with open source, this is one thing which you can start off with. So, yeah.
C
That's all about badges. The next agenda item is the discussion forum. Do we have Barney on to talk about this?
C
Okay, so just a brief overview: we have our discussion forum — I've attached the link here. The main reason why we use this is because you can ask all your questions on Slack, but that is a temporary record of all your questions. On Slack, if you're aware, all the previous chats go away after some time. On the discussion forum, if you ask your questions, there's a permanent record, and the best thing is, later on, after a couple of days —
C
— if someone else has the same question, we can just direct them to your post over here, and that question can get solved. So one thing which I would like to point out is that on the discussion forum there is an option to mark the — suppose you ask a question, and a bunch of community members come to help you out. If you think that a particular comment has actually solved your issue, it would be appreciated if you mark it as the solution to your post.
C
It's easy if they know exactly which solution actually solved the problem. So this is a practice which I think all of us can adopt. Any questions, or anything else you guys have, you can just bring up right now.
A
Yeah — speaking of the discussion forum, Shin is going to make a quick update there a little bit later today. Jared had made a post on there most recently. There's a number of you — actually, one of the things that we occasionally do on the community call is we do a bunch of stuff. Sometimes we have people presenting from other communities.
A
Sometimes we dive deep into technical areas of different projects that are going on. Sometimes we celebrate individuals and roles and, you know, accolades that they've accomplished here — so last week it was, like, five new interns and a mentor, which was great. And in the future we should be celebrating those that have solved the most problems, or have the most solutions marked on the discussion forum, or those that are the most active. It really does help to ask your questions out there.
A
If for nothing other than the fact that Slack seems to have a memory as bad as mine — which is to say that some of us have answered the same question 50,000 times; my fingers hurt just thinking about it. So it's really nice when you ask your question on the discussion forum, because then we don't repeat ourselves, repeat ourselves, repeat ourselves. Okay — enough from me. Did you talk about the newsletters already?
G
Yeah, actually, I had a doubt. So, if you take an example: we are having a discussion on Slack, and I posted a question, and, you know, someone answered that particular question — it's a big topic, for example. So can I do one thing: if I want to keep a record of that particular question, can I post that question plus the answer on the discussion forum, to keep the record of that? Can I do that as well?
C
Yeah — so recently we have added a command on Slack itself. With that, what happens is: for a particular number of messages in a thread, you can actually post it on the discussion forum without even copy-pasting it. There is a command with which you can just directly create a draft of the post on the discussion forum itself.
A
Yeah, sure. As a matter of fact — let me show that; let me send a link to how to use it, and maybe just a little recorded demo. But yeah — everybody that's on the call (almost everybody, except for it2, who's in dorm room 1263, I guess) is a member of the Slack community. There's a couple of custom commands in Slack.
A
If you haven't noticed, there are a few automated messages that go around in the Slack. One of those is /discuss, and — to Harsh's question — in that command there is a parameter for how many Slack messages in history you would like for that command to grab. So if you do a /discuss post, it'll go back in the history of the current channel that you're in, copy, and create a draft for you — automate the creation of a draft post for you.
A
So I think the only thing you might bump your head into: if you don't have a discussion forum account, you'll want to grab one. That way the Slack automation can create the draft as you —
C
So, moving on: we have started these new weekly newsletters, and I think one went out this week. The purpose of this newsletter is — we have a lot of blogs published on our site, so this newsletter will basically update you, inform you, of all the new blog posts that get posted over here, so that you can come to the site and read them.
C
I think the plan is to send it out once every two weeks, so you can just be aware of all the new blog posts that are posted here and what's happening now.
C
This also means that there is a huge chance for you to come up with your own blog posts. You can suggest — like right now, if you want — if you have any ideas for any future blog post that you would like to work on, you're free to work on it, create a draft, and I think we'll go through it, we'll review it — all of us — and we can get that posted here and share it with all the subscribers.
A
But the message here is that, in the past — and now — we talk a lot about, well, Layer5 being a platform for shared success, and the layer5.io site, its blog, is part of that: a component of that platform, that proverbial platform.
D
Hello — yeah, actually, for the —
D
— post, I was asking about whether there is a blog where, for example, a beginner or a newcomer joins Meshery and he just wants to know why Meshery is necessary and how to get into it. Because it's about service mesh — and suppose, for example, someone doesn't know about service meshes — so, for example, it would give some ideas about the learning path and also about contributing to Meshery. So this blog could be helpful.
B
Hi, can you hear me? Yep? Yeah — okay. I'm Shin. Lee just told me to show the extending of the GitHub action to the CNCF Community Infrastructure Lab.
B
So I think we should introduce the SMP project first, because I think some of you have never heard about it. So, Lee, can you give some introduction to SMP first? Sure? Yeah.
A
Absolutely. So, by the way, Shin is being a good sport — I just messaged him a few minutes back asking him if he's doing anything else; turns out he had a few — three — minutes to join the call and kind of present on a design spec that he's got going on. And the design spec has to do with one of our other projects. This other project is another project in the CNCF; it's called Service Mesh Performance — that's what it's called today.
A
It went into the CNCF under the notion that it might potentially combine with Service Mesh Interface, which is another spec project, or it might expand to have a much broader focus than just service mesh performance — to be cloud native performance in general. There's a couple of engineers at Microsoft who are pushing for that. The problem is, we need more people engaged to be able to help expand this scope. Anyway, to introduce the project very briefly — it's really hard to necessarily —
A
Okay: as you use infrastructure like a service mesh, or like Kubernetes, to run applications, you can harness the intelligence of that infrastructure in different ways to make sure that your apps are running in a resilient fashion, in a secure fashion, highly available, in a performant way. And all these things — the more that you ask your infrastructure to do, the more that you configure your service mesh to perform these things — you know, that has an effect on the behavior of the infrastructure.
A
You know, there's overhead involved — there's a lot of value that you're gaining as well — so there's being able to characterize that, understand it, and compare with what others are doing and how efficiently they're running versus you, or what some of the best practices are. There's also, when you're choosing a service mesh — excuse me, by the way — when you're choosing a service mesh: which one maybe performs best for a particular use case that you have, for your type of workload or for your type of infrastructure?
A
So there's a lot to all of this type of discussion, and part of the core of it is, you know, users trying to focus on their applications.
A
Oh, how this mesh is faster than that one, or this, et cetera. And it's not that those statistics that are reported are untrue, but they are, you know, inherently biased, because they are generally published by the projects themselves — the service mesh projects themselves. Not that anyone's evil; it's just a matter — it's a human thing.
A
Anyway, there's a lot to the project. It's one of our other projects. Shin is currently at Intel, working on a team that spends a lot of time focused on Envoy as a proxy and its performance — and, inherently then, Istio and its performance as well. So one of the objectives for 2022 for Service Mesh Performance is to put measures in place —
A
— put automation in place, to start publishing performance analysis on an ongoing basis. And that's what Shin is gonna talk a bit about.
B
Yeah, yeah — thank you, Lee. I will share my screen.
A
Shin, I was talking with the Red Hat CTO on Wednesday, and I was mentioning you, and then I couldn't remember — you're in Shanghai, right? Yeah? Yes? Okay, all right, good — I got it right. I remembered correctly. Okay, good.
B
Okay, sorry — I had some issue with the sharing. I need to restart; sorry. Yes — I mean, within a minute.
A
Sure. So Shin's gonna drop off and then come back to share. But there's a link to this particular design spec — I just asked that he give the community here a quick overview of the proposal. Some of you may or may not be aware that the CNCF has, well —
A
— has the community lab, if you will: infrastructure that is primarily for purposes of projects like Service Mesh Performance going to test things at scale. And that lab — it's infrastructure donated by Equinix, formerly by Packet, and actually formerly by Intel some many years ago. So it works out very well, because we're able to go request —
A
— you know, a collection of servers, 20 or something. And then, because we have tooling like Meshery to spin up a bunch of different service meshes under different configurations, we can generate load and do analyses. So this is a proposal about how to go about that.
B
Okay — today I will show you the design for extending Service Mesh Performance to the CNCF Community Infrastructure Lab. So at this stage, I think I need to introduce how the Service Mesh Performance tests run at this stage. Yes — this is the background.
B
— a benchmarking test, and the second one is a schedulable benchmarking test.
B
So I forked this repo so that I can configure the behavior myself. You can see there is a configurable benchmarking test, and you can —
B
It does some preparation, and it will apply Meshery — it will install Istio as the service mesh and deploy a simple application on Istio.
B
Okay — this is the behavior of the GitHub action around SMP. At this stage, all the tests were run on the GitHub-hosted runner, which is a very small machine, and that is really restrictive for the test run.
B
The CIL is contributed and managed by Equinix Metal. It's a leading bare-metal cloud provider, so it can provide more capability for us to —
B
You could just focus on the cloud native computing, or just work around cloud native, and —
B
— with the CNCF. But I don't know what's the — yes, so we want to estimate —
A
So, I see Mario on the call — not to distract him necessarily — but Unnati and I were just chatting about some open issues on the GitHub action that Shin was just reviewing. There are a couple of things that are fairly embarrassing, I think, that we're doing: we're not using our own tooling like we should be. There are some bash scripts going on that need to be using mesheryctl instead of other bash scripts — and that was the whole purpose of writing —
A
— mesheryctl, you know? So there's that. There's another piece of work around — in what I think is kind of a nice way — when you're interacting with GitHub workflows, or a GitHub action like this:
A
You can write the action such that it's scheduled to run every once in a while, or that it's triggered by some other event, or — like Shin was showing — you can go over and just manually invoke it. When you do, like in this case, you're asked for a couple of inputs: which service meshes do you want to deploy, and what's the test profile you want. And it looked to me like some of this wasn't necessarily being honored, or enhancements could be done: we were testing Linkerd, in the particular example that Shin was showing, but we downloaded — we pulled — the Meshery container images for all the other service meshes as well.
A
That's unnecessary, and mesheryctl has considerations for that: you can just pass it a parameter and tell it to download this one adapter. So those are non-blocking for Shin and the project to move forward, but as we go to run these a lot of times over, it's very much worth our while to go make those enhancements. Unnati or Mario or whomever — there are a couple of individuals that are kind of working on some of these things right now. Gibril of Nigeria — real nice guy, but he only comes around on occasion, and so the issues have been getting stale.
A
The other gentleman, Asuko Li — he's a Meshery maintainer, and he lives in Hong Kong, so not Shanghai. So I've got my cities right. But he's been kind of working on that, and so, yeah — please jump in; it's a great opportunity for anyone to learn some GitHub workflows. So, the question that I have, or kind of the suggestion:
A
I think the thing that Shin was also getting to is the fact that, when he showed the workflow that ran — well, the environment (Kubernetes, Linkerd, a sample application, Meshery) was provisioned in an ephemeral way, in the particular environment that was constructed for that particular workflow. And the compute and the network and the systems — the infrastructure, right, the servers — that ran that, well, those were GitHub runners; that was GitHub's infrastructure. And good — but the purpose here is to go grab some bare metal servers and to really, you know, grab a pristine environment.
A
One that's sort of, you know, vacuum-cleaned and tightly controlled, so that we can, you know, use fine instrumentation to do this performance characterization. Okay, well — Equinix Metal and these others — those aren't GitHub servers. So how are we going to use a GitHub workflow to do automation on someone else's servers?
B
For the performance test, you can see it runs on ubuntu-latest — it's a GitHub machine — and it will install the service mesh and applications, and then a tool — you can change this configuration, generate code to do anything you want, and then run.
B
Okay — next, what do we need to do about the design? First, we need to learn about the CNCF CIL — that issue is from 2019, so I don't know what's next. So, Lee, do you know about the status of —
A
Yeah, let's have a call — and I know that they've got a bunch of automation around how their environments can be, what do you call it, automated. So, yeah, we want to try to use a self-hosted runner. Let me say this, Shin, if it's okay: we've got, I think, one other topic to try to cover on this call today, but, like, mission accomplished in terms of letting folks on the call, and those that are watching us on YouTube, know they can come and jump in.
B
So I'll continue with the design.
A
Absolutely, yeah, absolutely. But tomorrow, when you wake up, if there isn't a calendar invite on your schedule, then ping me — because we should meet, and I'll make sure that we post that calendar invite for anyone who wants to be on the call with Equinix. But yeah, that's a good next step. Another next step is: Unnati is going to try to help enhance the GitHub action. Another next step is for someone to understand GitHub self-hosted runners — and, Shin —
A
— you might already understand those, but, you know, how do we connect the GitHub action? Just understanding self-hosted runners in general, leaving Equinix aside, is very helpful and is a necessary next step.
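A rough sketch of how a self-hosted runner could slot into a workflow like the one discussed — this is a hypothetical fragment, not the project's actual configuration; the labels, input names, and the mesheryctl invocation are assumptions for illustration:

```yaml
# Hypothetical sketch only: a manually invokable benchmark job that
# targets a self-hosted runner (e.g. one registered on an Equinix
# Metal machine) instead of GitHub-hosted infrastructure.
name: smp-benchmark
on:
  workflow_dispatch:
    inputs:
      service_mesh:
        description: "Service mesh to deploy (e.g. linkerd, istio)"
        required: true
      profile:
        description: "Performance test profile"
        required: true
jobs:
  benchmark:
    # A runner registered on the bare-metal host picks this job up by
    # label; "equinix-metal" is an assumed custom label, not a built-in.
    runs-on: [self-hosted, linux, equinix-metal]
    steps:
      - uses: actions/checkout@v2
      - name: Run the benchmark with the project's own tooling
        run: |
          mesheryctl system start
          mesheryctl perf apply "${{ github.event.inputs.profile }}"
```

The relevant point is that `runs-on` labels are what tie a workflow to machines GitHub does not own: the runner agent installed on the bare-metal server polls GitHub for queued jobs, so no inbound access to the lab is needed.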
A
One of the others — there's a bunch of other steps about, well, how many tests should we run, and what configuration should they be, and all that. I gotta say — that was, what, June 2019, I think, was the date of that issue. For my part — just me speaking — I could give two rips about what we actually test: how long the tests are, how short they are, what configuration. All of them, they're all valid. Let's just go.
A
Let's just start testing, start publishing some results, and just watch as about 10 service mesh projects blow up and jump in. We'll have more people trying to wrangle this then — especially the Cilium folks; they'll be all up in our grill.
G
All right, so — yep. So, actually, in Meshery we have multiple components, and one of them is MeshSync, and it's like a data source for the Kubernetes resources in the cluster. We often say that MeshSync is like the heart of Meshery — which means that, without the data that is pumping through it, a lot of Meshery components would not be able to function properly. So there's a cute analogy there. And so the thing is that the data is produced —
G
All right — so the data that is in MeshSync will actually be, you know, moving through multiple components of Meshery and then finally landing somewhere that the users can see: let's say a UI client or something — in this case Meshery UI, or any other extensions that Meshery allows you to configure. So, yeah.
G
So the thing is that this particular data — which is starting from MeshSync and flowing through multiple components — this particular flow has to be robust and consistent. Otherwise there would be a bunch of problems that would arise because of this. And this is exactly the case right now: there have been a lot of problems because of this particular scenario, and, yeah, I'm just trying to work out a way to, you know, solve those problems.
G
This is basically my approach to how we would actually fix all those problems. Let's start — and if you want to ask anything, just feel free to stop me in between. Okay, so, yeah — forgive me if the diagram is not good or something, and also —
G
I have a document here which describes some of the things that we'll be talking about. It's not completed, but I'll curate it, and when it is done I'll post a link so that everyone will get access to it.
G
So, as I said, we have multiple components in MeshMap — oh, sorry, Meshery — and I won't actually be going through what each and every component does; let's just go over the surface of this. As I said, MeshSync is kind of the data source for all the components in Meshery, and the broker facilitates communication between the multiple components in Meshery. And we have Meshery Server, which, as you all know, is —
G
It's a server. And in this case I have taken MeshMap — the visualizer — as the client, but this client can be anything: it can be Meshery UI or any other client. So this, finally, is where the data which is generated here would end up — and that can be anything. So, within MeshSync, we are actually using Kubernetes shared informers.
G
If you don't know what an informer or a shared informer is — it's actually a huge topic in itself, and I would advise you to go over the documentation and, you know, Google about it to find out. I'll actually be skipping a lot about this, but, to give you just an idea of what it actually does:
G
This is actually the component which would, you know, establish a watch connection with the Kubernetes API — the etcd database behind it — and then receive the notifications, the events, that it is sending. So this is basically responsible for that. Also, there are multiple components in the informer — I have just listed two of them, but there are actually more. So we have the store, and the store is basically, like, the shared informer's — or informer's —
G
I will actually be switching between those two terms, but they mean the same thing in this context — just pointing that out. So the informer provides some guarantees, in the sense that, at the store inside this informer —
G
Whenever these events from etcd come over, it makes sure that, first, they pass through the store. It's an accumulator store, in the sense that it gets these events and then reflects the actual state of the cluster. So one of the guarantees that it provides is that, eventually, the data — the state of the cluster in the store — will be consistent with the actual state of the cluster.
G
So that's actually one of the guarantees that the informer provides. And after the event, you know, goes through the store, we would have multiple callbacks for the three events that we will be getting: an on-add event, that is, whenever a resource is added; an on-delete event, that is, when a resource gets deleted; and an on-update event. So, yeah.
G
So for all these events we would configure separate handlers for each of them, and the first thing that each handler will do is push those events into a queue. And from that queue you would have a worker, you know, popping items off the queue, and then, basically, what it will first do is:
G
It will send all these events — like, if we want, we can actually process those events, or, you know, make some changes to them — and then send them to a subject called meshery.meshsync.events. These namings can differ, but I'm just using that name for the sake of simplicity.
G
So it will publish to this subject. And one more thing, before moving on to Meshery Server — one other thing that it does is that it subscribes to the meshery.meshsync.request subject. This is actually useful when, as a client — as a different component of Meshery — you want to ask MeshSync to do something; this is where you would actually issue the request, on that subject. So, yeah.
G
So it would be subscribed to that subject, and whenever a request comes, it would process it. One such request is to, you know, get the data in this particular store and send it. For that we would have another subject, called meshery.meshsync.store, which would be used to send the data in the store.
G
So you have to remember that the data in the store is actually consistent with the actual state of the cluster. The informer is actually pretty intelligent: it does a lot of things internally, and it makes sure that this data is, you know, consistent with the actual state. You actually have to read more about it in order to understand how it achieves that, but the guarantee that it provides is great: it's either consistent or it is not —
G
We can even go to the extent of saying that if it is not consistent, then the informer will stop. So, yeah — and then it will push it to MeshSync. So this is basically the outline of what MeshSync will do when it's actually deployed. There are a bunch of other things as well, but I'm just skipping through those, because I just want to talk about a specific —
G
So I haven't actually dug into it, but that would be something that we can do next — I can give you some answers, but I haven't done that yet, so that wouldn't be — yeah. So, actually, if you take the queue: the reason why we are using a queue is that, basically, when we get a lot of events —
G
— the informer is not, you know, designed to handle that. It is actually mentioned in the docs that we should use a queue, so that we don't lose an event. So that's kind of a fail-safe.
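The handler → queue → worker flow described above can be sketched in miniature. MeshSync itself is written in Go against client-go; this is a deliberately simplified analogue in Python, with all names illustrative, showing why enqueueing in the handlers means a burst of events is buffered rather than lost:

```python
import queue
import threading

# Minimal analogue of the informer handler -> queue -> worker pattern:
# handlers only enqueue events; a single worker drains the queue, so no
# event is dropped even if many arrive in a burst.

events_q = queue.Queue()   # unbounded, so bursts are buffered
store = {}                 # accumulator: reflects cluster state

def on_add(obj):
    events_q.put(("add", obj))

def on_update(obj):
    events_q.put(("update", obj))

def on_delete(obj):
    events_q.put(("delete", obj))

def worker():
    while True:
        kind, obj = events_q.get()
        if kind == "stop":
            break
        if kind == "delete":
            store.pop(obj["name"], None)
        else:  # add / update
            store[obj["name"]] = obj
        # here the real system would publish the processed event
        # to a broker subject (e.g. meshery.meshsync.events)
        events_q.task_done()

t = threading.Thread(target=worker)
t.start()

# Simulate a burst of watch events.
on_add({"name": "pod-a", "kind": "Pod"})
on_add({"name": "pod-b", "kind": "Pod"})
on_update({"name": "pod-a", "kind": "Pod", "phase": "Running"})
on_delete({"name": "pod-b"})

events_q.put(("stop", None))
t.join()
print(store)  # only pod-a remains, with its updated state
```

In the real informer the store, not this toy dictionary, is the authoritative accumulator; the worker's main job is forwarding processed events to the broker.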
G
But, yeah — performance is a whole other topic by itself. The issue that I'm trying to target over here is to make sure that the data flow is consistent and that it reaches the client — that the data in the client is actually consistent with what is in the cluster. Right now that is also not the case. So I'm just trying to address that first, and then let's think about how we can make it more performant, if that makes sense.
G
So, yeah, that's what MeshSync does when it's deployed. And we have Meshery Server over here — and just note that, okay, this is not the current architecture of Meshery and the other components; what I have shown in this diagram is a bit different from what it is right now. I'm just trying to say that this particular architecture would be preferable to what we have right now. Yeah — and if we move over to Meshery Server:
G
So, on initialization — I have written down some of the things over here which could be useful for me to go over. Whenever Meshery Server is initialized, it will subscribe to this particular NATS subject, and, as I mentioned before, this subject is responsible for sending — MeshSync would actually send the data of the store on this particular subject. So it would first subscribe to it, and then it would publish a message to — yeah:
G
It would actually ask for the data in the store — because, okay, rather than directly starting to process the events... So, basically, the one contrasting difference between what we have right now and what we are trying to do in this diagram is that, right now, we are getting the events and then processing them in Meshery Server, trying to, you know, make it reflect the original state of the cluster. But then —
G
We are actually missing the point that this informer is doing that internally — in the sense that it has a lot of intelligence inside it, and the data in the store always reflects the actual state of the cluster. So we can leverage that, and directly take that and make use of it, instead of trying to compute and accumulate and, you know, do a lot of processing that essentially is not needed.
G
So first you get the data from there, and then you store it in the SQL database. Actually, before all this, there are some edge cases being handled, and I won't be touching those, but, yeah, that's something you can be aware of. One example of that is: before it starts, it establishes the broker connection —
G
It would only do all these things once the connection with the broker is established — yeah, all that stuff. So, yeah, after that it would put that data in the SQL database. That would basically be the list of all the objects in the cluster, essentially — we would call it a snapshot of the state of the cluster.
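That snapshot step — dumping the informer store's object list into a SQL table wholesale, instead of replaying events — can be illustrated with a small stand-in (SQLite here; the real server and schema differ, and every name below is assumed for illustration):

```python
import json
import sqlite3

# Simplified stand-in: persist a snapshot of the informer store
# (the eventually-consistent view of the cluster) into a SQL table.

def snapshot_to_sql(conn, store):
    conn.execute(
        "CREATE TABLE IF NOT EXISTS objects (name TEXT PRIMARY KEY, data TEXT)"
    )
    # Replace the previous snapshot wholesale: the store, not the
    # event history, is the source of truth.
    conn.execute("DELETE FROM objects")
    conn.executemany(
        "INSERT INTO objects (name, data) VALUES (?, ?)",
        [(name, json.dumps(obj)) for name, obj in store.items()],
    )
    conn.commit()

store = {
    "pod-a": {"kind": "Pod", "phase": "Running"},
    "svc-a": {"kind": "Service"},
}
conn = sqlite3.connect(":memory:")
snapshot_to_sql(conn, store)
rows = conn.execute("SELECT name FROM objects ORDER BY name").fetchall()
print(rows)  # [('pod-a',), ('svc-a',)]
```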
A
Yeah — there's a lot to this type of a system. This is really good. I mean, MeshSync is something that's taken us a long time to — it's funny: on one hand, it's such a simple thing to say in terms of what it does, and, on the other hand, yeah — to do it intelligently... So Mick of VMware is committed to enhancing the way that we're tombstoning records, which is nice. So, Nitish —
A
— if you haven't had a chance to meet him, he's a good one to meet. But other than this, I think — hey, this is probably sufficient to have introduced the topic to the folks on the call and to let it percolate for a bit. So it sounds like the next step is you're gonna — yep — clean it up a little bit and —
G
Yeah, so, yeah — the thing I'll be working on is to make this data flow consistent, and, yep, we'll see where we go from that.
C
There was a topic from you, but I think we're over time. Do you want to go through this today?
A
Like, Unnati might want to pick it up — it's a good DevOps-centric thing to do, and relatively —
A
— not the most simple thing, but relatively simple, so there's a high probability of success. And we will talk about this a fair bit — I mean, that work will be promoted.