From YouTube: Meshery Development Meeting (Feb 10th, 2021)
A
Good, all right. We're just about, we're about five after. It's Wednesday, the 10th, 2021. Thanks, everybody, for coming to the Meshery development call. We've got a collection of oldies; I guess no one on the call needs an introduction. Everybody's been here more times than I can count. So this is good, and we've got some meaningful things to discuss today. Now is the right time to pop open, or to list out, any topics that you might have.
A
So, a quick call for topics: if you have one that isn't listed, please jot it down. Since Alonso is on, I know he's got one that we would like to follow up on; I'll put his name down. So take a moment to put down topics if you haven't, and if you're on the call, please put your name into the record. Also, everyone gets to hear from me a lot on our calls, and you don't need to. It doesn't have to be this painful.
A
The project manager, or the note taker, or the host, and other people can make jokes too, so think about it. Next time we go around, just signal before the call if somebody wants to host and sort of steward us through each topic, get me to shut up, and get us to move on and be productive.
A
So,
having
said
that,
let's
get
productive.
Let's,
let's
go!
One
of
our
bigger
topics
is
the
readiness
for
our
upcoming
message
release
the
v050.
It's
a
massive
release.
It's
by
all
measure.
It's
a
you
know,
it's
a
major
release,
we're
just
not
at
the
point
by
which
we're
making
our
first
major
release
yet
and
why
aren't
we
making
our
first
major
release?
A
Well, that's because things are still fairly fluid. When you hit a v1.0, for better or worse, that's just sort of our collective industry standard, or a signal, for production readiness. There are aspects and certain functions of Meshery that are, and have been, production ready for a long time; people have been using Meshery for certain features for a long time. There are other aspects of Meshery that are being built out, and new architectural components being added.
A
There are, I'm not sure, four or five or six (depends on how you want to count) new architectural components inside of this 0.5 release. Some of those are, well, so this is the beginnings of a draft; all of you are encouraged to toss in some words here. Briefly, we'd listed out these items as things to highlight, and those are for certain not the only things to highlight.
There's a number of other large, significant things. All of the adapters, or nearly all of the adapters, had an overhaul in this release. They are in a much better position: they're much improved; they're much more consistent. MeshKit was brought forth. What is MeshKit, and what is the Meshery Adapter Library? How are those used? Why is that important? Why did we rewrite these things like that? Those types of stories, that type of information: that's a great blog post. Okay, and we've got it down here in this issue.
A
There's a bunch of things to write up and talk about. This is an opportunity for all of you to get your name out there. Even if you didn't, and I'll say this, even if you didn't directly work on one of these things, we still need volunteers to write up stuff.
A
It's a good way to learn how Meshery works, by trying to write some of it down. Even if you can't do the full blog post, that's fine; get some words on paper. It just helps move things forward. The intention, then, is for us to have some docs updated, the Meshery Docs, and also to have an announcement that goes out.
A
That looks somewhat similar to this: talking about what the release is and the highlights, and then linking people out to in-depth, individual posts on each of these features. Take a look; put your name on things if you like. Of the things that we need to do before we make the release: a few of you have draft blog posts out there and ready for review.
A
So that's another way to get involved: review the existing drafts of others. To highlight one that we have right now: Asuko, whom I think a lot of you haven't had the opportunity to speak with, is a maintainer, actually, and he'd been focused on the Linkerd adapter for a long time; now he's focused across the adapters. He wrote up a bit about MeshKit and the Meshery Adapter Library. It's the Layer5 repo that has the layer5.io site; an entirely new version of that site will be released soon, very soon, and it'll be much improved. In the meantime, here's a link to Asuko's draft. At this point, I think it's a draft that he started, but a number of other people have begun to add to it, and this type of a thing is something that Asuko sees a lot. He lives in China.
A
Okay. Along with the release itself, and the posts that describe the features, we're getting to a point wherein our processes, they have been and they continue to evolve. One of those is around, well, how do we do testing? How do we ensure the quality of the software, the quality of the features: that they're doing what they said they would do, and that, as we make the next release, we don't regress in quality?
A
There are two documents that are oftentimes used to help characterize, and provide some process around, tests and how they're made. Certainly there's a long, well, an in-depth document out there about how we do build and release. It's the Meshery build and release document, or build and release strategy, and it lays out how builds are done very well, and how the artifacts are released very well. Really tightly coupled with that is the notion that we would, and this document really hasn't been written, but it's more focused on quality: what tools are we using? What are the criteria for making a release? What tools are we using to perform tests?
A
So it's a call for volunteers to articulate that more. Part of what we would probably, no, not probably, but for sure write in here is what, if any, gating criteria we have for making a release. How many open high-severity issues can there be? Do we make beta releases, and then do we have release candidates? The answer is: yeah, we do these things. We make beta releases; we have release candidates. Well, what's the graduation?
A
What's
the
criteria
by
which
we
graduate
from
one
stage
the
next,
so
this
just
needs
articulated
in
a
general
form,
while
a
test
plan
is
potentially
more
of
a
more
like
a
spreadsheet,
probably
which
is
much
more
specific
to
the
software
under
test
like
what?
What
cases?
What
test
cases
are
you
running?
These
would
be.
You
know
high-level
integration
tests
so
so
in
some
respects
it's
kind
of
a
catalog
of
functionality
that
mesri
has
and
then
verifying
that
that
functionality
is
in
fact
intact
and
functions
as
it
should
so.
A
The
discussion
here
isn't
to
wrap
a
bunch
of
slow
process
around
what
we're
trying
to
accomplish
it's
more
to
help
inspire
confidence
for
users;
confidence
for
ourselves
that
we're
not
collectively
that
all
of
you
aren't
having
to
answer
or
both
haphazardly
and
somewhat
urgently
try
to
fix
issues
and
answer
users.
Questions
on
why
they're
having
problems
with
a
certain
area?
A
We
want
to
take
you
know
very
much
so
an
automated
approach
to
to
these
these
things,
I'm
just
like
the
highly
highly
autumn
player
like
actually
the
entirely
automated
approach
that
we
have
to
building
and
releasing
there's
only
one
manual
step
at
this
point
when
we
make
well
there's
two,
when
you
make
a
release,
the
the
one
manual
step
is
when
we
make
a
release.
Is
a
human
goes
over
and
clicks
the
release
button,
so
they
they
give
it
a
name.
A
You
know
mesherie
version
0.5
0.,
they
click
the
release
button
and
that's
intentional,
because
it's
just
we're
continually
releasing
into
the
edge
release
channel
when
we
want
to
make
a
stable
release
all
of
us
first.
How
do
we
know
when
we
want
to?
How
is
that
defined?
Well,
we
should
probably
write
that
down
to
since
that's
a
manual
thing
and
we're
going
to
be
releasing
to
stable,
builds
into
that
release,
channel
it's
more
of
a
human
thing
to
press
the
button.
A
But it just assumes that you're running the latest. We need to have a dropdown somewhere that lets you switch between doc versions, and in order to do that, we actually have to have different versions of the docs: like, you know, individual copies of the docs.
C
Lee, I just have a quick question: is there a spot where somebody like myself, Stephen Miller, could upload some templates for unit, system, and source testing? They're pretty good, you know, from a former life. And the second question was regarding your testing: do you use versioning, or rather, document versioning? Do you use any component of SharePoint document libraries, or Teams, or anything like that? Or is this manual-type versioning that also just goes into GitHub?
A
Yeah
great
question:
yeah
thanks
steven
fantastic
to
answer
your
first
question.
Here's
a
this
link
here
goes
out
to
to
the
community.
The
shared
community
drive,
there's
a
folder
in
there,
there's
the
so
just
to
sort
of
orient
us
and
navigate
us
for
a
moment.
A
There's
the
dimensionary
folder
and
the
build
and
release
folder,
and
in
here
it's
a
it's
somewhat
light
of
a
folder
like
hey,
there's
the
release
strategy,
there's
copies
of
like
earlier
drafts
of
of
releases,
so
we
the
one
that
we
were
just
looking
at,
but
that's
it
like
it's
missing
some
of
those
artifacts
that
you
were
just
talking
about
so
so
yeah,
please!
This
is
a
good.
This
is
a
good
area.
A
Templates, kind of an approach to, you know, defining and looking at integration tests, unit tests, that kind of thing: that'd be great. Thank you.
D
Yeah, thank you. May I just say a few words?

A
Sure.

D
If I want to put something into that test strategy, should I just go and put in what I have to say?

A
Yeah, please; that'd be great.

D
Okay. The other thing I wanted to say is:
D
There are a lot of testing frameworks now. The problem is, you have to keep the testing framework updated with the development, because if you don't, the testing framework will break, and its value is diminished.
D
So maybe you can have a system whereby the testing framework is part of the development, so that as you create something, you automatically, programmatically, create a counterpart: like you have a test development project along with your main one. I mean, I'm sure you have your own system, and it's not that I'm advocating taking off on a tangent, but just a couple of thoughts.
A
Thanks, those are lovely thoughts. I have some things to say about this. Just before I do, just before I forget, the second question that Stephen had asked: Stephen, to your question about the versioning of the docs, yes, they are stored in GitHub.
A
I'll
put
a
link
to
where
those
are
I'll
put
it
under
this
here,
slash
docs
and
yep.
There's
there
they
run
on
jekyll.
There's
a
in
the
community
drive.
There's
a
google
doc
that
talks
about
the
the
docs
so
that
talks
about
the
fact
that
it's
it's
like
based
on
jekyll
uses
this,
and
actually,
if
you
spend
any
time
thinking
about
it,
you
please
do
ping
on
he
put
together
a
short
doc
that
talks
about
different
options
to
approach
versioning
of
the
documentation.
A
Very
good
and
then
back
to
vijay's
comments.
Vijay,
I
you
know.
What's
funny
is
that's
your
first
comment
about
recognizing
the
value
of
unit
tests
or
integration
or
any
any
testing
framework?
Is
that
on
the
surface
of
it
highly
valuable,
because
you'll
you'll
have
confidence
that
things
are
working
and
then,
as
soon
as
a
change
is
made
to
the
code
that
it's
testing?
A
Oh
you,
you
just
broke
the
unit,
you
just
broke
the
test
case
or
the
you
know.
The
test
harness
itself
needs
to
be
updated,
and
so
then,
all
of
a
sudden
you've
got
about
twice
as
much
work
as
you'd
had
before,
because
now
you've
got
to
carry
those
test
cases
along
with
you,
as
you
make
changes
to
the
behavior.
This
is
oh
wow.
What
up?
A
It's so frustrating, because half the time you don't know: is that test case failing because the behavior of the code is wrong, or is the test case failing because whoever changed the code recently forgot to update the test case? And so, when an initial...
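As an aside for readers of these minutes, the failure mode being described can be sketched minimally. This is a hypothetical illustration, not Meshery code; the function and its values are made up.

```python
# Hypothetical example (not Meshery code): a unit test pins down today's
# intended behavior of a function.

def shipping_cost(weight_kg):
    """Flat rate under 1 kg, then a per-kg rate above that."""
    if weight_kg < 1:
        return 5.0
    return 5.0 + 2.0 * (weight_kg - 1)

def test_shipping_cost():
    # These assertions encode the *current* pricing rule.
    assert shipping_cost(0.5) == 5.0
    assert shipping_cost(3) == 9.0

test_shipping_cost()

# If the pricing rule is intentionally changed later, these assertions
# fail too, and the failure alone cannot tell you which half is stale:
# the code or the test. That ambiguity is the maintenance cost discussed
# in the conversation above.
```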
C
That's a good argument for, I'm sorry, there's a lag here, I guess. That's a good argument for CI/CD and DevOps: the way you check out a branch and check in a branch, and you can always revert. And you might use something like Jenkins, I don't know if you use that or not, and it checks the branch to make sure that it will compile and will, hopefully, not generate errors.
A
Yeah, that's a great point. The builds are done in GitHub Actions, so, like Jenkins. And recently there's been the addition of the use of Codecov, to help inform us as to what percentage of the code that's been written, the functions that are there, have unit tests written to test out those functions, and then how many of those are passing.
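For context in the minutes, a CI job of the kind described, GitHub Actions running tests and reporting coverage to Codecov, might look roughly like the sketch below. This is an illustrative assumption, not Meshery's actual workflow; the job name, action versions, and paths are made up.

```yaml
# Hypothetical workflow sketch; not Meshery's actual CI configuration.
name: test-and-coverage
on: [push, pull_request]
jobs:
  unit-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-go@v2
      # Run unit tests and write a coverage profile.
      - name: Run unit tests
        run: go test ./... -coverprofile=coverage.out
      # Upload the profile so Codecov can report coverage per file/function.
      - name: Upload coverage to Codecov
        uses: codecov/codecov-action@v1
        with:
          file: ./coverage.out
```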
A
And so there's a blog post here that I think even I would benefit from going back to read. The link to this blog post is a link to the new layer5.io site that will be published soon. Yeah, could you repeat that? Selenium?
C
Oh yes, yeah. And there's a great, actually, a great YouTube channel and posting that's like ten and a half hours long, and this guy goes all through that CI/CD cycle, and he mentions, he talks about, using Selenium in there.
A
Yeah, very good. So this post by Rodolfo highlights the fact that we've begun to use Cypress for the UI, and he'd given a presentation on this topic, about how, to your point, Vijay, to avoid fragile test cases: how to make sure that the test cases you're writing are as resilient as possible to change, because, yeah, you want to avoid that overhead.
A
So
he
he
wrote
up
a
present
he's
got
a
deck
inside
of
the
google
drive
on
best
practices,
so
how
to
write
tests
with
cyprus
and
there's
a
recording
of
him
presenting
that?
I'm
not
sure
it
was
it's
just
a
little
while
ago.
A
A
I
encourage
you
guys
to
to
dive
in
there
he's
he's
nice
guy
he's,
leading
leading
up
that
effort
and
and
there's
there's
many
more
functional
tests
to
be
written
vj.
A
I
think
part
of
what
you
were
sort
of
driving
at
was
maybe
not
entirely
test
driven
development,
because
that
doesn't
necessarily
mean
that
your
tests,
your
test,
harness
your
test
framework,
is
any
less
fragile
but
as
part
of
the
maybe
part
of
what
you
were
bringing
up,
I
think
anyway,
the
short
of
it
is
like
yeah,
hey,
cypress
ui
is
probably
the
place
to
go
dig
into.
A
And
cool
okay,
so
next
topics
up
were
well
is
augustine
on.
E
And then also check the test configuration, which they can save and use to run a future test. And from the conversation with one of the users, we also created a tabular representation of Meshery performance management, whereby you can see the last test runs, the endpoints, and the tests you ran, which you indicate from your schedule, and it gives you the ability to check your actions, run your test, and check more actions. And then, from the conversation, we also had the user persona.
E
We
got
some
feedback
about
creating
performance
for
very
tests
which
will
run
in
clusters
and
giving
users
ability
to
save
configurations
for
their
tests
in,
in
the
sense
that,
for
future
tests
use
past
configuration
and
not
have
to
go
over
each
configuration
every
time.
They
are
running
tests
that
also
improves
the
user
experience
of
using
mystery
and
and
then
having
an
endpoint
outside
the
test
configuration.
E
Level
and
then
the
ability
to
also
share
the
tests
for
later
day
for
later
dates
that
you
can
choose.
For
example,
you
want
to
run
the
test
bi-weekly,
give
you
the
ability
to
run
your
test
or
share.
Do
your
tests,
wherever,
whenever
you
want
bi-weekly
weekly
monthly,
depending
on
your
your
choice,
so
that
these
these
are
a
little
more
costly.
We've
been
able
to
draw
from
user
than
user
flow
and
on
the
user
journey
to
improve
on
the
user.
Experience
of.
E
Oh, and Neha, she will be unable to join the call. So the feedback: these were a few of the feedback items, which I wrote down for the documentation. It was also recommended that we allow the user to store past configurations, so that also has to improve the user experience, and also a recurring and scheduling feature at the test level, and not the individual test level. So those are the major feedback items, and I'm working on the endpoint outside the configuration.
A
Oh, nice. Okay, good! I'm glad. Thanks for meeting with her and taking her through the thinking around this new feature. If many of you are like me, we are starved for a bit of feedback.
A
So we want to spend as much time with users as we can. There are a number of people using Meshery; there's a little over 700 people who've downloaded it, tried it, and done things with it recently, this last month. About two weeks ago, someone used the performance testing capability, and they'd used it pretty intensely: I don't know, ran many hundreds of tests, and that's fantastic. And so, getting some feedback from them...
A
Okay, good! Well, let me give everyone else a little bit more context here, so that people can weigh in. Here's where we are today with Meshery and its performance capability. I think the best way of articulating it is that there are kind of two terms to articulate what Meshery does around performance. One is performance management.
It does a bit of performance characterization: generate load, analyze your environment. There are a few of you in the community who are of a data science background and are working on algorithms to help users better tune their environments automatically. Meshery doesn't do that today, but that's kind of where we're headed. So the thing that you can do today is run an individual performance test and, as a user, Meshery will discover what service meshes you have deployed.
A
So
I
have
a
few
and
you
can
give
your
you
can
go,
create
a
test
and
give
it
certain
parameters
identify
what
endpoints
you
want
to
hit
and
how
hard
and
all
these
things
after
a
while,
if
you're
using
this
intensely
and
you're
having
to
type
in
basically
each
all
this
every
time
you
run
a
test,
it
gets
old,
fast
right
having
to
type
it
all
in
so
the
the
feature,
then,
is
to
have
a
performance
test
profile
so
that,
after
you
fill
this
in
when
you're,
when
you're
done,
you
can
just
click.
Look at the results in the context of that profile. So, if you configured certain parameters of the test that you want to run, and you run that same profile a bunch of times, over that same test, great: then maybe structure the results in that same way, so that I can go over to, oh, the soak test, go over there, and look at how many times that was run and how it's changed over time. And okay, good, that's helpful to me. Or, oh, it's the soak test for that application.
A
Okay,
and
so
that's
what
we're
hoping
to
improve
that
that's
kind
of
the
problem
statement.
There's
a
in
the
meeting
minutes.
We
went
ahead
and
put
in
a
link
to
a
discussion.
That's
in
slack
that
talks
about
what
the
feature
is
supposed
to
do.
What
questions
it's
supposed
to
answer!
Help
people
answer!
So
here's
the
discussion
in
the
performance
channel
on
letting
people
know
hey
when
what
was
the
last
performance
test
that
was
run.
A
What were the results? How many performance profiles do I have? Just, you know, these various things. So, Augustine has been working on these mock-ups. I went and took a copy of one of the mock-ups and just put it into the meeting minutes, so you can check it out more in depth. Augustine, if you don't mind, that mock-up tool that you're using, the wireframing tool, Figma: two things. Go ahead and put a link to it here, so that people can go look at it more in depth, and then, if you would, the notes from Neha here would be great as well. She basically gave three notes.
A
If
I
understand
correctly,
one
of
the
notes
is
like
like,
in
my
mind,
very
prominently,
a
positive
thing
and
something
that
should
be
done.
She
was
essentially
calling
for
the
the
split
out
of
a
new
logical
construct
and
what
she
had
said
was
this
endpoint
that
you
have
here
like
what
application
you're
going
to
test
or
what
service
you're
going
to
send
load
to
her
comment
is
hey.
I
have
a
lot
of
those
and
actually
I
track
different
things
like
they
have
names.
They
have
different
versions.
A
Basically,
these
sort
of
represent
the
various
faucets
of
my
applications,
my
workloads,
it
might
be
nice
that,
like
I've,
got
a
soak
test
configuration
that
says,
like
you,
know,
hammer
on
this
thing
at
this,
this
much.
For
you
know
this
long
and
and
so
great.
I
want
to
run
this
soap
test
against
that
end
point
which
is
this
here.
I
guess
that
one,
but
I
also
want
to
run
that
same
soak
test
against
that
one
that
one
out
so
she's
calling
for
a
degree
of
separation
and
a
splitting
out
of.
A
End
points
or
applications
or
whatever
we
want
to
call,
though
she's
asking
hey,
can
those
be
split
out
separately
so
now
I
can
have
a
performance
profile
that
I
can
save
and
recall
and
tune
and
tweak
over
time.
I
can
associate
with
that
with
my
different
endpoints
marry,
those
up
ad
hoc
run
a
test,
or
I
can
marry
those
up
and
schedule
a
recurring
test.
D
It has a CLI and a YAML-driven interface for launching things, so you can have a graphic interface, like the one that you have, or you might be able to script the same thing and do that from the command line. And so you might be able to automate that by writing an entire script, or you can simply make it declarative and specify it. I mean, I don't know if all those things are applicable here.
D
I don't want to say too much, you see, because I haven't been a regular member. I mean, I try to read this stuff, but I don't want to simply come here one day and talk a lot and then disappear.
A
The thing is, no, it actually gives us an excuse to talk about it a bit more, because there are many of us, there's a lot going on, there's a lot of us that are here in the same boat as you, which is: there are facets of what we're doing that you've been involved in and understand, and there's other stuff going on.
My infrastructure is still configured in such a way that I won't have performance degradation for my users, so I'd like to just have this component of Meshery as part of my own CI process, and I don't need a UI there. As a matter of fact, being forced to do mouse clicking there, that's not going to work for my automation. So, two things that you mentioned make a ton of sense. Well, one: Meshery has a REST...
A
Excuse
me
a
rest
api
which
people
can
use
to
invoke
these
tests.
Great,
sometimes
that's
easy.
Sometimes
that's
the
preferred
api.
Sometimes
that
sounds
like
a
messy
developer
thing,
so
there's
messy
ctl.
It
has
the
command
perf
mastery
ctl
perth,
that
lets
you
specify
the
exact
same
parameters
that
you
see
in
the
ui
about
what
endpoint
you
want
to
test,
how
long
you
want
to
do
it
and,
as
a
matter
of
fact
like
to
to
your
next
point
about
okay,
that's
good
I
can
get.
I
can
programmatically
invoke
this.
A
You
know
simply
by
using
a
command
line,
client,
and
I
could
embed
that
into
my
ci
process
or
something
that
but
jeez.
What?
If
again,
if
I
have
these
performance
test
profiles
and
I
want
to
reuse
them,
can
I
can't
I
just
like
save
that
into
a
file
and
it's
like
yeah?
Actually
that's
what
this
is
smp
spec
service
mesh
performance
specification,
it's
a
specification
by
which
you're
able
to
say
in
a
in
a
yaml
based
format
like
hey
I'd
like
to.
A
I
would
like
to
take
this
test
case
and
represent
it
in
a
static
format
and
so
over
here,
oh,
you
could
do
a
measure,
ctl
perf,
hyphen,
f,
for
file
or
dash
dash
file
either
and
then
say
so.
It
contains
an
s.
P
compatible
test
configuration
nice.
So
this
is
my
soap
test
a
or
what
you
know
and
then.
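To make the idea concrete for the minutes: an SMP-style performance test profile saved to a file might look something like the sketch below. The field names here are illustrative assumptions, not the authoritative SMP schema; consult the Service Mesh Performance specification for the real format.

```yaml
# Hypothetical SMP-style profile sketch; field names are illustrative,
# not the authoritative Service Mesh Performance schema.
name: soak-test-a
duration: "30m"          # how long to sustain load
load_generator: fortio   # the tool used to generate load
clients:
  - connections: 16
    rps: 200             # requests per second
    endpoint_urls:
      - http://sample-app.example.com/productpage
```

Per the discussion above, such a file would then be fed to the CLI with something like `mesheryctl perf --file soak-test-a.yaml` (flag as described in the meeting; exact syntax may differ by release).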
D
Yes, so I see that you have some stuff, you know, I mean, you have that stuff. I didn't realize.
A
Vijay, actually, specifically for some of the things that you've helped advance in the Meshery project: Abhishek, so, Vijay had been instrumental in the discovery of the Prometheus and Grafana instances that are floating around, like, as part of MeshSync. Abhishek?
B
One of them is that we get the MeshSync data, in which we've got the Prometheus and Grafana pods, or application endpoints, and basically we're using that data, leveraging that data, to auto-discover, or directly fetch, the endpoints: a smart way to connect to those endpoints while Meshery is getting started. That was the whole idea around it.
B
So
my
implementation
is
that
I
created
a
graphql
endpoint,
in
which
I
defined
a
couple
of
queries
which
would
answer
to
the
available
the
available
parameters
and
graphing
endpoints
according
from
the
messaging
data
or
whatever
we
have
so
we
don't
have
to
individually
do
do
a
api
call
to
discover
these
parameters
and
graphene
applications,
but
instead
we
can
leverage
on
the
data
and
then
like.
Basically,
the
graphql
endpoint
would
be
the
only
go
to
endpoint
to
you
know,
sort
of
to
fetch
information
about
these
two.
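As a sketch of the kind of query being described here (the names are hypothetical; this is not Meshery's actual GraphQL schema):

```graphql
# Hypothetical query shape; field names are illustrative,
# not Meshery's actual schema.
query DiscoverMetricsEndpoints {
  prometheus {        # instances discovered via MeshSync data
    name
    endpoint
  }
  grafana {
    name
    endpoint
  }
}
```

A single query like this would let the UI populate both dropdowns from one go-to endpoint, instead of issuing separate discovery API calls for each system.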
D
Oh, that is great stuff. So now, in the UI, it automatically populates Grafana and Prometheus?
B
The whole integration has not been done, because I've brought it up to the GraphQL endpoint. Basically, the UI, the chips that we've defined, need to be, you know, sort of configured to call those GraphQL endpoints and get the data. Yes.
D
Sorry, let me interrupt you. That was exactly what I was thinking: it wasn't calling the API. And also, I had a couple of thoughts on that user interface. Right now, it is a dropdown box, and there is a preference object which gets populated, but that preference object, the way it is structured right now, holds only one instance. So what I think we need to do is change the preference object so that it can hold a list, or an array, of all the Grafana and Prometheus properties. I don't know if you're using the preference object; I think the idea behind the preference object is what the user has saved as, like, you know, the preferred instance or something like that. Is that correct?
B
Correct, you're exactly right. That's the main reason why we've not touched the front-end part yet: the whole design principle needs to be changed. We'll need to accommodate, or we'll need to account for, multiple instances of Prometheus and Grafana, more specific to each and every service mesh, maybe. So that's a part where work needs to be done.
D
Right. So I actually, you know, fell behind. I looked into this a lot; I tested a lot. And the problem is, I did not know React, but, you know, now I've picked up React a little bit. The last couple of months have been really a little rough, and I hate to say this, because, Lee, I know you're taking on a lot, lot more than that, but I haven't been able to do that. But these were some of the things that I was thinking about.
D
I was trying to run some tests; I had some debugging code in JavaScript that I was trying to put in. So I did notice, I think you have changed the back end, but I did notice that there is a particular piece of code: what it looks for is, it parses the Grafana and Prometheus keywords and gets all those particular instances out, right? Is that the code? Are you still using that, or is there something else now?
D
Okay, that's fine! So anyway, at that time, that was returning, you know, a Grafana instance. But in any case, I suspected there was something: one of the things that I was trying to test, that I was in the process of testing, is that the front end is not being populated because it's never being caught. That's what I thought, and it looks like you're saying the same thing, right?
A
It won't auto-register, though. And to your point about an array, like, not just having a singular registration: Meshery right now, we need to have an environment, an actual environment construct, that says not just that, but also an array for these instances, such that it is valid that you might have multiple instances of Prometheus, multiple instances of Grafana. It is valid that maybe that cluster is using that Grafana and that Prometheus, and this cluster is using that one. And so cluster and environment, they're not synonymous.
A
What's the relational model in which these come together, such that, yeah, such that then the mission that you were on some time ago could actually be achieved? Because without having a logical model for how these things are to relate, what does it mean if a user registers, you know, one instance in their preference, but not another? And so this logical model is to help with that: to be able to describe, okay, hey, let's allow people to... Right now, Meshery doesn't really know of an environment; it doesn't have that noun in its object model vocabulary. There's sort of an implicit, singular environment, in which Meshery can connect to one Kubernetes cluster at a time. It can connect to different instances, but only one at a time, and it needs to be up-leveled to say: wait.
A
Now, when I go through, like, assuming that that was there, when they were to go through the UI and they were to go over to the management console: instead of it being implicit, like right now, it's implicitly just defined that when I mess with this namespace, that this namespace belongs to the only, the singular, Kubernetes cluster that I can connect to. Wait...
A
Basically, we need to then bring forth a switcher, a construct that lets people move between environments and apply these things between environments. We're about ready, like, and by "we" I always mean inclusive of everyone that's on this call, but in this case I also sort of mean Meshery as a piece of software: it has reached a phase of maturity such that it's ready to take on that type of a complex...
...thing. And it's not the bad kind of complex, no; it's not like that's not achievable. It's more just about us humans writing down some words that say: an environment allows you... You can have these things inside the environment; you can have, you know, one, or more than one, Grafana instance in an environment, or you can't. We just need to make up those rules, and that's not very difficult.
A
So, we're right at the top of the hour. Obviously, I was pretty excited about some progress that you had just reported in Slack, and so I quickly tossed this up here. Can you speak to this for just a moment?
B
Oh well, not big progress, I would say, but I was able to sort of build the whole Nighthawk application: the client, the server, and the test server. So we are ready to move ahead with the rest of the process, which involves creating the CI side of it and whatnot.
A
Nice, that's great. So, as we're about to end here, for those that are interested, if anyone's interested in this project, here's the link to the doc; it's in the meeting minutes. There are a number of things called out at the very bottom of this doc.
A
Oh
speaking
of
workflow
engine
analysis,
we'll
probably
invite
the
caverno
folks
to
present
on
friday,
the
community
call
because
it's
not
just
a
workflow
engine
that
we'll
have
in
mesheri,
but
it's
also
a
policy
engine.
A
Nice to have you all. Yeah, Alonso, you're not off the hook on the international language plugin; I see you there. Cool, okay, all right! Thanks, all. See you guys next week; talk to you later.