From YouTube: Meshery Community Meeting - June 7th 2019
Description
A discussion on `mesheryctl` in this Meshery community meeting.
See https://layer5.io/meshery for more @layer5 Meshery project details and other recorded meetings.
A: Okay, good. We've got a couple of questions here and some general items to cover today. One of those is whether or not we should move the meeting time to, I guess it would be, 10 a.m. Central, 8 a.m. Pacific. The thought in doing so is that Harshini, Paul, and a few other folks that are in different time zones might be able to attend. Right now, I think, one, it's Friday, so anything really late eats into the weekend; but two, it's extraordinarily late for them. So I'm going to go ahead and tag people in the meeting minutes, asking that question, to see: if we did move the time, would it be helpful to them? Would they be able to attend? I know we're going to be a little bit light on attendees this week. I think both Nick and Eric are at HashiCorp's conference, or I forget the name of the conference that they're presenting at this week.

A: It's really just something of a regurgitation of a test plan we already have, so that we can submit it along with the cluster time request, and people can see more in-depth what we're looking to do. Part of what that request is, is to identify the ideal systems that we would have time on, and so a question to the community is: what would be the ideal gear? In my mind, maybe there are two answers. One of those answers is: if we're running on bare metal, then we may be able to eliminate more variables and be able to experiment with a couple of other software capabilities that are close to the metal. But that's one way of looking at it. Hey, Saco. Oh good, we were just talking about the cluster time and what type of infrastructure we might want to request. So the other way of looking at this is requesting VMs, such that it's maybe more representative of the majority of people who are going to be running these service meshes. They're likely going to be running them in VMs in the cloud. Not always; clearly they're going to be running them on-prem as well, but that again might be in VMs. And so the question that we need to ask ourselves and answer is: is the request here ideally on bare metal, or is it all inside of virtual machines?

A: You know, in terms of things to measure, things to account for. Okay, but maybe, you know, that's interesting. I think it's interesting to the service mesh maintainers, the projects, because they don't necessarily need that variable in there. But I think, to your point, most people are going to want to see it inside of a virtual machine, because that's where they are.

C: Really, the only thing is, the answer is going to... Actually, I think it's less a preference of the people here versus the capability to run Kubernetes on it, because what we are testing is actually the service meshes, and they're running on Kubernetes. So, I mean, I don't know; it works if they have a very close-to-bare-metal way of running Kubernetes.

A: Okay, I'm going to take some notes to document what we said about the potential use of bare metal benefiting part of the research, but the commonplace scenario being that people are running in VMs. Thinking about it, I guess what we would want, I'm assuming, is not necessarily a small number of very, very beefy systems, but probably a higher number of moderately sized VMs, so that we can get to a very high number of services and a very high cluster node count.

A: I saw an article come through this morning from a friend here in town, Rob Hirschfeld. Maybe we should reach out to him and ask if he wants to participate, because bare metal provisioning is part of his focus. He recently took a survey and put out some data on the average size of a Kubernetes cluster. That is of interest, because, you know, it's not so much the average as it's the mode: what are most people running, and at what size?

A: Yeah, you know, that's funny; that's a good idea. It is in the context of a project, but yeah, if there were a second time where we wanted to come back, and the set of tests were dramatically different from the first time, then, if that's compelling enough, we'll use it twice.

C: We haven't... I think most of the items are too loose here. They don't have a definition yet; those have to be created. So, yeah, we know we're not good there, so it'll be a good thing to persuade them, for sure. And at the moment, yeah, we don't have anybody asking for it, but I think it is becoming a common use case, because I'm working on this other Prometheus piece where this would actually be helpful, because, you know, we would also want to persist...

C: ...you know, some kind of board settings or panel settings for a user. So, yeah, probably that would be a very good thing for me to actually start working on now, or somebody else who can get to it before me. It'll be a good one to work on; it'll be a combination of UI and back-end work that would need to be done.

A: Okay. We have had the request for the CSV export a couple of times; we probably need to have an issue for it here. But I think part of what we need to consider here is that one of the goals of the project has been to create a performance benchmark specification that captures the result set:

A: the information about your cluster, your environment, the type of tests that you're running. And I guess the question is whether what people are asking for... I think when they ask for it, they're thinking just about the performance results, and they're not thinking holistically about the fact that they also need to capture their environment. What info... yep.

A: Okay, so this probably needs a little bit of discussion about how we facilitate that. I think, and let me hear what you guys think, that it's probably appropriate that we do facilitate it: that people export the individual result sets in a way that is inclusive of their environment-specific and test-specific configuration.

C: Sorry, go on... My take on that would be: absolutely. I mean, without that, yeah, like you said, without context, the results are not going to be that meaningful, for sure. And on the context, there are many more pieces that come together: not just the load generator, but also the versions of whatever you're running, and maybe we should look at including some fundamental information about the application that's running as well.

B: You don't always need to share these details, but you can, for example, give them as an option: hey, you can run the test, and all of these are saved, and you can share them. Most of the time you don't need it in very specific detail; maybe, at most, what you need you can get from kubectl, kind of, yeah.

A: I think that, yeah, there's a poor man's version that they can hobble along with right now. Girish, the work that you've been doing to interface with Prometheus more directly, rather than via Grafana: with the direct Prometheus interaction, and the notion that there would be a Prometheus daemon set, like a node exporter, in most environments, we would then be able to collect a lot of that information from it, right? We'd be able to use a combination of that for the memory and CPU resources per node, but we'd also be able to use, to your point...

A: ...Saco, some kubectl to garner a bit of specifics about the software environment, and then, for the specific mesh, the service-mesh-specific adapter, the thing used in that mesh's deployment. Each Meshery adapter should facilitate gathering information about that specific service mesh's version.

C: I mean, fundamentally, everything is actually talking to the kube API, so, you know, we have the capability to get the infrastructure details, whatever is known to kubectl or the kube API server, because I'm pretty sure the API server has quite a bit of information. We can get the software details, like the version of the container runtime and so on, but it also collects node-level details, like the total number of nodes, the CPU, memory, etc.

C: So we should be able to get most of the details from there. I think once we do it... I mean, I would say I will tie this in with the other effort, where we need to move the common adapter code into a library, so that this will be out there along with the other common code.

A: Because if it is going to source from maybe three areas: it needs to source from the service mesh, right, and ask about that service mesh, so it needs to source from each Meshery adapter; then it needs to source from the nodes, and that's in Prometheus, the nodes themselves; and it needs to source from, in this case, the load generator, whether it's fortio or another. Yeah, that's when you think about what's going to be bundled inside of an exported result. And then, potentially, you know, it sources from Kubernetes as well.

A: It also needs to source from Meshery itself. When someone goes in to configure a test, they're going to say, you know, a certain thread count, a certain number of requests per second, for five minutes or for twenty minutes. I think that data needs to be captured as well. The spec that I was sharing a moment ago, the repo, the one that I was sharing: does that ring a bell, Saco?

C: We will probably only capture, like, line numbers 52 to 56, for example; you know, it has samples there. There are a couple of ways we could go about it. I mean, every time a test is run, there is a collection of data points that are generated, essentially a time series. We can capture... I mean, we already are capturing that in our database, so we could just append it here, while also capturing all the latencies, because the latencies, those are the summary results.

C: But if people are interested in the complete data sets, you know, you should probably have those as well, so that if people want to chart it out, they can compare them with others; that would be nice. So, in addition to the latency summary, we'd also have the actual data points of it, but those would be, like, per-second data points.

C: We will probably not include the server-side metrics here, but maybe we could capture some high-level metrics, like the CPU and memory, a few of them. But, yeah, I mean, we need to think about that; I know that's a stretch. But at least as a starting point, a target may be to actually include, you know, the data points and the latencies: so, essentially, the p50, 75, 90, 99, and 99.9 latencies. Okay.

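The summary percentiles mentioned (p50, 75, 90, 99, 99.9) can be reduced from a run's raw latency data points along these lines. This is a minimal sketch using the nearest-rank method; a real load generator's aggregation (histogram buckets, interpolation) may differ:

```python
def percentile(samples, p):
    """Nearest-rank percentile: p in (0, 100]; samples need not be sorted."""
    ordered = sorted(samples)
    rank = max(1, -(-len(ordered) * p // 100))  # ceil(len * p / 100)
    return ordered[int(rank) - 1]

def summarize_latencies(latencies_ms):
    """Reduce a full run's latency data points to the summary results."""
    return {f"p{p}": percentile(latencies_ms, p) for p in (50, 75, 90, 99, 99.9)}

# e.g. 1000 evenly spread latency samples from 1 ms to 1000 ms
samples = list(range(1, 1001))
print(summarize_latencies(samples))
```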
A: Yep, makes a lot of sense to me. Actually, not only are most people probably interested in the summaries, but the summaries are also a lot easier to provide people, as opposed to all of the data points. So maybe for this week that's probably enough discussion on that, Saco. I think we can iterate on it, and maybe I'll take on the task of writing out the spec that I referred to before, you know, the design spec, just to help move this one along, since I know we shifted your focus.

B: That should work, but then I thought probably adding it as a config, yeah, like as an umbrella, and then having, like, the platform defined, and the service account, and maybe the namespace, which could have some default values, but which you can also define. It should be agnostic, like EKS, GKE, or whatever, because then, if you have this option, people won't have this issue. For example, for AWS you have the IAM authenticator stuff; for GKE we have gcloud and such, so you won't have this.

B: The platform... actually, that's the point: it would be agnostic, right? It doesn't matter; it can be GKE. Here's the key: basically, you will just need to have access to kubectl, let's say just to create the service and the connections, and then you'll create the setup and generate the config. Pretty much it will be agnostic of the cloud platform.

A: ...that as they're going to use Meshery, they may consider that they want to use mesheryctl as, like, the main way of interfacing with Meshery. Maybe it depends on what they're trying to do: if it's, you know, deploy a mesh, then run a performance test, then shut it down and gather the results as part of their CI process, which some of the projects are wanting to do, right?

A: Yeah, when you go to... I mean, across the supported platforms: when you go to do a Meshery deployment on minikube, there are considerations; when you go to do it on GKE or AWS, you may need to provide credentials at the command line; when you're going to do it for Kubernetes generally, there's another credential, a lot of times it's the kubeconfig, but it could be different credentials that are provided; and when you go to do that for Docker Compose... anyway.

A: My point is, if you think about that use case, which is more or less the use case that Saco is working on right now: for me, what do we consider would be the best syntax for this? `mesheryctl platform config`, or `mesheryctl config -p <platform>`, to do something? Because people may use `mesheryctl config` later; I'm not sure. They might, or we may want to later expose certain service mesh configuration.

B: Currently, yeah, that's kind of how it is: you can run mesheryctl with `-p`, and probably it will be picking that up. But yeah, that also... we can define it either now or later; we can leave it, get more time to discuss it, and I can just change it accordingly. I definitely like your idea of hitting the API from the backend for running the tests; that could be really great. Think of it as basically exposing it, like other tooling, for starting the test. And again, I guess we should be...

C: Sorry, yeah, sorry, I was just thinking about this. There are actually a few things that will first need to happen in Meshery to support this. The way it actually works today is that, in a sense, the UI is actually managing, or working with, Meshery through sessions. So we will have to bring that capability into mesheryctl to support this, or we have to enable the use of some kind of tokens.

C: The other thing is, before we actually go about, like, you know, talking about the specifics: I mean, this is a CLI, just like the Docker CLI, where, you know, the Docker CLI can actually talk to a remote Docker host. The same way, you know, you can have the Meshery CLI somewhere while the Meshery instance itself is, like, elsewhere. Which means you also have to provide, you know, the location of the host, like, you know, where Meshery is running. So we have to start from there.

C: Right, so I'd put it here, yeah, later, just like the way you were approaching it: it'll be, like, `mesheryctl config`-based, and then minikube would probably be a provider. So, just like how Saco was saying, maybe a `-p` flag, like, you know, for minikube or AWS EKS or whatever; we can call that a provider. Or we could just only provide a context, like, you know, the way kubectl does it.

C: You know, config as the bigger capability, while the generation of the context is kind of like a sub-capability under that. So we just need to draw a line; you know, how far do you want to go with that? Yeah, maybe to start with, you know, we can just start with config, and with what I just mentioned as a provider. So I think that'll be a good thing for now.

C: This week, Saco, can you actually do us a favor? Can you actually start a Google Doc where we can have a further detailed discussion on, you know, the different options, like the ones you kind of have running down the road, so that other people can also come in, join, and look it over?

B: Likewise here; yeah, I think that would be good, yeah. Definitely, that's what we were planning. We really would like to meet this week, but I guess he had a production issue; he needed to do some installation stuff, and that's the reason he couldn't make it. But, you know, whenever; because we have had a lot of the community discussion that kind of covers the design and all this stuff, and we can get in touch and shape the experience so that it will be a good path forward, right? So, and...

B: ...we can add it all to the documentation I started, I guess. I'm not sure if... well, only if you can make it, but definitely let's discuss it once you can make the time. And meanwhile, I'll open this up and try to finish that part of the config if we can, and we can change it anytime, even if you come up with a better idea. Okay: what should be the options?