From YouTube: Meshery CI Meeting (Aug 12th, 2021)
Description
Meshery CI Meeting - Aug 12th, 2021
Join the community at https://layer5.io/community
Find Layer5 on:
GitHub: https://github.com/layer5io
Twitter: https://twitter.com/layer5
LinkedIn: https://www.linkedin.com/company/layer5
Docker Hub: https://hub.docker.com/u/layer5/
A
Welcome to the call, thanks for jumping on. Actually, we have a tradition, Yevgeny: whenever you join one of the community calls for the first time (congratulations, you're the lucky winner), we ask that you just say hi and introduce yourself briefly. You know, your favorite color, what brought you to the community, things like that. Yevgeny, do you want to say hi briefly?
B
I can quickly introduce myself. I am working as a DevOps engineer now, and I am interested in performance measurements. For my personal tasks, I am investigating some performance issues with Istio on Kubernetes, deployed via Kubespray. I have already configured one node. Very special, very interesting thing: a one-node Kubernetes cluster deployed with Kubespray. I configured it without any load balancer, nothing like MetalLB or something like this.
B
Also,
I
configured
knight
hook
serra
from
radia
from
docker
hub
images,
with
only
config
map
deployment,
config
map
with
config
and
for
external
machine.
I
have
not
required,
and
now
I'm
just
interesting,
how
it's
work
in
more
general
way,
just
not
in
hello
world
and
maybe
interesting
in
gpa
wrestling
from
gprs
measuring
performance.
B
My name is Yevgeny, if you want, and my surname is Meresco.
A
I see, okay, all right. Wait, let me write that down real quick, I'm going to forget that. But yeah, nice, Yevgeny, that was fantastic. I'm glad that we asked you to introduce yourself. So, geez, man, refresh my memory: Kubespray is Ansible-based?
B
Speed
for
a
kubernetes
deployment
on
localhost,
usually
it's
used
for
more
than
one
notice,
but
in
this
case
I'm
interested
in
deployment
in
one
another.
Only
it's
have
more
than
a
more
comfort
way
to
configure
some
additional
option.
For
example,
default
setting
for
ipvs
are
not
not
not
epitables
for
previous,
for,
for
example,
ip
list
more
for
notice.
If
you
have
some
issue
with
default,
settings
might
be
much
easier
to
convert,
make
this
setting
in
yaml
file.
It's
it's
really
simple.
A
Nice
yeah,
no
yeah,
I'm
sorry!
I
didn't
I
meant
to
say
it's
been
about
four
years
since
I've
looked
at
cube
spray.
The
cncf
had
commissioned
me
to
do
a
study
back
then
on
well
on
the
different.
Well,
how
should
you,
how
do
you
clarify
I
just
dip
before
one
boom
and
cube
spray,
and
I
and
I
had
thought
that
cube
spray
was
a
aws
only.
B
No
okay,
but
that's
kubernetes,
yeah,
it's
just
as
simple
as
possible.
You
just
need
python,
pipe
three
install
requirements
and
then
default
array
in
bash.
These
ap
addresses
make
some
and
generally
that's
all
and
just
run
or
an
additional
command
for
create
my
inventory
file
for
ansible
and
that's
all
only
one
issue
exists
for
one
node.
B
If,
if
it's
try
to
make
two
colored
corridors
put
in
coop
system-
and
I
am
little
bit
cheating-
I
just
put
two
acquire
ap
addresses
and
it's
work
as
expected.
B
But I don't know from what point to start, yeah.
A
Yeah,
no,
I
actually
said
there's
an
an
intel
engineer
or
an
engineer
at
intel.
That
was
just
asking
about
that
and.
A
Okay,
okay,
fair
enough
he's
from
so
this
guy
okay,
so
this
guy
is
from
poland.
His
name
is
robert.
We'll
have
to
might
have
to
introduce
you
in
the
community
just
because,
because
he's
kind
of
he's
using
mastery
at
the
moment
to
do
some
performance
tests-
and
he
was
just
asking
about
doing
grpc
based
load
performance
tests
and
so
so
your
inquiry.
There
is
like
very
it's
very
timeless,
but
maybe
I
have
to
chat
about
that.
A
Just
a
little
bit,
nice,
okay,
well,
you're
getting
nice
to
have
you
it's
good
good,
to
curious
to
see,
there's
a
number
of
contributors
that
are
here
who
are
curious
to
hear
feedback
from
you
and
specific
about
like
if
measuring,
is
helpful
to
you
if
the
performance
management
features
are
what
you
expect
them
to
be,
and
so
there's
a
lot
of
people
in
the
community
hungry
for
helping
hungry
for
that
feedback.
So
it's
good!
A
So
as
we
as
we
go
into
the
so
you
have
getting
by
just
to
orient
you
to
the
community,
there's
that
we
have
a
number
of
different
meetings
and
this
one
happens
to
focus
on
continuous
integration
and
kind
of
devops
related
things.
So
this
is
a
good
call
to
jump
on
to
as
a
recap,
from
the
last
time
that
we
met,
it
ended
up
being
a
bit
more
of
a
working
session.
A
The
just
the
catalyst
for
making
that
upgrade
was
that
there's
a
few
users,
a
few
contributors
who
are
now
running
apple,
silicon,
so
running
the
m1
chip
in
their
system
and
we
needed
to
get
to
the
utility
that
we
used
to
make
to
compile
mescheri's
cli
to
different
architectures.
It's
called
go
releaser
and
anyway
we
had
to
be
on
we
had
to
in
order
to
make
those
compilations.
We
had
to
be
on
160
so
and
in
general,
I'm
getting.
A
This
is
just
kind
of
a
general
update,
for
I
guess
every
everybody
on
the
call
to
kind
of
refresh
on
what
we
spoke
about
last
time
and
then,
as
we
jump
into
today's
topics
there.
This
this
is
a
general
statement
for
everyone
here,
kind
of
a
lay
down
the
gauntlet,
maybe
and
that's
to
say,
yeah.
If
you
downloaded
mesherie
recently,
you
might
notice
the
container
image
size
and
you
might.
A
You
might
have
time
to
notice
that,
because
it
might
take
time
to
download
it.
That's
not
good
part
of
the
reason
that
measures
image
container
image
size
is
as
big
as
it
is,
is
because
measuring
includes
three
different
load
generators
and
those
the
different
load.
Generators
are
integrated
into
measuring
in
different
ways.
So
the
first
one
that
measures
supported
is
fort
deal
is
a
french
is
made
by
a
frenchman,
and
he
has
corrected
me
publicly
any
number
I
think
I
was
presenting
at
dockercon
and
he
said
it's
it's
enunciated
fortio.
A
So
so
you
know
you
get
publicly
corrected
once
on
stage.
You
remember
so
anyway.
Fourth
deal
and
the
way
that
measuring
is
integrated
with
it
is
through.
Golang
is
as
basically
as
a
library,
that's
great,
some
lightweight,
or
you
know
it
like
always
works,
because
it's
a
message
written
and
going.
They
work
together.
Good,
the
the
second
one
that
was
integrated
is
wrk2.
A
Actually,
it's
a
modified.
It's
a
forked
copy
of
wrk2
with
a
slight
modification
and
actually
meshrie
has
also
added
another
modification,
and
that
is
there's
basically,
a
golang
wrapper
around
this
c
plus
project,
or
is
it
c
plus
plus,
I
think,
might
just
be
c
around
wrk
too
anyway.
The
the.
A
Embedding
a
copy
of
that
binary,
wrk2
and
having
the
right
build
environment
is
in
part
why
mesh
reserver's,
container
image
size
is
so
large
so
that
you
can
run
so.
It
can
use
that
binary
to
invoke
performance
tests.
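As a rough illustration of what any of these load generators do under the hood (issue requests concurrently, record per-request latency, summarize the results), here is a minimal sketch in Python. This is not Meshery's or Fortio's actual code; the worker count and request total are arbitrary, and it targets a throwaway local server.

```python
import statistics
import threading
import time
import urllib.request
from http.server import HTTPServer, SimpleHTTPRequestHandler

def run_load(url, total_requests=50, workers=5):
    """Fire `total_requests` GETs at `url` from `workers` threads
    and return the list of observed latencies in seconds."""
    latencies, lock = [], threading.Lock()

    def worker(n):
        for _ in range(n):
            start = time.perf_counter()
            urllib.request.urlopen(url).read()
            with lock:
                latencies.append(time.perf_counter() - start)

    threads = [threading.Thread(target=worker, args=(total_requests // workers,))
               for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return latencies

if __name__ == "__main__":
    # Spin up a throwaway local server to hammer on.
    server = HTTPServer(("127.0.0.1", 0), SimpleHTTPRequestHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    url = f"http://127.0.0.1:{server.server_port}/"
    lat = run_load(url)
    print(f"requests={len(lat)} p50={statistics.median(lat) * 1000:.2f}ms")
    server.shutdown()
```

Real load generators add rate control, percentile histograms, and protocol options on top of this basic loop.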
A
That
mesri
has
a
going
wrapper
for
around
this
c
plus
plus
project
and,
to
be
candid,
this
particular
load
generator
is
the
one
that
we
want
to
invest
more
time
into
the
one
that
we
want
to
provide
well
to
do
some
really
interesting
research.
A
Well,
based
on
your
the
services,
you
have
the
service
mesh
you're
running
the
version
of
that
mesh.
The
configuration
you
have
we
intend
to
measure
will
provide
tooling
to
help
recursively
run
a
set
of
performance
tests
that
that
feed
off
of
one
another
feed
off
the
results
of
one
versus
the
next
and
arrive
at
an
optimization
basically
run
run
an
optimization
routine,
and
so
that
project
hasn't
been
done
yet.
A
So
there's
a
lot
of
nighthawk
things
going
on
one
of
the
nighthawk
things,
while
we're
on
the
subject
I'll
just
jump
to
this
very
quickly.
Is
that
I'll
say
this
that
that
mescheri
has
been
around
for
well
a
year
and
a
half
couple
or
a
couple
of
years
now,
and
what?
What
issue
number
are
we
on
currently
we're
on.
A
Way
back
on
issue
131
back
in
2019,
we
said
hey,
why
don't
we
support
grpc,
based
load
generation
and
turns
out
today
twice
in
one
day,
people
have
mentioned
it
might
be
nice
to
to
do
that
and
it's
a
capability,
grpc
load
generation
is
the
capability
of
both
fortio
and
nighthawk,
and
so
to
add.
That
is
not
horribly
challenging.
A
So
I
do
think
that
we'll
that
fairly
soon
we
would
have
the
capability
to
have
to
measure
would
have
the
capability
to
do
performance
test
based
on
grpc
and
that'll.
Be
interesting,
there'll
be
some
things
to
talk
about,
actually,
so
that,
having
said
all
of
that,
in
context
of
measuring
has
a
large
container
image
size
of
1.7
gigs
right
around
there.
A
That's
too
large
docker
has,
for
the
last
few
years,
had
multi-stage
docker
files,
where
you
can
have
one
stage
using
an
image
of
a
certain
size,
generally
much
thicker
and
fatter,
and
with
all
the
linux
kernel,
headers
and
all
the
stuff.
You
need
to
do
to
compile
things
and
then
put
those
outputs
of
that
compilation
into
what
was
generally
a
slimmer,
more
production
deployable
image.
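The multi-stage pattern looks roughly like this. An illustrative sketch, not Meshery's actual Dockerfile; the image tags and paths are placeholders:

```dockerfile
# Stage 1: fat build environment with the Go toolchain, headers, etc.
FROM golang:1.16 AS build
WORKDIR /src
COPY . .
RUN go build -o /out/server ./cmd/server

# Stage 2: slim runtime image; only the compiled artifact is copied in.
FROM alpine:3.14
COPY --from=build /out/server /usr/local/bin/server
ENTRYPOINT ["/usr/local/bin/server"]
```

Only the final stage ships, so the compilers and headers from the first stage never reach the published image.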
A
That final image is small. We've got to be missing something here; I'm not convinced that, as a project, we've done everything we can to optimize this. The contributors who've looked at this in the past have said, well, the image size needs to be that big, because you need to have the right type of environment to run these programs.
A
Please just interrupt me or say something in the chat if that intrigues you. There's a second way of characterizing this, a second way of kind of getting at the same goal of reducing the image size. So Meshery, well, SMP: Service Mesh Performance. Between Service Mesh Performance and Meshery, I can't think of an easier-to-remember reference. I'm going to pull up this page, and that is: there's a goal to do distributed performance management. (Something's going on at my house, sorry.)
A
The goal here is that we want to be able to take multiple instances of, well, Nighthawk in this case, deploy a copy over there in a container running on that cluster, a copy in a container running on that other cluster, and then, once we've got, you know, five of these things set up, Meshery tells all five to start hammering on this endpoint, or these ten endpoints, or however we want to configure that test.
A
Great. The fidelity by which you can measure and characterize performance from different vectors only increases, and that's going to be intriguing. And so it is desirable, while, to get people started, it's convenient that you can take Meshery and just have the load generator right there in the same server.
A
The second thing I would say is: let's assume that that is the case and there's nothing else to be done to reduce the size of the container image. I think it's untenable. I am surprised people don't complain; we haven't really gotten many complaints yet. But I would suggest that, out of the box, bundled into Meshery server, most people don't know and don't care about the differences between these three load generators. There are differences; there's a reason why we have three.
A
There are some performance nerds out there, or, if I can be so bold as to call Yevgeny a networking performance nerd, who might really care, and that's good, because that's why we're supporting those three.
A
So what I was going to say is that potentially it's not about giving people a choice in the way they're being given a choice today, where, when they run a performance test, they choose one of the three, but rather just to say: by default, it's this one, unless you want to do distributed load testing, in which case it becomes Nighthawk by default, or something like that.
A
Yeah. If the user were to say... let's say Meshery is connected to one cluster (this is just an example), and that Kubernetes cluster has 10 nodes. I don't think I have Meshery running right now, but if they were to check a box that says "deploy as a DaemonSet," or rather "deploy a copy to every node," then 10 copies of Nighthawk are deployed and generate 10 streams of load.
A
One copy of the Nighthawk pod per node occurs, so basically a DaemonSet. So, close to what you said, Ruturaj, but yes, through a manifest, Meshery would facilitate that deployment, and then over gRPC, which is what it's using today to talk to the Golang wrapper around Nighthawk, it would give Nighthawk the appropriate configuration to run.
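A one-pod-per-node load generator deployment is exactly what a Kubernetes DaemonSet expresses. A hedged sketch of such a manifest, with a hypothetical name and an illustrative image; Meshery's actual generated manifest may differ:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nighthawk-loadgen        # hypothetical name
spec:
  selector:
    matchLabels:
      app: nighthawk-loadgen
  template:
    metadata:
      labels:
        app: nighthawk-loadgen
    spec:
      containers:
        - name: nighthawk
          image: envoyproxy/nighthawk-dev:latest  # illustrative image tag
          ports:
            - containerPort: 8443               # assumed gRPC control port
```

The scheduler then keeps one pod on every eligible node, so a 10-node cluster yields 10 load-generating replicas without counting nodes by hand.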
D
Okay,
I
have
a
couple
of
questions
yeah,
so
1.6
gigabytes,
that's
the
size
of
the
image
or
the
program
that
that
you're
downloading.
So
there
are
two
aspects
to
it
right.
First
of
all,
you
have
the
demand
on
the
disk
space
right
and
the
other
is.
D
How
long
does
it
take
for
the
program
to
actually
start
running
because
of
the
amount
of
time
that
it
actually
takes
to
down
to
download
this
image?
There
are
two
things
to
this
to
this
right
yeah,
so
I
mean
all
this
stuff
might
all
be
simplistic,
but
so
so
the
thing
is:
is
it
possible
to
have
something
like
download
on
demand,
yeah.
A
Yeah, and to your point: let's say these two components are broken out into their own container, so that the overall container image here can slim down.
A
Yeah. Mr. Chatterjee, will you double-check that there is an open issue that calls for an evaluation of how we can slim the image down? The other issue that we should open (and we might have it open already) is basically one that also calls for supporting distributed performance tests. I think if you search for the word "distributed" in the issues, you might find it.
A
Suhani Agarwal and Vivansh Gupta, I think they're both taking a look at updating these. So if you want to learn service mesh while at the same time potentially working on the creation of these labs: it's fairly easy to create the labs, because you're just doing Markdown, maybe a couple of scripts or something, but you're mostly just defining what the steps are, what people should do. These screenshots are old; they need to be updated. Is anyone interested in collaborating there or working on those?
A
We
have
to
call
for
volunteers
in
meetings
like
this,
because
nobody
knows
that
there
are
issues
to
fix
and
so
yeah
sergio
shankar
that'd
be
great,
and
I
think
that
you
guys
are
well
positioned
to
help
because
and
learn.
At
the
same
time,
I
think
it
really.
A
Yeah,
please
do
pop
open
an
issue
or
two.
If
you
think
it's
an
issue,
it's
probably
an
issue,
even
if
it
isn't
an
issue,
that's
fine,
then
you'll
be
shown
that
it's
not
also.
We
should
probably
update
the
docs
that
show
that
you
know
so.
There's
a
gal
agarwal
and
I
believe
in
the
slack
community,
in
our
slack
community.
She
goes
by.
A
So she works there. Then also Devang, I believe that's his last name. These two folks, you'll find them in Slack. Please ping them, let them know, or just open up the issues and jump in. And again, if you open up an issue on something that isn't an issue, don't sweat it.
A
It's
not
just
the
updating,
the
finding
of
bugs
and
fixing
of
of
issues,
but
it's
also
the
notion
that
well
there's
only
how
many
labs
today
there's
not
enough,
there's
not
as
many
as
there
could
be
so
there's
one
two:
three,
four:
five
for
learning
five
different
types
of
service
meshes.
How
many
types
of
service
meshes
does
measure
support
ten?
Yes,
we
need
five
more
of
these.
If
we're
gonna
fully
explain
what
mescheri
does
by
way
of
supporting
different
service
meshes,
so
so
we're
missing
five,
that's
a
call
to
action
to
create
other
ones.
A
Well, you can do it using Meshery and SMI, and that's how this lab is described, but you can also do it in other ways. There's a bunch of labs to be created; that's my point. So if you all start to get active there, we'll start to make this discussion more consistent.
A
There's a lot of workflows that kick off every day, all day long, all night long, and that's good. But some of the workflows that we run on the projects kick off much more frequently than they need to. For our continuous integration we use GitHub workflows, GitHub Actions, and sometimes we go off and build the docs in a workflow when there have been no changes to the docs, or we compile Meshery's command-line client when there have been no changes there either. And so there's a call to action here.
A
So I think there's a help-wanted issue sitting right here. It's a good way to step into GitHub Actions if you haven't been around them before; we use them a lot.
C
Nothing to add, except I have pasted an example: we can ignore entire workflows, and we can also ignore paths for some of the little checks. Some of them are in the same workflow file, and that's why they run even when they should not. They have been included in the issue, but if you still have doubts, I'm always around; meshery-ci is the channel that you can reach out to. That's it.
C
So
this
is
about
linting
linting
code,
formatting
code.
Recently
we
I
mean
we
have
been
doing
this
stuff,
but
recently
there
was
a
pr
that
auto
formatted
a
lot
of
code
and
it
made
a
little
bit
messy
stuff.
So
we
also
recently
included
in
the
ui
some
pre-commit
hooks
that
would
automatically
format
the
code
whenever
commit
is
created,
and
I
propose
we
have
something
similar
for
back-end
code
as
well,
so
as
to
ensure
that
the
code
gets
minted,
because
it's
it's
always
like
people.
C
For
that,
even
I
forget
to
format
my
code
and
that
results
into
the
workflow
running
and
oh
yeah.
You
forgot
this,
then
I
have
to
edit
it
and
then
I
have
to
create
a
commit.
Sometimes
reviewers
have
to
point
out
that
see
this,
so
if
that
process
is
made
a
little
bit
offline,
that
would
be
helpful.
I
guess
so.
We
can
evaluate
some
of
the
perigamet
hooks
and
also
spread
them
to
various
reports
like
some
of
the
repositories.
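With the pre-commit framework, for example, a repo-level config can run the formatter before every commit lands. A sketch only: the hook repository and pinned revision below are one possible choice, not necessarily what the project would adopt:

```yaml
# .pre-commit-config.yaml -- runs on every `git commit` after
# `pre-commit install` has been executed once in the clone.
repos:
  - repo: https://github.com/dnephin/pre-commit-golang
    rev: v0.5.0          # example pin; use whatever revision the project standardizes on
    hooks:
      - id: go-fmt       # reformat staged Go files
      - id: go-vet       # basic static checks before the commit is created
```

Because the hook runs locally before the commit exists, the formatting feedback loop moves from CI (minutes later) to the developer's machine (immediately).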
A
One of the concerns that I had was: how are we ensuring that those who are contributing get their environment configured such that they're using the same number of spaces, the same format, that the lint check is going to look for?
C
So,
a
little
bit
I
have
read
on
it
and
ids
and
rich
text.
Editors,
like
news,
visual
studio,
do
come
with
support
for
plugin
that
automatically
trigger
the
predicament
hooks
like
they
have
get
integration
in
them
as
well.
So
when
you
create
a
git
commit
using
the
editor,
it
will
automatically
trigger
it.
So
yeah
some
of
the
editors
have
it.
A
Having common configuration is nice because this just happens automatically, irrespective of what IDE they're using and what linter they might be using; it becomes common. So yeah.
A
You
can
tell
it
it
becomes
really
difficult
like
without
these
types
of
pre-committed
hooks
and
enforcing
a
common
format,
it
becomes
really
difficult
on
maintainers
to
train
their
eyes
to
what
is
actually
changed,
because
in
fact,
in
this
5
000
line
change
in
this
one.
Pr,
no
nothing
material
changed
just
spaces
and
line
returns,
and
you
can
imagine
how
long
it
takes
to
look
at
that
for
a
human
when
actually
there's
nothing
to
review.
There's
conceptually
nothing
to
review
so
yeah
having
pre-commit
hooks
and
having
well-defined
or
commonly
defined
formatting
helps
anybody.
A
Were
you
asking
for
volunteers
on
that
re-rocks
for
people
who
might
want
to
go,
evaluate
and
suggest.
A
Nice,
so
so,
as
we
go
to
kind
of
wind
down
the
call
you
have
getty,
if
you
don't
mind,
you
know
hey,
please,
please
that
you
join
today.
Thanks
for
jumping
in
sounds,
like
you
know,
the
focus
of
a
couple
of
the
projects
really
falls,
quite
in
line
with
the
things
that
your
your
concerns
of
things
that
you
were
looking
at
and
curious.
Have
you
attempted
to
use
mescheri
to
help
answer
part
of
the
questions
that
you
have.
B
I
not
yet
I
just
investigate
in
some
variants
to
run
a
nighthawk
eso,
oh
from
ito,
and
I
just
it's
not
much
information
how
it's
need
to
be
run.
Some.
I
found
only
simple
diagrams
that
how
it's
started
my
main
processes,
how
it's
started.
Other
processes
and
I
didn't
have
a
clear
picture.
What
need
to
be
done?
Not
not
not
how,
but
at
least
what
exactly,
because
I
have
some.
I
have
read
some
articles
about
comparing
some
cni
plugins.
I
have
read
some.
B
I
have
some
measurements
in
my
case,
but
I
didn't
have
a
clear
period.
I
understand
it's
typical
graphic
generator,
a
typical
client,
it's
a
typical
point,
it's
easy,
but
how
it's
work
for
grpc?
How
client
needs
to
be
work
with
grpc
it?
How,
when
do
I
need
some
how
to
put
certificate
on
it
or
I
need
to
generate
it
myself
or
I
can
use
some
external
ca
for
this
or
how
I
need
to
measure
all
this.
I
mean
it's
not
clear
for
me.
A
Nice
yeah
makes
sense.
Well,
there's
two
projects
there
I
mean
there's
three
projects,
really
that
might
you
might
find
are
significantly
helpful
to
you.
I
hope
one
of
those
is
a
very
early
project
and
it
it's
not
overly
helpful
at
the
moment,
but
the
project
is
called
get
nighthawk
and
it's
exactly
to
answer
part
of
what
you
were
just
saying,
which
is
like
you
want
to
use
nighthawk.
But
what
does
it
do?
How
do
you
use
it?
A
This project is about standardizing some of the other aspects of what you're saying, which is like, hey, what should I be measuring here? What should I be expecting? If mTLS is turned on, how much overhead would I generally expect? Does it make a difference if I use the built-in CA or an external CA, and how does that affect performance, or not? There are many of those types of questions that are intended to be answered in this project.
B
Yeah, I have read some RFCs, took a quick look through an RFC about network performance measurements. It's really not new. They use three kinds of packets, three kinds of payload: small packets, big packets, and mixed packets. For example, from my experience, when we were selecting some hardware for some task, we also analyzed the performance measurements from the vendor, and when we checked whether they were true or not, we also used different tests.

And we expect that if this vendor writes these things, it can deviate from the truth; it's more or less true. And with this vendor, I know that he measures with small packets. It could be true for some types of payload, because HTTP is not the only possible payload for network performance.
A
Those are all good points, yep. I encourage you, if you can, to join the next SMP meeting, the next Service Mesh Performance meeting, because that's part of what that project is trying to do: identify and espouse some of those testing practices, and publish some vendor-agnostic research about, you know, whether you're doing it right, whether your infrastructure is running well or not, how it compares, and so on.
A
The
the
meshery
project
is
tooling
that
implements
those
types
of
practices,
so
between
get
nighthawk
service,
mesh
performance
and
mesh,
they're,
all
intentionally
intertwined
and
so
yeah
yeah.
So
we
were
talking
about
measuring
running
these
performance
load.
Generators
mesherie
is
going
to
invest
most
heavily
in
nighthawk
going
forth,
and
if
you
haven't,
I
mean
there's
an
example
of
kind
of
what
the
ui
looks
like
relatively
recently,
the
mesher
ui
here
out.
A
So
you
can
create
different
performance
test
profiles.
Save
them
run
them
look
at
analysis.
The
characterization
between
the
differences
of
those
tests,
eventually,
there's
a
calendar
that
you'll
see
and
then
it
exports
the
test
in
s
p
format,
so
that
eventually,
that
becomes
hopefully
somewhat
common.
B
One small question about those measurements; maybe I don't catch the idea. When I measure something today, I have some environment: I have a kernel version, I have a driver version, I have a firmware version, I have a network configuration, and so on. Is it somehow...?
A
Yeah, you're hitting it right on the head. I'll put words in your mouth and say: if you don't capture some of these very specific things, then there's no frame of reference; the tests kind of don't... well, maybe they mean something to you, because you would be running the same exact environment over and over and over again, but once you change that, and once you try to compare your results to anyone else's, there's no frame of reference. They're pretty much meaningless, because you could have had, you know, a 100-CPU system versus a one-CPU system, and so there's no comparing. So, to your point, yeah.
A
Okay, then what are the specifics? Let me jump to... so this project, Service Mesh Performance, is a specification project, which is to say it is not an implementation; Meshery is the implementation of SMP. SMP, as a spec, right now boils down to about three different protobuf files, three protos to capture different things.
A
I'll take a look at this one first, just because I think it's simple enough: when Service Mesh Performance is looking across different service meshes, it needs a way of universally saying "the service mesh that you're testing against is this one." So, good, it captures that as part of its spec.
A
How many CPUs, how much memory, a bunch of node metrics that are quite important to capture and meaningful as you go to discern, go to learn things about the way that your systems are performing. That's what this particular proto is about, and I'll submit to you that it is incomplete.
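For a sense of shape only, an environment-capture proto could look something like the following. This is a hypothetical fragment with made-up field names, not the actual SMP definition; the real protos live in the service-mesh-performance repository:

```protobuf
// Illustrative only -- field names and numbering are hypothetical.
message NodeEnvironment {
  string kernel_version    = 1;  // e.g. "5.4.0-80-generic"
  uint32 cpu_cores         = 2;
  uint64 memory_bytes      = 3;
  string container_runtime = 4;
  // ...the real spec tracks many more node metrics than this sketch
}
```

The point of encoding the environment alongside the results is exactly the frame-of-reference problem discussed above: two runs are only comparable when this context is recorded with them.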
A
The difference between CPU sockets and CPU cores, for example; or maybe it's fine because you're using millicores, so it doesn't really matter, and that's okay as a universal measurement, or not. So yeah, the short answer is: it's supposed to be tracked here, and I hope that it covers the info that you need, but that type of feedback is very helpful.
B
First,
first
in
just
a
quick
look
through,
I
for
me
is,
for
example,
that
core
version-
if
we
can't,
for
example,
we
have
no,
no,
no,
no,
not
not
only
chorus
for
socket
or
coursework,
but,
for
example,
I
have
when
I
not
in
this
job
by
the
previous
job.
When
I
select,
I
have
a
heavy
task,
it
starts
from
start
for
to
end
it
takes
14
hours
yeah.
B
I
want.
I
push
button
and
jenkins.
I
want
result
in
the
result.
I
get
14
hours
and
that
is
why
every
percent
for
improvement-
it's
really
important,
because
it's
it's
a
bunch
of
minutes,
yeah
yeah,
and
that
is
why
I
try
to
analyze
a
compare
his
independent
from
vendor
tools,
different
processor,
for
example,
or
I
build
raid
from
ssd
disk
yeah,
or
I
built
this-
this
type
ssd
with
cache
or
without
cache
and
so
on,
and
I
it's
very
hard
for
me
to
build.
B
Have
some
very
simple
tests,
build
kernel,
build
kernel
time,
build
a
wasp,
android
time
or
build
something
like
this.
It's
very
generic,
but
for
this,
for
this
speed
is
important.
Quark
type
is
it
in
the
processor
or
this
is
intel
processor,
for
that
is
this.
If
you
have
cloud
you
didn't
see
these
details,
even
maybe
you
have
some
platform
setting.
B
You
can
check
it
from
platform
machines
for
google
if
you
have
an
one
or
and
tumor
type
and
so
on,
and
when
you
have
this
zippo
core
version,
I
don't
remember
linux
response
you
can
and
also,
if
you
select,
for
example,
pneuma
if
you
have
two
cores
and
128
gigabytes
total
and
you
have
64.
A
I'm curious about that same Intel work; I'm curious what that link is, or what that project is. Was it part of the last ServiceMeshCon?
A
Good, because, okay, yeah, part of the group that is pushing this specification forward is Intel, actually; there are a few folks involved. And yeah, I don't know, I mean, this is a great thing to suggest to the group, and something to reflect on. The project has always tried to straddle the line between being overly detailed and encompassing all of what the world is and can be described as.
A
Right, right. So I get that concept of baselining and then just, you know, tracking over time, the change plus or minus. Absolutely. So, I don't know if this is going to... let's see. Yeah, that's one of the things that SMP and Meshery attempt to do kind of out of the box. Now, if we look at this, it will show us, briefly, that you can run a test...
A
And
you
can
then
compare
you
can
yeah
hold
on.
This
is
pretty
you
can
select
two
of
them
and
compare
their
difference,
so
you
can
run
the
the
same
test
same
environment
and
in
a
in
a
human
manual
way,
look
at
whether
you're
looking
at
it
visually
in
a
graph
or
you're.
Looking
at
it
in
tabular
data
like
under
the
same
circumstances,
is
my
p50
or
my
p99
or
whatever
you
want
to
whether
it's
my
latency
or
my
throughput
are
either
the
two
of
those
inc.
A
You
know
improving
or
regressing
it
directly
tries
to.
You
know,
help
in
that
regard.
Ultimately,
there's
enhancement,
that's
enhancements
that
we'd
like
to
make
that
it
would
have.
You
know
simple
anomaly:
detection
in
there.
That
would
say:
hey!
Look.
We
think
that
you're
regressing
or
even
if
it's
like
stupid
detection,
where
it's
just
like
you
know,
this
number
is
less
than
that
one.
A
So yeah, this is really good. I'm going to click submit on this one enhancement. And, Yevgeny, to the extent that you end up hanging around, which would be fantastic if you do, I'll have to introduce you to some of those Intel folks. If you like, feel free to make a comment, or drop your GitHub username on here so you can watch it if you want, or join the call.
A
This is great, it's good. Yevgeny, I don't think it would be difficult for you to identify other areas of additional detail that are quite reasonable, and reasonable because they are meaningful, like the difference between these two, as an example. As for how easy it is to get to some of that: some of it may be super easy. Part of what the project leans on, well, what Meshery, as the implementation, leans on, is the type of data that's available from the Prometheus node exporter.
A
So
I
don't
know
if
you're,
if
you're,
but
you
know
any
of
the
metrics,
that
this
node
exporter
tracks
is
probably
a
fair
game
immediately.
So.
B
For
example,
if
you
want
to
measure
m2
changes
for
network
interfaces
or
for
calico,
or
something
like
this
or
just
put,
for
example,
m2
butterflies
are
for
ipvs
infiniband
somebody
have
it.
B
For
example,
I
don't
didn't
see
net
start.
No,
maybe,
for
example,
when
you
try
to
make
little
speed
up
for
your
network,
usually
developers
start
from
m2
and
mss
yeah
and
for,
for
example,
they
have
standard
tempo
or
have
m2
from
10
gigabits,
usually
used,
and
so
on,
maybe
biggest
one
or
how
it's
other
details
not
very
often
used,
for
example,
thomas
imperial
affinity
or
subsequences,
but
maybe
how
many
rules
we
have
in
calico
how
many
rules
we
have
in
some
other
nether
right,
yeah
right.
Maybe
if
you
have
10
rules?
A
Okay, yeah, that's a good one. I was going to say, this is kind of part of where we struggled in the past: okay, so should we have something that first-class captures the number of Calico rules? And what if they're running any of the other CNIs? Okay, fine, well, maybe we need to integrate...
A
You
know
more
more
directly
or
point
to
a
cni,
config
and-
and
I
think
that
that
in
some
respects
becomes
ideal,
that
you
can't
that
we
would
point
to
other
common
specifications
and
say:
will
s
p
will
either
refer
to
won't
first
class
like
like
first
class?
What
it
would
track
is
a
pointer
to
the
cni
config
and
it
and
and
whether
or
not
you
know
that
is
present.
You
know
so
so
in
terms
of
like
the
spec
itself
would
be.
A
What's
the
right
language
just
would
be
optional
would
be
a
point
of
an
extension.
A
But
it
becomes
yeah,
I
mean.
That's,
that's
a
question.
People
want
to
answer.
It's
like
you
know,
leaving
everything
else
the
same.
If
I
tack
on
a
hundred
and
first
rule
in
calico,
what's
gonna,
you
know
what
impact
does
that
have
so
yeah
consideration?
Accessibility.
I
think
there's
kind
of
two
things
in
there.
One
is
considerations
of
accessibility,
but
the
other
one
is
potentially
things
to
do
specifically
around
cni,
which
is
highly
relevant.
A
So
I
have
to
figure
out
how
to
phrase
that,
but
but
yeah
guinea
also
like
it,
it
is
even
more
powerful.
Actually,
if
you
submit
the
issue
because,
okay,
because
people
are
tired
of
listening
to
me,
talk
if
you
can't
tell
by
aditya's
face
you
have
getty
fantastic,
we
should
we
should
conclude,
because
we
have
this
nasty
tradition
of
going
spilling
over
time
which,
which
I
think
we
are
and
so
nice
to
meet
you.
It
was
a
really
good
discussion.
I'm
excited
about.
A
Yeah, and so, oh...
A
Nice. All right, we'll chat with you all very soon. Oh, Adina, we missed what you said, so I wasn't being rude; I think you are on mute.