From YouTube: Layer5 Community Meeting (May 7th, 2021)
A
Hello, good morning. It's about three after. Take a moment, if you would, to drop your name into the attendees list and get credit for being here; otherwise it doesn't count. We'll start in two minutes. One thing, Anita: yes, it is deceiving, but there are, I don't know, 25 people that have joined this week, or maybe 20.
A
They're a bit difficult to find; Slack kind of hides them. But Anita, in the community management channel you'll find a message saying that a person has joined. So if we go all the way back to last Friday, all the way back to seven whole days ago, then we'll find some more.
A
Also, Anita, we're going to have to celebrate your winning of the pageantry, so we'll have to.
A
All right, fair enough. Well, we're five after. It's Friday the seventh of May, the last day of KubeCon + CloudNativeCon EU Virtual 2021; I think that's the full title of the conference. By way of jumping into the meeting and into the announcements:
A
There was a talk given earlier today that Adina can tell you about, of which she'll probably harass me for not being at the talk, but Adina was there making sure that people are aware of your projects, because today's talk was essentially almost entirely on your projects.
A
I encourage you to review it and to be social with it; go share with the rest of the world what you're doing. I certainly try to. As a recap, most of you are familiar with this: some of you come to the CNCF Service Mesh Working Group and present on some of the initiatives that you're working on, because those initiatives are one and the same.
A
So let me take a quick look at who's on. We've got about 20 people on, and other than... Sameer and Gaurav, I know that we've spoken before, and you might have been on at least one or two calls before, but I don't know that either of you have been on this call, the community call. Have you? And Neil as well.
A
Nice, yep. So here's the trick, or here's the hidden sneaky secret: in the community there's a meeting each day on different topics, and this is the community meeting; we meet once a week. This is the broadest set of topics that we discuss.
A
Your mileage may vary from week to week. Fortunately, Michael Gfeller is on today's agenda, so we'll have an enlightened discussion. But the point is, not everyone shows up to all the meetings, and how could they? Not all the meetings are of interest to them, and they don't always have the time. The point is, as a new member, you get to introduce yourself on each of those calls so that everybody gets to know you. So, Sameer.
F
So hi, everybody, I hope I'm audible. (You are, yeah.) So, I am Sameer. I'm basically working as a DevOps engineer for one of the product firms in Bangalore. I have been working on the implementation of our infrastructure, along with designing and architecting the services for our company. I joined this Layer5 community about one week back, and I really find it interesting; I hope to contribute and to learn.
A
Oh, that's it? Yeah, Sameer, that couldn't have been more perfect. You have had a fair bit of experience with this type of infrastructure; you're here to both share and learn; you're here to contribute, which you already are, and we might even talk about some of that today; and you're here to use the tools as well. So what else? What else can we have you doing?
A
I might share a slide on this. So, Nikhil and Vinayak and a number of people here are going to expand how much knowledge we try to share on the layer5.io site, and eventually that would probably form a certification program for those that are quite meshy, and maybe we'll do that in partnership with the Linux Foundation.
A
Time will tell. As Meshery goes into the CNCF, we might base the certification exam around Meshery, which would work really well, and then we would partner with the Linux Foundation. The thing is, we wouldn't be able to call the certification Mesh Master, and I so desperately have wanted to have Mesh Mates and Mesh Masters and any other portmanteau, like MeshMark that Adina is talking about; any portmanteau that we can squish "mesh" into is good.
A
So sorry, anyway, there are other roles. That said, Sameer, I'm not sure if one person can take on more than you already are. It's nice to have you.
F
Yeah, I can. And, you know, the Istio book, which I have been going through: it's really good, but the thing which I felt is that the paragraphs are quite long, so they could have been shortened, or maybe the punctuation makes it a bit harder to understand. But yeah, it's really good. So also I...
A
Sorry, I chuckled, because the book that Sameer's referring to is the Istio: Up and Running book, of which, when I had embarked on it, I thought I was going to write a third. A co-author left, so I ended up writing two-thirds of it, and yeah, if you can't tell, I get long in the mouth and long in the writing. I'm working on that; the editors are trying to help me be more concise, direct, straightforward, shorter.
D
So, I am Neil. I'm currently in my third year of undergrad, studying computer science, doing engineering in computer science. I wanted to contribute to a CNCF project which had some part of JavaScript as well as Go. I wanted to shift to Go as well, learn a bit, because I wanted to start my journey in backend engineering, basically, and I wanted to also learn about Kubernetes, the Kubernetes ecosystem, service meshes, etc.
A
Promising. It's a love story in the making, Neil. It's good to have you. Very nice to have you, Gaurav, Mr. Chatter.
E
Well, yeah, I haven't even properly... before I go, yep. So hello, everyone, my name is Gaurav Seda. I'm currently a sophomore pursuing a B.Tech in computer science in India. This is my initial learning stage, so I am learning very much from the Layer5 community, and I have learned quite a few things. Currently I have decent skills in front-end development and UX/UI design.
A
Funny, that's beautiful, as a matter of fact. What a sweet thing to have: Sameer introducing at the same time that Neil and Gaurav are introducing. Sameer's got a lot; there's a lot to be shared back and forth. The more questions that Gaurav asks, the more that Sameer will learn as he goes to share and instruct and teach, and vice versa. So it's a beautiful thing.
A
And so, back to the agenda. The reason that I interrupted myself was to say we had a talk earlier today, and by "we" (even though some of you might think that phrasing's odd, there were two overweight, pale-looking guys talking), what we were able to talk about was all the things that you all do, and that's actually why we got to talk.
A
That's why we had stuff to even walk through, and so I really do encourage all of you, no matter how long you've been hanging around, to take a look at the link that's in the community meeting minutes. It'll take you to this deck, to peruse it and soak it in, and to take some pride in seeing the fact that your work is on stage at KubeCon.
A
That's not the ultimate goal, or the end-all be-all, but it sure does feel good, and there's a lot of folks that show up. Adina, I wasn't able to make this talk. Were you able to see how many? Yeah.
B
It's just that I went to some other talks after that, and let's say you know how to present, to make it simple when you present complicated things. So yeah, it was a nice presentation.
A
That's good, yeah. The thing is, it's kind of an interesting thought that many of you are here learning a lot of this, and so am I, well in advance of much of the rest of the world. Again, I think when I say things like this, some of you might be thinking, you know, what are you talking about? How could I be out ahead of, in front of, the rest of the world?
A
But the reality is that that's true. I can't find the screen to share, but the reality is that it's true that most folks have yet to make it this far in their journeys. So you're here, and there will be a long, long time of helping people learn and learning at the same time. It's nice. So, okay, that was... actually, we are up to our first topic. And by the way, did everyone...?
A
Oh, oh boy. Yes, dude, I hit it! Hello, sir, hello. Good, good. I didn't see you. I've been practicing your name too; somebody's been helping us.
G
So hey, everyone, my name is Singh, and I'm currently a junior undergrad at Guru Tegh Bahadur Institute of Technology, a college of IP University, India. I'm currently majoring in information technology, I'm passionate about web development, and I have worked with the MERN stack and MEAN stack and acquired skills for the same.
G
I also have an interest in the DevOps field and in building cloud-native applications; those are my future goals, and I'm looking forward to being a prominent part of the Layer5 community and contributing every day.
A
Well, I'm not going to embarrass myself again by saying your first name, but it's very nice that you're here. You're already hitting your four-gigabyte limit, I think, on your laptop, which means that you're already into the thick of things.
A
Nice. By the way, any .NET Core in there for you? You know what, I will type it out. Have you gotten a chance to work with .NET Core?
A
Okay,
cool
all
right,
I'll,
I'll
type,
it
out
and
chat
to
you,
but
very
good
yeah.
Look
nice
thanks
for
coming
nice
too!
Well,
all
right,
michael!
You!
Please
take
it
away.
Do
you
want
to
share
it?
You
want
me
to
leave
this
up.
J
Yes, thanks. So I'll talk a little bit about the messaging system and notification center from a very high-level perspective, and then something very concrete around error codes in Golang code. At a very high level, the messaging system and notification center is about this: all the components in Meshery produce and emit various events, and these events are interesting to other components; in order to handle this stream of events, there will be a messaging system and notification center.
J
As far as I know, there are a couple of basic implementations that have been done, but mainly this is still in the design phase.
J
Something that I've been involved in recently was a discussion about how the actual events, the data structure of the events, should be structured, and we decided to leverage something called CloudEvents. I'm not quite sure whether it's a standard yet, but the point is that it should provide a unified format for events that various event providers and implementations support, so that it's easier to exchange events between different messaging systems.
J
So we have been looking at the format of these CloudEvents. The event is what is emitted, and then an event is packaged into a message, which is sent through a messaging system and can be picked up by interested parties.
J
Say the adapter component is Consul: the source of an event, an actual field in the event, would be a URN like "meshery adapter consul". That's a suggestion that we can discuss, of course. And then there is the version of the CloudEvents specification.
J
Then there is a type, which will be one of the Meshery event types. We have different types of events, for instance a log event and an error event (we'll be looking a little bit closer at error events), and then we have an attribute, data, which contains the actual payload of the event.
J
It's also possible to specify a schema, or point to a schema, for the data payload; a timestamp for when the event was generated; a subject, which is optional and can be used to group events; and a correlation id, which is also optional. It is possible; CloudEvents opens up for that.
J
Now, if we look a little bit closer at error events: we have all seen them in the code. We have specified a format in MeshKit for error codes, with specific attributes. There's a code, which I'll talk a little bit more about later, which uniquely identifies a specific error within a specific component.
J
Now, you see that the attribute names are all lowercase, with no hyphens and no underscores; that's part of the CloudEvents specification, the naming convention for attributes, which makes interoperability easier. Now, the error code is just a number, just an integer. It carries no semantics.
J
One
of
the
reasons
why
we
sort
of
decided
doing.
That
is
that
if
you
have
semantics
in
codes,
then
everybody
has
to
be
aware
of
that,
and
sooner
or
later
there's
a
chance
that
someone
doesn't
know
that
and
then
use
it
uses
or
uses
an
error
code
in
the
wrong
way
or
attaches
a
different
semantics
to
it.
So
the
decision
is
or
the
suggestion
is
no
semantics.
J
Now, one important thing: okay, we emit errors with error codes which carry useful information, and of course the user would like to have some more information about this; they would like to have a reference where they can go and figure out what the error code means, because in the user-facing interface we may not expose all this information, but just the error code, maybe the severity, and the short description.
J
So we would like to have an error code reference on the Meshery homepage, where the user can go and look up this error code. As you can see here, this is sort of an initial design story for how this can look. There are not so many error codes defined yet, and as far as I could see, none of the error codes actually have information.
J
So this is one of the things a new utility that we're working on will handle: it makes sure that, within a component, error codes do not overlap, and that error codes are actually incremented and assigned automatically in the source code when new error codes are defined. There's also the question of where the master data lives: where do you actually maintain and specify all the descriptions, probable causes, and suggested remediations?
J
It should be close to the code, and it should be easy to find and easy to remember; one should intuitively understand where to maintain this, and the most sensible place is in the code. So the question is, then: how does it get from the Golang code onto the reference pages on the website?
J
So, in the code there are variables, with a convention that they begin with "Err", where the error codes (actually integers) are defined and then used in other places in the code. You see here that this is not an integer; this is a placeholder, and the tool will find all these placeholders, determine the next code as an integer, and replace each placeholder with it.
J
You
can
also
see
that
one
method
that
is
going
to
be
deprecated
from
the
meshkit
airs
package.
Then
the
new
default
should
be
recognized
by
this
tool
and
report
it
so
that
we
know
this
is
deprecated
and
should
be
replaced
by
a
constructor
method
or
a
new
method
which
will
contain
all
the
extra
information
that
I
was
mentioning
earlier.
J
So, when walking through the tree, this method here will look at one node and test whether that node corresponds to a call expression of a function called NewDefault; if so, it will report that it actually corresponds to this, and then you can create a log entry with that.
J
Now, this tool finds all these nodes and analyzes whether each is a literal value or a call expression, because all these error code constants should be literal values, integers, and not call expressions that call other functions returning an error code. The tool then summarizes that and helps to make the handling of the error codes consistent and uniform.
J
What remains: the tool is already working and replacing the codes, but extracting the extra information remains to be implemented, and that's what I've been working on recently and will continue to work on.
A
Quick question. You might have just said this and I might have just missed it, but once the errorutil program has gone through and identified nodes within the various Meshery components, the code within the various Meshery components, and it's found that, hey, there's a new error code that needs to be defined: is that the JSON that we're looking at here?
J
That's a good question. So this is the export that will be imported by the code that Alonso wrote, right? So whether this is persisted and committed, or whether it is uploaded to some other place...
J
...we have to figure that out. Right now there's no plan so far, but if we want to automate it, we have to discuss it. I don't know: uploading it, or maybe directly committing it to Meshery could be a possibility, but this is something we have to figure out. Right now I was just giving it to him manually, so to speak.
A
Yeah, you know, as you think on that, there's prior art, something to be aware of, with respect to release notes. Each time we make a Meshery release, there's a certain number of... there's the changelog.
A
If
you
were
the
release,
note
well,
the
there's
been
it
took
us
about
a
year,
but
we
finally
automated
the
publish
the
the
publishing
of
those
release
notes
as
and
when
we
make
a
release
it
just
auto,
updates
the
mesh,
redox
and
so
yeah.
I
think
both
not
only
is
there
prior
art
in
terms
of
like
what
the
github
workflow
might
look
like,
but
also
in
terms
of
just
the
project
being
comfortable
with
that
level
of
automation.
A
Let
me
let
me
rephrase
this
last
question
that
I
asked
to
just
make
sure
that
that
is
and
this
just
maybe
this
isn't
yet
decided,
but
so
in
each
of
mystery's
components
of
which
or
that
that
use
the
used
mesh
kit,
you
know,
has
a
consistent,
uniform
set
of
you
know:
utilities
like
the
air
utility.
A
So
it's
a
message
server
at
some
point.
It
will
it
sort
of
predates
mesh
kit,
so
it
only
uses
so
much
of
mesh
kit
today,
but
it
will
eventually
sort
of
you
know
fall
in
line
so
to
speak,
the
mesh
readapters
they
all
use.
Well,
I
think,
as
of
like
yesterday,
maybe
they
all
use
or
they're
about
to
all
use,
mash
kit
and
then
the
whoops
I
got
to
the
middle
finger
and
then
they
all
use.
A
I'm sorry. Meshery Operator is another example of a component that benefits from this. And so my question is, or I guess I'll say it like this: my understanding, and this doesn't have to be the case, I was stating my understanding, and you correct me on what this is.
A
My
understanding
was
that
there's
an
error
folder
each
of
those
components
that
use
mesh
kit
that
they
end
up
having
like
an
errors,
folder-
and
it
might
be
within
that
folder-
that
the
definition,
the
ongoing
definition
of
errors
that
might
be
produced
by
that
component-
would
reside
local
to
the
component.
J
Okay, if I understand it correctly: here you see, on the left-hand side, there is a package called database, and then you would have an errors.go file here that defines these codes, along with functions that create and return these errors; and likewise, for instance, in the config package.
J
You also have an errors.go file there that does exactly the same. The tool goes through the whole tree and actually looks at all files right now, though of course this can be filtered, and the util is just a binary that you would then use, either built independently or in the CI/CD pipelines, to go through and do the analysis.
L
Modern languages, like C++, C#, and all those, don't give you the choice: if you don't handle a runtime error, it's going to throw that error anyway, right? So what are the types of errors that are being handled here, and what is the mechanism for ensuring that you're always aware of the error? I was wondering, you know, if, in this...
J
Maybe Abhishek wants to comment on that, if he's on the call.
F
Yeah, I mean, I didn't clearly understand what the concern was, but I had a different concern, about the exports. Basically: where, or which, is the central place where you define all those English words, filling in the probable cause and remediations, for example? Basically, these are all independent of...
F
It was clear when you asked whether it was clear, okay, cool. So what I was asking is: where are the English words placed, for example the suggested remediation and the probable cause? Are all of these just a fill-in after you code? So where are we planning to place...
J
...these? So, are you asking where we are planning to actually write the detailed information? Yeah, we would write it here, right.
J
Actually, I think we discussed this and landed on doing it in the code.
J
If,
if
we
just
fill
it
in
in
the
in
in
the
export
afterwards,
then
we
have
to
make
there's
more
housekeeping
to
do
there's
more
especially
yeah,
especially
when
you
have
now,
if
you,
if
your
error
code
is-
and
here
here,
you
actually
see
that
this-
this
is
actually
a
call
expression.
So
this
this
one
is
not
the
way
it
should
be
right.
J
If,
if
you
call
this
like
test
right,
replace
me
replace
me
right
and
using
this
in
this
hole
here,
I'm
just
making
this
up
like
return
return.
J
Then you can start writing all these descriptions that we don't have yet, of course, and the severity as well.
F
So basically, it's easier for the developer.
J
I think it's easier, yeah, because if we don't do this, then we have to do the replacement first, right, export it, and only then do we know which number it is and can start filling in the information. So I think it's easier to handle, and it's closer to the code as well. Yeah.
J
Yeah, yes, time's almost up; I can just demonstrate that it actually works, I hope. Still in the database errors, you'll see that it actually replaced the placeholders with actual values. And one thing that I didn't mention was that every repository will get a file, component_info.json, where the name of the component and the type of the component are specified, and also the next error code that is to be used; this is then updated by the tool.
A
If that's okay... you dropped the mic there, yeah. Shoot, I had one more. So, Abhishek, I don't think we... we probably don't have time. How long is your demo on Nighthawk, five minutes or less?
A
Okay, then I'll save it; I'll Slack you a question or two, Michael.
F
Sure, so should I go, or...
F
So I'll be talking about the Nighthawk integration that went ahead, and actually this is in regards to the project getnighthawk. I'll just post the link to the repository of this project and the documentation, just a second.
F
So basically, what we have achieved is that we've got this Nighthawk tool, which is a load testing tool that does performance tests, and what we've done is integrate the tool with Meshery, to do performance tests from within Meshery and manipulate the different values in the UI. I'll just quickly share my screen to show a short demo of how it looks. I've got Meshery running locally.
F
When you navigate to Performance, you've got this performance testing dashboard, in which you can see different profiles that you can create in order to do a load test. I had created a test profile called "test" which basically hits this particular endpoint; it generates load with the load generator Nighthawk, and it does it for five seconds. All the other specifications are mentioned inside.
F
So I'm going to hit that to see how the load testing goes. If you hit Run Test, as you can see, it has started to put load on the server, and after it is done, it brings back the results. It says no error, which means all of the requests were executed properly, and yeah, this histogram describes the RPS, the number of requests. So basically, this demo was in regards to how Nighthawk as a load generator...
F
So, are you asking about the endpoint that I performed the load test against? Yes. What is this endpoint? This is just a local server, a simple Python server I started locally to test.
A
Well, yeah, and what Sameer's really asking is: hey, of the functionality offered within Meshery for performance management, what can it be used against? Just the ingress? Just the service? Something off the mesh? Ten things at once? Only one thing at once?
F
Yes. So, this kind of test I can do with, say, Tomcat load testing; why should I go for the integrated load test tool which you have in Meshery? Yeah, so basically, what we can do in here is run some sequenced tests: we can define different profiles, or define different endpoints, and run parallel tests among them, and then compare the results with the graph that you get in here; basically, collate and compare the performances.
F
What
I'm
saying
is
that
you
can
compare
the
performance
of
different
service
measures
by
performing
like
by
using
this
dashboard
yeah.
So
that
is
my
question.
So
this
performance
graph
is
the
performance
of
service
base
or
my
end
server
or
my
kubernetes.
F
This
is
basically,
I
had
created
a
local
server
which,
on
which
I
did
performance
test
against,
and
this
is
just
for
demo.
So
if
you
have
some
ingress
or
or
a
server
switch
running
inside
cluster,
you
can
just
replace
this
endpoint
with
that
particular
ingress
and
then
like
define
your
own
parameters
to
start
testing.
F
So basically, how can I know how much time it takes to pass through the ingress, or the Istio service mesh, to reach my server? Where is that information? Because the other information which is showing here, I can get with JMeter by doing a test. So where can I see, okay, this is the performance? This is more of a graphical representation than when I see it in a CLI, but what is the speciality about this right now?
A
Yeah, Sameer, those are great questions, as a matter of fact; I appreciate the way in which they're phrased, which is direct. It's like: hey, I've got a load testing tool; why do I need this one? And it's a great question. In part, Abhishek was speaking to it; there are some more immediate answers that I think will provide some compelling clarity. Hopefully they will provide clarity; hopefully they will be compelling. I'll try.
A
Abhishek, if you don't mind, you might bring your screen back up; there are two things that I think will hopefully be the aha moment for you. To directly answer the specific thing that you're currently chasing after, Sameer: one part of the answer is that there will be better tooling in the future. The best tool to answer your question is probably distributed tracing, and Meshery will at some point incorporate that telemetric signal; today it mostly focuses on metrics.
A
That
said,
you
can
still
arrive
at
an
understanding
as
to
what
the
different
differences
between
the
the
time
spent
in
the
mesh
like
how
how
much
of
the
millisecond?
How
much
of
this
response,
how
much
the
overhead
and
well
it's
milliseconds.
A
It's
time,
memory
and
cpu
like
how
much
of
that
is
going
to
the
mesh
and
what
it's
trying
to
do
versus
just
raw
right.
You
know
right
to
my
app
so
a
couple
of
ways,
so
abby
check,
if
you
close
this
one,
is
that
what
you
measuring
facilitates
running
a
couple
of
different
tests
and
comparing
them.
So
so
you
can
deploy
your
app
off
the
mesh
to
use
meshery
to
hammer
on
your
app
with
a
defined
test.
Set
I
mean
you'll
persist,
those
results.
You
can
see
a
very
similar
graph.
A
Then you can go over, and Meshery will let you deploy the mesh, deploy your app on the mesh, and then take that same exact test, the same parameters, and hammer on your app while it's on the mesh. And then Meshery will... if you don't mind, Abhishek, go to, like, the test with ten test results, or one that has more than two.
So when you look at the tabular view here, under View Results (I mean, we'll improve this user experience so it's more clear)...
A
...if you select two of these, let's say those first two, they were fairly similar tests (I don't know if they actually were), and then if you compare them, in the upper right-hand corner there's a Compare. A bad example, but what you're seeing is that, in a color-coded way, one test's results are in red, the other test's...
A
...results are in green. A terrible example of tests, because they ran for like five seconds apiece and there's hardly any data to look at, but you can sit there and look at something off the mesh and on the mesh, compare them, overlap them. You can compare more than two tests at a time.
A
Maybe you've got three things you're trying out, so you can select, I think, almost any number of tests to compare against. It takes a little while to interpret and understand the graph, because it's comparing a lot, and the graph itself will change in terms of how it displays.
A
But if you read the labels, it's: what was the p99 comparison across the three different environments? So that's one way that you can try to get at the answer to your direct question. The second way is when you connect Meshery to Grafana. Meshery integrates with Grafana, and integrates with Prometheus, to be able to not only show you this latency analysis, this graph here, but in the same context show you that and also show you... Abhishek, do you mind pulling up, like, Meshery's data?
A
You don't have Grafana connected here? No? I don't know how to install it... but if you go to meshery.io, I think there's a screenshot of this that kind of shows folks. It'll basically show you any number of graphs at the same time, and as your mouse moves... this is kind of a good example... well, keep going down, if you would. We might have moved the screenshot out, because it's kind of old and very detailed. That's not it. I'll send you a screenshot, Sameer.
A
What's the memory of just the Envoys that are doing tracking for that service, or for the nodes that aren't running Istio, or just for the ingress controller? Like, what was the... You can, so, basically, any query that you can formulate in Prometheus to collect those metrics and display in Grafana, Meshery uses Grafana's SDK to show that to you, along with the specific service mesh graph, that histogram, that you really won't get anywhere else. You won't get that from a JMeter.
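A minimal sketch of the kind of Prometheus query being described, issued against Prometheus's instant-query HTTP API. The Prometheus URL, metric name, and label selector are assumptions; adjust them to whatever your Prometheus actually scrapes.

```python
# Sketch: asking Prometheus for sidecar memory, the kind of query Meshery
# relays through its Grafana/Prometheus integration. Illustrative only.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

PROM_URL = "http://localhost:9090"  # assumed local Prometheus

def instant_query(expr):
    """Run a PromQL instant query; return (labels, value) pairs."""
    qs = urlencode({"query": expr})
    with urlopen(f"{PROM_URL}/api/v1/query?{qs}") as resp:
        body = json.load(resp)
    return parse_vector(body)

def parse_vector(body):
    """Flatten a Prometheus instant-vector response."""
    if body.get("status") != "success":
        raise RuntimeError("query failed")
    return [(r["metric"], float(r["value"][1]))
            for r in body["data"]["result"]]

# e.g., memory of just the Envoy sidecars (hypothetical label selector):
# instant_query('container_memory_working_set_bytes{container="istio-proxy"}')
```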
A
You
won't
get
that
same
analysis
and
you
won't
get
it
done
in
a
standard
way
in
which
you
can
compare
across
service
meshes.
Moreover,
you
won't
be
able
to
with
those
other
tools,
compare
how
I'm
running
my
environment
relative
to
yours
like.
Are
you
doing
it
more
efficiently?
Or
am
I
and
well
that's
a
super?
That's
like
well,
I
don't
know
you're
running
90,
a
90
node
cluster
and
I'm
running
a
nine
node
cluster.
How
do
we
normalize
that?
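One naive way to normalize between a 90-node and a nine-node cluster is to divide total overhead by node count. This is a deliberately simple stand-in for what the standard being discussed formalizes, with made-up numbers:

```python
# Sketch: naive per-node normalization of mesh overhead so that clusters
# of different sizes can be compared. Figures are illustrative only.
def per_node_overhead(total_cpu_cores, node_count):
    """Average CPU cores of mesh overhead per node."""
    return total_cpu_cores / node_count

mine  = per_node_overhead(total_cpu_cores=4.5, node_count=9)    # 0.5 core/node
yours = per_node_overhead(total_cpu_cores=36.0, node_count=90)  # 0.4 core/node
print("more efficient:", "yours" if yours < mine else "mine")
```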
A
Well, that's what this standard is attempting to do, and so then you can begin to not only answer your direct questions but also some other questions you begin to have over time. Which is, like: well, hey, over time as well... JMeter, or the other tools, will facilitate... like, they'll let you save off the results, right? You can save the history of them and kind of pull them back up and share them. Meshery does that as well, and it overlays them visually.
A
So you can track and baseline across time and compare against yourself. You can also then compare against others. Eventually, there are some roadmap items for Meshery in which it will automate, using Nighthawk specifically (not the other load generators, but Nighthawk), adaptive load control. So it'll run some optimization routines to be able to answer questions like: if you were to reconfigure, if you were to turn off some of your virtual services, or turn on some additional ones, or maybe restructure them...
A
...would your performance improve? You can use Meshery to test those types of things manually; you can run different scenarios and compare them, like I just said. But it'd be nicer if you could just click the button that says "run the optimization routine," and that could be, like: what's the optimal number of retries that I should configure based on my reliability posture, my reliability versus the SLA I'm guaranteeing to you, the user: that you'll have, at worst, a 250-millisecond response.
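The retry-versus-SLA trade-off described here can be sketched as a tiny search: pick the largest retry count whose worst-case latency still fits under the 250 ms budget. The linear latency model is a made-up assumption for illustration, not Meshery's optimization routine.

```python
# Sketch: a toy version of "maximize retries without blowing the SLA."
# Assumes worst-case latency grows linearly per retry (an assumption).
def max_retries(base_latency_ms, per_retry_ms, sla_ms):
    """Largest n such that base + n * per_retry stays within the SLA."""
    n = 0
    while base_latency_ms + (n + 1) * per_retry_ms <= sla_ms:
        n += 1
    return n

# With a 100 ms base and 60 ms per retry, a 250 ms SLA allows 2 retries.
print(max_retries(base_latency_ms=100, per_retry_ms=60, sla_ms=250))
```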
A
That's, you know, as slow as I'm willing to go, just as an example. So you'd be able to put that in, and then: I want to maximize resiliency while not exceeding that SLA. And so: Meshery, why don't you run... And that answer is going to be different based on whatever service you're running. Like, I can't just tell you, "oh, it's two retries, everybody should run two retries." It's like: no.
A
That is totally specific to your environment. Moreover, it's totally specific to your environment at a point in time, because today you have 90 nodes in your cluster; tomorrow you have nine clusters with 20 nodes apiece, or you have another microservice that you've deployed in there. And so all of that changes, and your ability to just go in and press the button to perform those types of evaluations is... well, time will tell, but I think...
F
Okay, and another thing: can I store this kind of performance data into, maybe, some database, and see a time-series graph wherein it shows me, over time, how I have improved in performance? Yeah.
A
Yep, we'll have to show you that. We'll have to... well, so it's a longer answer. It is one of those things where, as an open source tool... like, persisting... even Prometheus itself doesn't address long-term storage; the project doesn't do it. You have to plug in a remote storage adapter, like... I think you were looking at Thanos or something. Meshery does not attempt to incorporate... There are... Meshery...
A
Some clarity, yeah. These are good questions, yeah. So that's it. And actually, Sameer, to your point, like, hey, it might be... So, while there are a bunch of nerd knobs in there to configure the type of performance test you want to run, which is great, maybe some curated tests, or some sort of, quote-unquote, standard tests, like, hey...
A
If there's a default configuration of Istio, a default sample app, and a default performance configuration, and you run it and I run it, then, while our environments might be different, at least on that level, even though there's some fudge factor in there, we can begin to compare some.
A
So that then, eventually, we'll publish something. We'll tell the world something like: here's the slowest mesh, here's the fastest mesh. By the way, Sameer is the best; he runs his... you know, it'll be anonymous. But, but, but hopefully we can help inform the world that, you know, offhand, in general, or totally on average, across the 10 different service meshes that Meshery supports, it's going to cost you just that...
A
You
could
never
say
what
I'm
about
to
say,
because
it
will
be
totally
totally
wrong,
but
it
would
cost
you
five
percent
cpu
on
average,
and
that
would
be
a
total
farce
because
that
absolutely
like
just
on
one
thing
in
particular
by
the
way
I'm
going
on
for
way
longer
than
I
should
thank
you,
michael,
is,
if
you
like
distributed
tracing,
is
an
example.
If
you
have
your
service
mesh,
facilitating
that,
if
you
sample
those
traces
once
every
10
minutes,
it's
pretty
lightweight.
C
Well, fair enough. You...
D
Oh yeah, also one more thing: it gives the user a choice of which load generator to use, right? But it doesn't show their pros and, like, their benefits.
A
That's a great suggestion, yeah. Really, what... is there a meaningful difference? They will absolutely show you... they absolutely analyze it differently, and that's why Nighthawk was written. Nighthawk was written only after wrk2 and Fortio were evaluated, and the maintainers found a meaningful enough difference that they wrote yet another performance characterization tool: Nighthawk. And so, yeah, Neil, totally, we should have...
A
It would be good to... I mean, like, we need those kinds of pop-up, rich tooltips, or, like, help. Like, we need a framework in which we do that in general in Meshery anyway, and that's a good... it's a prominent example of a question that people would have. Because, yeah, otherwise we might just want to hide the fact that there's a choice, because what do they know?
A
So, hey, we're totally over time. I appreciate that some folks have had to go already, and the rest of us have to go now. So thanks, everybody; it's been very nice. We'll catch some of you in Slack. We'll see you next week.