Description
Service Mesh Performance Community Meeting - Sept 30th, 2021
Join the community at https://layer5.io/community
Find Layer5 on:
GitHub: https://github.com/layer5io
Twitter: https://twitter.com/layer5
LinkedIn: https://www.linkedin.com/company/layer5
Docker Hub: https://hub.docker.com/u/layer5/
A: Welcome, everyone, to the SMP community meeting. Today is the 30th of September. A few folks have not been able to join us today, but we have an agenda to discuss, so let's get started. First up, we have a call for participation for reviewing the SMP spec and identifying areas to improve in the spec. So, Lee, would you like to talk about that?
B: Yeah, sure. So in the SMP protos themselves — the crux of the spec — the spec is currently broken into three protos, if I recall. There's kind of a fork in the road that I think we face at the moment with respect to how to refine and carry this forward, particularly inside the service mesh proto of the spec itself. We've had people, you know —
B: If you go to the third one — we've had people coming through wanting details of performance, details of the nodes of their Kubernetes clusters, in the context of their service meshes and in the context of their workloads, and all of that is within scope of the spec, right? All those things build upon one another; you can't just analyze the performance of a service mesh —
B: — in a vacuum, unto its own, unless the rest of the environment is perpetually uniform. That's a stipulation we could place on the spec: we could say the measurements are only valid when the environment is uniform for a given set of test results. The problem with that is, if Suhani has one type of environment and I have a different one, she can only run her tests and compare against herself, her own environment.
B: Linkerd, just as an example, is a service mesh that does not have an ingress gateway, and as such we need to define within the spec which fields are mandatory and which are nice-to-haves, or optional.
B
But
if
you
look
on
line
97,
it's
like
there's
a
sidecar,
okay!
Well,
what's
the
sidecar?
Well,
it's
like!
Let's,
if
we
shoot
it's
for
most
of
the
service
meshes
if
they
have
an
ingress
gateway,
an
egress
gateway
and
a
side
car
the
proxy
that's
inside
is
the
same
kind
of
proxy,
which
is
just
okay,
and
so,
in
this
case
between
lines,
75
and
105.
B: — if you're sitting there looking at the spec, it's a bit repetitive, right? So, okay: should we make that generic, such that there's such a thing as a proxy, and we're capturing certain metrics about it, and then there's an attribute like its purpose, or its type, or its category? And so — yep.
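(Editor's note: the generic proxy idea discussed here could look something like the following sketch — message and field names are illustrative, not taken from the actual SMP protos.)

```protobuf
syntax = "proto3";

// Hypothetical sketch: one generic Proxy message with a "purpose"
// attribute, instead of separate ingress-gateway, egress-gateway,
// and sidecar stanzas that repeat the same metric fields.
message Proxy {
  enum Purpose {
    PURPOSE_UNSPECIFIED = 0;
    INGRESS_GATEWAY = 1;
    EGRESS_GATEWAY = 2;
    SIDECAR = 3;
  }
  Purpose purpose = 1;       // the role this proxy plays in the mesh
  int32 cpu_millicores = 2;  // metrics captured once, for any proxy type
  int32 memory_mb = 3;
}
```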
B: So that's something to really consider. It's something that could be refined over time, could be changed over time — every specification goes through revisions and versions. It could be that we're being quite repetitive, which we are currently, and maybe there's a later point in time at which we choose to be much more generic.
B: If there's another type of gateway that comes through, that augments this spec quite a bit. So if someone wants to capture the fact that they have a load balancer in front of their cluster, is that an ingress of the ingress, or where is that captured? So I guess, in part, what I'm trying to tease out here is a reflection on how generic to be.
B: Okay, so as each of you digests more of the spec, I think you'll come across other examples. Right here is probably a good example: on line 19 — well, 18 — is this stanza, if you will. It's a message that talks about the configuration of the load that you're going to generate, the configuration of your environment, and how you're going to run your test. So here it's like, okay —
B: The other thing: if you look at line 36, for a given test, here is the endpoint that's going to have load generated against it — here's the target endpoint. In this case, we've done well to have that as a repeated string, to acknowledge that it's not necessarily just a single endpoint under test; it could be many. And great.
B
Now,
I
think
something
we're
probably
missing
out
on
here
is
like
well,
wouldn't
that
you
know
quite
commonly
refer
to
your
application,
like
the
application,
the
the
workload
under
test-
it's
like,
maybe,
instead
of
going
directly
after
the
endpoint
that
should
be
going
after
the
name
of
the
application,
and
then
we
go
over
to
the
application
message
and
we
look
at
some
of
its
details.
One
of
its
details
is
one
or
more
of
its
endpoints,
so
anyway,
this
is
what
I'm
trying
to
you
know,
help
tease
out
and
sort
of
figure
out.
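(Editor's note: a minimal sketch of the restructuring being proposed — pointing the test at an application that owns its endpoints, rather than at bare endpoint strings. All names here are illustrative.)

```protobuf
syntax = "proto3";

// Hypothetical sketch: the test targets an application by name, and the
// Application message owns its one-or-more endpoints.
message Application {
  string name = 1;
  repeated string endpoint_urls = 2;  // one or more endpoints under test
}

message LoadConfig {
  Application application = 1;  // instead of a bare `repeated string endpoints`
}
```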
B: And my sense of it — please argue or discuss — is that, for the most part, we should employ what I consider to be poor engineering practice, which is to say, I think we should continue in the same vein that we have been, which is sort of fragile thinking, yeah. The justification for that is —
B
Is
that
it's
it's
part
of
the
value
that
people
derive
from
this
project
is,
is
an
opinion
and
and
is
a
ultimately
a
standard
to
be
measured
against,
and
it
is
a
forced
function
like
as
we
go
to
define
standard
benchmarks
and-
and
we
conclude
that
one
of
the
standard
benchmarks
is
to
have
a
multi-threaded
database,
intense
application
under
under
soap
test
for
four
hours.
B
At
this
rate,
like
whatever
that
stand,
that,
like
that's
just
some
random
example
of
a
standard
benchmark,
that's
a
very
opinionated
perspective
about.
Hopefully
what
is
a
representative
workload,
a
you
know,
representative
workload
common
to
what
people
you
know
are
running
out
there
and
and
then
that's
where
people
find
value.
When
you're
explicitly
saying
when
it's
a
bit
more
specific.
B
So,
okay,
just
so
suhani,
is
thinking
about,
has
just
kind
of
started
to
think
about
this,
a
little
bit
so
so
honey.
I
won't
put
you
on
the
spot
and
force
you
to
like,
but
there
was
an
issue
that
was
raised.
I
think
it
was,
if
I
recall
suhani
it
was
to
capture.
C: [inaudible]
B: Yeah, there it is — line 226, good. So: CPU architecture type — nice, to sort of differentiate — or capture the number of sockets, and then identify — so you can see this stuff can go fairly deep. So, Suhani, as you think on this: this particular issue really ends up being so much less about any code as much as it's about augmenting the proto here with fields that capture —
B: — just that. Right now, for the most part, I'm pretty sure we're capturing the number of CPU millicores, and that's good, but again, it's kind of that fragile start. It's sort of a lack of acknowledgement that there are other attributes to a CPU. So part of the consideration that you'll make — and that will help you — is: should there be a message type called CPU, and should that CPU message have things like millicores, sockets, architecture, hyper-threading?
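(Editor's note: the dedicated CPU message being floated might look like this — a sketch with assumed field names, not the spec's actual definition.)

```protobuf
syntax = "proto3";

// Hypothetical sketch of a dedicated CPU message, so that nodes, gateways,
// and sidecars can all reference one type instead of each repeating a bare
// cpu_millicores field.
message CPU {
  int32 millicores = 1;
  int32 sockets = 2;
  string architecture = 3;  // e.g. amd64, arm64
  bool hyperthreaded = 4;   // physical vs. hyper-threaded cores
}
```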
B: And then, from there, the ingress gateway, for example: instead of it saying, at 76 and 78, how many cores does it have and how many millicores — I'm sorry, no, it's just line 78; 78 is a CPU millicores field. Instead of that, it would be more of a reference to —
B: No, yeah — this is great. We're just thinking aloud about protos and the things that we're capturing, so it's actually very relevant to part of your focus as well.
B: I have a proto question, being only knee-deep into protobuf design myself: in the example that we just gave, where we're considering breaking out CPU millicores and having our own CPU message — is anyone familiar with what that reference would be?
D: Is it the number of CPU cores, or is it just a random value?
B: Yeah — so if we ended up defining a message type called CPU, and among the attributes it has are the number of sockets, the number of millicores, the CPU architecture type, and whether these are physical cores or hyper-threaded cores — if that's defined in its own message —
E: Actually, this is how we define it. If you want to define it just the way we have done Client — it's hierarchical; that is, it's available only in PerformanceTestConfig, so it will be available to other files as .Client. So if Protocol is a custom type, it must be defined somewhere as a message.
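(Editor's note: a minimal sketch of the hierarchical pattern described — a custom type defined as its own message and then referenced by field type. Names are illustrative.)

```protobuf
syntax = "proto3";

// A custom type is just another message; a field elsewhere references it
// by type. If CPU were nested inside another message, other messages would
// reference it with dotted notation, e.g. `Node.CPU`.
message CPU {
  int32 millicores = 1;
}

message IngressGateway {
  CPU cpu = 1;  // a reference to the CPU message, instead of a raw int32
}
```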
B: Perfect. Okay, cool. So, Suhani — does any of that make sense so far?
C: [inaudible]
B: Oh, cool. Okay — anybody else have thoughts on the spec?
D: So, Lee, I have one question here. I mean, are we discussing this for the improvement of the spec?
B: [inaudible]
D: Okay. And do we have any — like, I saw — I was running the performance test, and I could see some of the parameters which are used in this spec. So, overall, all of the SMP performance work is built on this proto — is it?
B: I'll say this as well: within Meshery — one of the things that Meshery has is mesheryctl and Meshery UI — part of the way that it implements the spec is that its implementation is called a performance profile. The performance profile is — well, initially, when you first define your performance profile, when you basically first define a test that you want to run —
B: Yeah — and so not only should these fields be more or less what you're seeing in Meshery UI, but also, if you ever want to use mesheryctl to run a performance test — Rudraksh has done this a bunch, so Rudraksh —
B: — you might be more up to speed on Meshery's implementation. The performance profile is the file that you can supply to mesheryctl so that, when you want to run a test, you don't have to have twenty-something arguments on the command line. You can just give it a file — a configured, you know, performance profile.
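(Editor's note: a hypothetical performance-profile file of the kind described — field names here are illustrative and are not the exact schema mesheryctl expects.)

```yaml
# Illustrative performance profile: one file instead of ~20 CLI arguments.
name: soak-test-baseline
mesh: istio                 # the service mesh type discussed below
endpoint_urls:
  - http://productpage.bookinfo.svc:9080
load_generator: fortio
duration: 30m
concurrent_requests: 10
qps: 100
```

A file like this could then be handed to a teammate so they can rerun the same test and compare results.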
B
A: So I don't know if we should capture it in this spec as well — I'm not sure, yeah.
B
It's
probably
it
absolutely
has
to
be
here
in
the
spec
and
like
like
that,
like
okay,
if
you
do
this
for
me,
if
you
would,
if
you
go
to,
if
you
go
back
to
the
other
protos,
the
one
called
service
mesh
and
it's
you
know
it's
its
own
proto
that
just
defines
an
enum
full
of
service
messages
by
name
so
this
type
surface
mesh.
We
should
find
a
reference
to
it
in.
B
F: Okay, so I'll speak to that. In SMP, the structure doesn't have any service mesh field. So what I do right now is capture the mesh from the flag — --mesh — and pass it to the Meshery server, and then it will process it on the back end with that mesh type.
B: I mean, the service mesh — and that can be okay. The implementation can still adhere to this spec, so long as that --mesh flag — or that other flag where we capture the type of service mesh —
B
If
those
names
are
coming
from
like
like
so
long
as
we're
still
referencing
the
spec,
then
it's
an
implementation
of
the
spec,
like
the
the
implementation
doesn't
have
to
just
because
something
is
all
defined
in
one
proto
doesn't
necessarily
mean
that
it
has
to
be
all
defined
in
the
other.
In
the
you
know,
a
direct
reflection
now
I
do
think
that
I
would
submit
to
you
all
that
if
I'll
use
an
example,
I'll
just
say
you
know,
push
is
running
a
performance
test.
B
You
know
in
a
particular
environment
and
he
would
like
for
road
rocks
to
go
ahead
and
verify
that
he's
seeing
the
same
results.
You
know
under
the
same
profile
right
like
so
pu
should
probably
just
send
over
that
file.
Here's
my
performance
profile.
Can
you
try
this
out
and
verify
that
you're
seeing
the
same
things
and
it
from
an
implementation
perspective?
It's
probably
really
convenient,
or
it's
more
convenient
if
in
that
file
that
you're
able
to
describe
the
fact
that,
yes,
this
is
a
console
service,
mesh
and
so
yeah.
B
From
a
messages
perspective,
I
would
suggest
to
all
of
us
that
that
file
support
the
service
mesh
type,
which
is
also
nice,
because,
just
from
a
user's
perspective,
if
you
think
about
someone
running
a
performance
test
from
the
command
line
by
the
way
novendo,
do
you
have
your
command
line
just
to
be
able
to
look
at
it?
Is
that
you
they,
someone
would
be
able
to
just
run
mastery
ctl
apply.
F: Yeah, that actually makes total sense, but we need to enhance the struct that I use to unmarshal the file to have that SMP — sorry, service mesh — field.
A: Just to be clear: since we are referencing the service mesh — the mesh type field — here, I think you mentioned that we should just use this to capture the mesh type, instead of having to change the proto here. Is that what you mentioned?
B: I'm missing the right word — I don't want to use Meshery's words — but for a given performance profile, you have to be able to say what type of service mesh it is, and the spec captures that, and that's great. It doesn't capture that in the client config, which is fine; it captures it in the service mesh config.
A: Yeah, that makes sense. I'll look into it — I'm not sure how I will use it, but I'll raise any queries if I have any. In Slack, I mean.
D: I have one question here: this mesh control plane structure — where do we see these values in the UI? I did not put those in, I mean. Is it internal — when we trigger the performance test, depending on the mesh, does the Meshery server select the adapter, I mean?
B: Yeah, that's great — that's a great question. It's both, actually. Does anybody have Meshery up, by chance?
D: Yeah — add a performance profile. So here, you mean?
D: If I select this — but this is the mesh type, right? Maybe.
D: And so, where are the other mesh properties, which we saw in the spec, that we don't have here, right?
D: I mean, we saw that the profiles are all the same here, but we saw that in the mesh spec as an enum, right?
B: Right — so the mesh spec, for the most part, just had an enum full of all-capital-letter service mesh names. It was just trying to define, once and for all: this is how you refer to Consul, as an example. And the same — if you do the drop-down list one more time — it should be the case that this list matches, verbatim, the other list in the service mesh proto.
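(Editor's note: a sketch of the kind of enum being described — mesh names defined once so that every implementation refers to them identically. The list here is illustrative, not exhaustive.)

```protobuf
syntax = "proto3";

// Service mesh names defined once, in one place, so that UI drop-downs,
// CLI flags, and test results all use the same identifiers verbatim.
message ServiceMesh {
  enum Type {
    INVALID_MESH = 0;
    ISTIO = 1;
    LINKERD = 2;
    CONSUL = 3;
    APP_MESH = 4;
  }
  Type type = 1;
}
```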
B: But yeah — then there are a couple of other areas in Meshery where, when the user initiates a test like you just did — if they're initiating a test from the Istio interface, as an example, they're managing just that type of mesh — well, then they shouldn't have to choose; they shouldn't have to specify that it's that service mesh in their test. That should be auto-populated, the same —
B
Yeah,
so
a
lot
of
this
info
isn't
a
lot
of
this
isn't
or
most
of
that
info
is
in,
is
not
being
captured
in
mesherie's
user
interface
it.
This
is
the
type
of
important
information
that
does
need
to
be
captured
from
prometheus
or
from
grafana,
and
that,
as
part
of
your
ongoing
focus
like
that
yep,
this
is
part
of
that
validation.
It's
like,
okay,
so
of
the
things
that
mescheri
collects
behind
the
scenes,
the
telemetry
that
it
collects
in
the
static
boards.
B
C
D: Is this the spec for the static boards, or is it generic? I think — right? — the spec, from what I understood, is for all the —
D
Is
the
spec
for
the
general
performance
test
right?
It
is
not
defined
for
static
boards.
I
mean
that
I
did
not
want
it.
Oh.
B
Yeah
right
so
the
venue,
if
you
don't
mind
if
you
go
back
up
a
bit
to
the
the
section
called
performance
test
result
so
in
line
49
the
once
measuring
has
run
a
test
and
it
produces.
It
does
some
statistical
analysis
and
produces
results
it
should
it
should
implement.
You
know
you
should
everything
that's
under
the
performance
test
result
from
49
to
72
should
be
in
that
result,
like
measure
should
give
you
that.
B: If we don't have anything captured about — if you keep going, is the word "node" in here? It should be under the environment config.
B: This is sort of where the relationship between the static boards in Meshery and what we're trying to capture in the spec comes in: in the environment that you ran the test within, how many nodes did it have? How many resources did they have?
B: You know — what Kubernetes version were you running in that environment? How many nodes were there — on line 177?
B: How many instances of that workload did you have — assuming that the workload was the same kind? And this is where it starts to get kind of fragile: it's like, well, the workload that I was testing had —
B: Cool — so, on that topic, just briefly: Utkarsh, I was talking to Novendo yesterday, and I had said, hey, this is the 58th time that I will be speaking to someone about auto-populating the endpoints field in the performance profile test, and so —
B
Not
with
not
with
respect
to
neha's
conversation,
I
just
mean
like
user
experience
and
so
just
a
note
for
wood
cars
cool,
so
that'd
be
59.
B
D: Actually, I just haven't really captured the data yet, but the first question I have is: what do you mean when you say "static board"? Like, in Grafana, whatever boards we —
B: Static boards are not shown in the UI — it's a made-up term. As you digest that spec, please do publicly ask questions around it, if you have them.
B: There's a MeshSync nodes metric collection — there's a second, another spec that's linked from this one. It's the one that Novendo is sharing, yeah.
B: To see those — yeah, let me follow up to make sure that you've got the right tooling. Those static boards are just collected by Meshery; they're not shown back in the UI, and I don't think they're made available as you retrieve — if you go back in time and look up your performance test, I don't think those are shown either. They used to be, and they —
D
Are
not
shown
like
yeah,
they
are
not
shown.
Okay,.
B
Okay,
they
were
in
the
past,
and
so
we
have
a
like
a
regression
issue
here.
We
need
to
make
sure
that
they
do
get
shown
yeah.
So
actually,
this
is
a
helpful
conversation.
Just
because,
like
it's,
it
was
an
oversight
that
they
had.
Do
you
do
you
mind
not
or
raising
an
issue?
That's
saying
you
know
static
board
data
is,
in
you
know,
inaccessible.
A: We are going to talk about the benchmark tests. Sunku had raised a couple of questions, so maybe we can help him clarify some of these. He was asking about the kind of test cases that we needed to define, and a couple of questions about the environment we'll be running these tests on — the CNCF testbed. So, yeah.
B: Okay, all right — good, good, yeah. In some respects, Sunku needs to — if he were on the call, he'd need to step forward and define those answers, like answer those questions himself. Let's jump to Rudraksh's item, because this is something that we want to —
B: — let everyone know about and send out the details of, because this will actually help answer some of Sunku's questions — because we will tell him we're testing in an environment now, and here's what that environment looks like: it's an Ubuntu VM running in a GitHub runner, and then —
F: So we have some of the data on it here — I'll quickly paste the link in chat, so that covers it. GitHub Actions offers a few environments to test this; I'm assuming you'll most probably use ubuntu-latest, because it works — macOS and Windows are probably not such good environments for testing. And yeah, regardless of the OS you're using, if you're using the GitHub-hosted runner —
F: — it would allocate you one of these Azure VMs, and it's pretty random — I'm not yet sure exactly what the number of CPUs and the RAM allocated to you would be. So that might affect the test configuration, and, as we were discussing during the call, to maintain consistency we need to have consistent setups for running performance tests, right? So —
B: Or, to the extent that that's out of our control — yes is the answer; you're right — but also, the fact that the environment has changed doesn't necessarily invalidate the test. It just means, shoot, we've got to run more tests, so that we have enough tests using one environment — all of the environments.
B: So, I mean, you're right that there are certain tests where, if you're trying to get to an answer — okay, we want to analyze this thing and we want to run this series of tests — and your environment is changing on you every time, it can make it really tough. If you don't have control over it, yeah, you're going to have to just run 500 tests until you get enough, you know, runs where you got the same environment.
B: And — "the self-hosted runners are free to use with GitHub Actions, but you're responsible for the cost of maintaining your runner machines" — okay, good, so the CNCF will just pay for it. It's fine — that's great, I mean. And so what we should do, folks, is write up the meeting minutes and tell people that the intended thing here is to set in action what Rudraksh has written and let it run wild on some free GitHub-hosted runners.
B
Look
at
look
and
confirm
that
those,
in
fact,
are
that
we're
getting
the
right
info
that
these
work
etc
and
then
we'll
go.
Ask
the
cncf
for
time
on
the
their
systems,
use
the
self-hosted
runners
to
do
the
same
thing
and
yeah
it's.
This
is
really
great.
I'm
really
like
this
is
actually
really
exciting.
It's
been
like
a
year
and
a
half
of
having
the
question
out
in
front
of
us.
What
like?
B: Yeah — now, part of what Neha is looking at with the static boards and what's being collected is important here, because if we're not collecting the right things in those static boards, we won't have a record of it — it won't be collected; we won't know what those self-hosted runners, or the GitHub-hosted runners, have. So —
F: So, basically, you can trigger them manually, or there's a cron schedule set up for them, which triggers at this weird time — the 43rd minute of every 12 hours. It's an arbitrary time, because sometimes GitHub drops the scheduled jobs due to a lot of incoming traffic — and yes, that's a problem with GitHub runners and GitHub Actions. For this, we are using this, and even if we do scheduled tests, we might face that token thing, because I tried several times — I don't know, for some reason —
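(Editor's note: the schedule described corresponds to a GitHub Actions trigger along these lines — a sketch, assuming a standard workflow file.)

```yaml
# Fire at the 43rd minute of every 12th hour (00:43 and 12:43 UTC).
# An off-the-hour offset is used because on-the-hour scheduled jobs are
# more likely to be dropped when GitHub is under heavy load.
on:
  schedule:
    - cron: '43 */12 * * *'
  workflow_dispatch: {}  # also allow manual triggering
```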
B: I think you've got — yeah, so we're at time. I think there are a couple of people that probably have some questions about, like, the ability to define what workload is deployed — what sample app is deployed. And if there's a certification for a GitHub Actions developer, I suggest that you sit for that exam.
D: I missed that part — I was opening the ticket. So the static boards, you said, are not in the UI — so, I mean, what should I mention?
B: Thank you, yeah. And, Yvonne — not that I've spent any time in WSL 2, but I consider what's happening there kind of magical, and I think it is an Ubuntu-centered environment. So —