From YouTube: Service Mesh Performance Meeting (January 20th, 2022)
Description
A discussion on extending Service Mesh Performance benchmarking to the CNCF Community Infrastructure Lab.
Proposal - https://docs.google.com/document/d/16b-cWkyhDwumK8afd0YiQtZUe2zc5TfQ_9g7dQ9q8jQ/edit?usp=sharing
Issue - https://github.com/cncf/cluster/issues/115
A: Hello, Edward. Hello! Oh, how are you? Good, good to see you. Nice to meet you. I'm not sure how many others might join; there's sort of a general interest around the project. The project itself is... well, okay: there's a CNCF project called Service Mesh Performance.
A: It is more or less dedicated to that very specific topic. The concept, and the original issue that was filed, goes back a long time, prior to Packet being acquired, prior, I think, to Packet being the provider of the infrastructure in the first place. I don't know.
A: It goes back a ways, yeah, and we finally got... Actually, before I go further, I'll say anyone else is welcome to join the call. I think we've got maybe one other person on, so, for the other person that's on: do you want to say hi and introduce yourself?
A: How are you? Hopefully you can hear us. If your audio doesn't work, or you're not in a position to use audio, maybe just open up the chat and say a quick hi.
A: All right, there he is. Tausif, welcome. Hey, Tausif, how are you? We're just about to kick off the call, kind of getting folks introduced.
A: Hey, Nishant, do you want to just say hi to Ed and the rest of the folks, to introduce yourself real quick?
C: Yeah, sure. Hi everyone, I'm Nishant. I'm currently a computer science undergrad based out of India. I haven't really contributed to Layer5 yet, but I'm very excited about it; I want to, and currently I'm learning the basics of Kubernetes and service meshes.
A: Beautiful. This is a great place to jump in. There's a lot: there's some DevOps to be learned, a little bit of, well, if I can abuse the heck out of that term. Pardon me.
A: And so, Nishant, thanks for jumping on. And Tausif is here. Tausif, you can say hi real quick; otherwise, drop a note in the chat.
D: Hi. I completed my master's last year, and I'm currently doing DevOps. I just want to explore this pool of opportunities and want to learn more about service meshes as well. It's awesome.
A: Cool, let's jump in. Okay, so this is... that's the wrong page; let me see if I can find the right one. So there's...
A: Ed, there are a couple of projects that are interrelated in this effort.
A: The core project is Service Mesh Performance, and the CNCF Technical Advisory Group for Network, or TAG Network, is home base for, well, any of the networking-related open source projects that the CNCF has. It's not just service meshes; it could be other projects too, but there are a number of service meshes in the CNCF, and this performance project. And it's just been an outstanding question in our sector of the industry: okay, how do we compare and contrast the performance of these things? What's the overhead?
A: What's all this? And we've been building and building tooling to help empower people to answer that question for themselves, because you'll find info out there, some of it biased, and some of it like, well, that's interesting, but that's not my environment. And so we've got a variety of configurations we want to run through, and we've built up a lot of tooling that helps us automate.
A: The stand-up of these meshes, the deployment of the apps, the generation of the load, and all that stuff. So, yeah, our hope is that within a pristine environment we can go through and run a myriad of tests. And some of the questions that no doubt you'll have as you're digesting this: well, okay, great, so did you need one server, or did you need 500? And too bad...
A: ...if you needed 500, you know. And what was the size, and what was the... I'll make it super easy, and I hope that this makes it easy: almost all of the tests, or, I'm sorry, almost all of the configurations are valid. It's sort of whatever is most representative of people running clusters in general. A lot of people who care about the performance are going to be running...
A: ...a number of clusters, and quite a few nodes per cluster, but we can extrapolate out some of the test results. Like, if someone has 40 clusters with 40 nodes apiece, that's not what we're asking for; that's too much for us to try initially. A couple of years ago it was like, hey, if we had about a 20-node cluster, that would be nice. Ten would work.
A: Five would work. Three, well, it starts to get less representative. A hundred, sure, but I think we'd waste your time. Maybe two months from now we'll come back saying, yeah, we really got our act together, the scripts are solid, we know what we're doing now in Equinix as well. So here's the crux of part of today's conference, or what I'm hoping we can revolve around.
A: A lot of the automation that we have, that will deploy these service meshes, that will deploy the sample apps, that will generate the load, that will then measure it all: all that automation, conveniently for some of these service mesh projects today, is bundled into a GitHub action. We run the GitHub action and we hammer the heck out of whatever VMs GitHub is giving us in their hosted system, and I'm ignorant of the specifics.
B: Yeah, I'm familiar with that. We've had a number of folks, for CI purposes or other sorts of testing purposes, set up self-hosted runners in our infrastructure.
B: As I understand it (I don't have a ton of first-hand experience, but from the docs and from talking to folks about it), you basically stand up a machine and deploy the runner on that machine.
B: You can run them on more powerful machines than the GitHub-hosted systems have, or just have more flexibility in general in how you do things. But yeah, the self-hosted runner, I think, would be a perfectly reasonable way of doing this, with the runner running on the Equinix system.
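
As a rough sketch of the pattern described here, registering a freshly provisioned machine as a self-hosted runner could look like the following (the owner, repo, token, and labels are hypothetical placeholders; the registration-token endpoint and the config.sh/run.sh steps follow GitHub's self-hosted runner documentation):

```python
# Sketch: register a freshly provisioned bare-metal machine as a
# GitHub Actions self-hosted runner. OWNER/REPO and the token are
# illustrative placeholders, not values from this meeting.
import subprocess
import requests

GITHUB_TOKEN = "ghp_..."  # a PAT with repo admin scope (placeholder)
OWNER, REPO = "example-org", "service-mesh-benchmarks"  # hypothetical

# 1. Ask GitHub for a short-lived runner registration token.
resp = requests.post(
    f"https://api.github.com/repos/{OWNER}/{REPO}/actions/runners/registration-token",
    headers={"Authorization": f"token {GITHUB_TOKEN}",
             "Accept": "application/vnd.github+json"},
)
resp.raise_for_status()
reg_token = resp.json()["token"]

# 2. Configure and start the runner (assumes the actions-runner
#    release tarball has already been unpacked into ./actions-runner).
subprocess.run(
    ["./config.sh", "--url", f"https://github.com/{OWNER}/{REPO}",
     "--token", reg_token, "--labels", "equinix-metal,baremetal",
     "--unattended"],
    cwd="actions-runner", check=True,
)
subprocess.run(["./run.sh"], cwd="actions-runner", check=True)
```

A workflow would then target such a machine with `runs-on: [self-hosted, equinix-metal]` rather than a GitHub-hosted runner label.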
B: You have to either restore things to an original state or tear down infrastructure when you're done, rather than just spinning up a brand-new, fresh VM to do it.
B: That's probably the biggest difference between a bare metal environment and a VM environment. The software is all going to run the same, and network interfaces might have different names, what have you, but getting back to a known, reliable state may require more setup code and more teardown code than you're used to.
A: Can you speak a little bit to bare metal provisioning with Kubernetes systems?
B: Yeah, so we've got some existing configurations using Terraform to do network provisioning, as it turns out, in the Kubernetes space.
B: But basically, deploying a new node... you can imagine an architecture where the GitHub runner, the self-hosted runner code, does the deployment: a script out of that runner sets up a bunch of brand-new nodes, launches off those tests, tears them down when it's done, and then the runner stays active, but the Kubernetes infrastructure is essentially gone.
A: There are a lot of things about performing these tests that we just don't have a strong opinion on today. Like what flavor, what distro of Kubernetes: we don't care. What distro of Linux: we don't care. Or rather, those are things that people will tell us, probably after we go out, and hopefully we don't piss everyone off.
B: You want to spend that time usefully running tests. So we've done a bunch of work to reduce the startup time for a couple of specific operating systems; we have quick-install Ubuntu nodes.
B: If that's a reasonable choice for you. We had been doing quick install for CentOS; CentOS is a little bit funny right now because of the CentOS 8 end of life from Red Hat, but we've got some alternatives there. And if you're in a CentOS or RHEL world, happy to understand what would work out there.
B: So, like, three, four, five minutes from pressing the button to having a running system. Doing the whole Kubernetes install on top of that might take a little bit more time, but we're not talking about hours or days to get a cluster spun up. It's completely automatable, and small numbers of minutes for everything. I don't know how long your tests typically take to run.
A: Yeah, that's another good... that's another area where we just haven't gotten enough feedback to hone in. You know, like an hour: an hour-long test is probably a good starting point, and then we'll go off and get feedback from folks, sure.
B: You know, across data centers on the same continent, across data centers across an ocean: there are a lot of things that could be tested. So there's probably some smallest reasonable set of things that's interesting and likely to work, and then you extend out from there as you get more.
A: I anticipate this is probably the start of a long relationship. Or rather, yeah, to your point, I think what you're essentially saying is: hey, we can start relatively simple, gain some traction with respect to interest, and eventually we want to be able to...
A: We want to be able to empower others with the same tooling, to the extent that they have an extremely specific thing about, you know, cross-region failover and such, so that they can participate in publishing in the same format, such that it's relatively vendor-neutral.
A: You know, it's as unbiased as you can try to get. And so, for us to kick off, the simpler the better: the stand-up, the provisioning of a nominal-size Kubernetes cluster, the running of that hour-long test, which is really probably fairly static. We might step the load up, but at least to start, potentially just leaving it running that long to account for any... I don't know, what would you say? I don't know, but...
B: So, in terms of node size, we don't have as much variety as, say, an Amazon configuration, which can get you everything from a fractional CPU up to very large.
B: But we do have a couple of different node sizes, basically small, medium, large, occasionally extra-large. We've got some node types that have four NICs on them; most of them are dual-NIC configurations. And there's a little bit of variability in terms of network speeds, anywhere from 2x10 gigabit to 2x25.
B: n2.large, I think, is what it's called; it's an n2 server. We have a next-generation n3 server in the works, but not yet shipped; it's probably a first-quarter, early-second-quarter thing. So in terms of how performance intersects with networks and NICs and CPUs and memory, there's some variety there that's worth acknowledging.
A: That's great. I was actually talking to one of the NSM maintainers a little earlier today, in part about this. There are a couple of people from Intel involved in this initiative, and they will be quite opinionated about the CPU. For my part, you can see the date the original issue was opened; for my part, I just want to run a dang test.
B: Understood, yeah. And people who are opinionated will find things to be opinionated about in both CPUs and NIC types. We have both Intel and AMD CPUs.
B: We have both Intel and Mellanox NICs. There may not be a perfect machine to satisfy everyone's needs, but we've published all those specs, and as you look through them you can find systems that would be appropriate. And then what I'd imagine is...
B: ...the runner setup should let you have essentially one, or a very small number of, permanent systems that are basically hosting the runner piece of things. And then, since you're only doing tests for an hour, two hours, three hours at a time, most of this usage would be ephemeral, in the sense that you'd provision some machines, spin them up, load all the software, run...
B: ...all the tests, report back the results, tear them all down, and return them to the pool. And we have, just generally, a lot more capacity for short-term, ad hoc spinning up and tearing down. That's a lot easier for me to talk to people about than if you said, oh, we need 1,600 machines for six months; it's like, well, no. But potentially even larger tests...
B: ...if they're set up in such a way that they're torn down relatively quickly, might be an excellent way for us to, you know, soak-test new hardware and get some opinions. You might be the first person to boot it, so there might be hardware problems that a good hour's test might show. So, just trying to understand: capacity for supporting this will probably be improved by some attention to detail in the automation stuff, yeah.
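
To make that ephemeral pattern concrete, here is a minimal sketch of the provision, wait, test, teardown cycle against the Equinix Metal REST API (the token, project ID, plan, metro, and OS slugs are illustrative placeholders, not values from this conversation; check the current API docs for exact field names):

```python
# Sketch of the provision -> test -> teardown cycle described above,
# using the Equinix Metal REST API directly. Values are placeholders.
import time
import requests

API = "https://api.equinix.com/metal/v1"
HEADERS = {"X-Auth-Token": "YOUR_API_TOKEN"}   # placeholder
PROJECT = "your-project-uuid"                  # placeholder

def provision(hostname: str) -> str:
    """Create one on-demand device and return its ID."""
    body = {
        "hostname": hostname,
        "plan": "c3.small.x86",          # illustrative plan slug
        "metro": "da",                   # illustrative metro code
        "operating_system": "ubuntu_20_04",
        "tags": ["smp-benchmark"],       # lets a reaper find strays later
    }
    r = requests.post(f"{API}/projects/{PROJECT}/devices",
                      json=body, headers=HEADERS)
    r.raise_for_status()
    return r.json()["id"]

def wait_active(device_id: str, timeout_s: int = 1800) -> None:
    """Poll until the device reaches the 'active' state."""
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        r = requests.get(f"{API}/devices/{device_id}", headers=HEADERS)
        r.raise_for_status()
        if r.json()["state"] == "active":
            return
        time.sleep(15)
    raise TimeoutError(device_id)

def teardown(device_id: str) -> None:
    """Return the machine to the pool (it then goes through deprovisioning)."""
    requests.delete(f"{API}/devices/{device_id}", headers=HEADERS).raise_for_status()

ids = [provision(f"smp-node-{i}") for i in range(3)]
try:
    for d in ids:
        wait_active(d)
    # ... install Kubernetes, deploy the mesh, run the tests ...
finally:
    for d in ids:
        teardown(d)   # tear down even if the test run fails
```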
A: That's perfect. I mean, that just further incentivizes making the spin-up and teardown about as long-lived as the tests themselves. We would figure out later if there was some telemetry that we didn't collect that needed to be included in the report, something someone needed to go back and look over, but that would only be after analysis and discovery later. Oh, we just figured out we weren't even collecting enough, potentially.
A: Yeah, that might help. I mean, while we fumble, or I'm assuming we'll fumble, around with the self-hosted runner and getting the quick-install Ubuntu going for the first time, we may not want to allocate many nodes at all.
A: And tear it down as well. So, yeah, to your point, that's where the complexity, the combinatorial math, comes in: the myriad setups and configurations...
A: ...to run. And some of us involved are quite intrigued by the various ends of that, on the hardware end and on the software end, and so, right.
B: Spin-up time would be a little bit longer, but if you wanted to boot a custom kernel or a completely custom operating system, we do support booting over iPXE. So, you know, as you extend in that software direction, like, "we need to have exactly this": that's one piece of things.
B: For dedicated nodes we can also do specific BIOS-related or firmware-related things. I don't want to...
B: I don't want to promise that ad hoc, that you can specify that, because when you get a machine from the pool, you get what you get. But it's possible that the sort of work you're doing will depend on the latest NIC drivers, or the latest kernel patches, or any of that stuff, and I think it's quite reasonable to expect that the nature of the bare metal system gives you, as long as we understand what you need, some flexibility there as well.
A: This is very helpful. Say a bit more, if you would, about the general tooling by which access is controlled. Is there a portal involved? Are there VPNs involved? Are folks credentialed?
A: In part, this conversation is to help answer tons of questions, but also, eventually, kind of the scheduling: like, if we were to start out with four nodes for some time while we fumbled through using a control node to provision a three-node cluster and then running through that automation. And is that just the quick-install Ubuntu, your most commonly available node type?
B: So, yeah, let me talk through what you get when you spin up a new server. They're all bare metal machines, so there's no joint tenancy; you have the whole system to yourself when the system is booted.
B: There are organizations, which would be the CNCF organization for this, and then individual projects under that. One or more people can have access to a project; when they create their account within that project, they upload their SSH keys to our portal, and then, at provisioning time...
B: You can, of course, do more after the system is up, but there's a fairly straightforward path to get the system online with the set of people who should have access to it.
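
The access model described here, keys uploaded per project and injected at provisioning time, reduces to a single API call; a hedged sketch with placeholder values (endpoint per the Equinix Metal API docs as I recall them):

```python
# Sketch: add an SSH public key to an Equinix Metal project so it is
# injected into machines at provisioning time. Values are placeholders.
import pathlib
import requests

API = "https://api.equinix.com/metal/v1"
HEADERS = {"X-Auth-Token": "YOUR_API_TOKEN"}   # placeholder
PROJECT = "your-project-uuid"                  # placeholder

pubkey = pathlib.Path("~/.ssh/id_ed25519.pub").expanduser().read_text().strip()
r = requests.post(
    f"{API}/projects/{PROJECT}/ssh-keys",
    json={"label": "smp-benchmark-key", "key": pubkey},
    headers=HEADERS,
)
r.raise_for_status()
print("key id:", r.json()["id"])
```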
B: In addition to access over the standard network interface, we have a serial-over-SSH interface, which essentially taps into the console port on these things. It's not a hardware port, it's delivered over IPMI, but it lets you do things like get access to a server after the network configuration has gone horribly bad, because there's a separate management network that you're connecting through. You can also remotely reboot the machine or reload it with a new operating system...
B: ...and do other sorts of management from our console. In addition, there's a full API that gives you access to all of these same attributes, and we've got a CLI tool that wraps them as well.
B: It lets you get a machine up and running using common tooling, but if something goes wrong, you have an authenticated point of access to get to a console.
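
As a point of reference, the serial-over-SSH console described here is reachable with a plain SSH client; a tiny sketch (the device UUID and facility code are placeholders, and the `sos.<facility>.platformequinix.com` host format follows the Equinix Metal documentation as I understand it):

```python
# Sketch: open the out-of-band serial console for a device. The device
# UUID and facility code below are placeholders.
import subprocess

device_id = "11111111-2222-3333-4444-555555555555"  # placeholder UUID
facility = "da11"                                    # placeholder facility code

# Authentication uses the same SSH keys as normal device access.
subprocess.run(["ssh", f"{device_id}@sos.{facility}.platformequinix.com"])
```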
A: Nice, okay, good. So it sort of invalidates what I was saying about a three-node cluster needing four nodes: well, yeah, except you don't really need the fourth node for that, because the management console is there as your bastion point, as your point of control, for the litany of things you just said, right? Yeah, it all sounds...
B: To me it sounds perfectly reasonable, right? You're just loading some software on a machine and running some tests; how hard could that be? The setup automation...
B: We've seen that pattern before in general, often coming out of something like Jenkins or some sort of CI infrastructure, where people will spin up a machine or a set of machines and tear them down. The thing to watch for while you're doing this...
B: ...is that not all automation technology is completely perfect, and sometimes people will spin up machines and, for whatever reason, a logic error or a bug or whatever, the system fails to tear down, and you start to accumulate multiples of what you thought you were going to be using.
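
A common guard against exactly this failure mode is a periodic reaper job that deletes anything tagged for benchmarking that has outlived its expected run; a minimal sketch under the same placeholder assumptions as the earlier sketches (the tag name and age threshold are illustrative):

```python
# Sketch: delete benchmark machines that outlived their expected run,
# guarding against teardown steps that silently failed.
from datetime import datetime, timedelta, timezone
import requests

API = "https://api.equinix.com/metal/v1"
HEADERS = {"X-Auth-Token": "YOUR_API_TOKEN"}   # placeholder
PROJECT = "your-project-uuid"                  # placeholder
MAX_AGE = timedelta(hours=4)   # generous bound for a one-hour test

r = requests.get(f"{API}/projects/{PROJECT}/devices", headers=HEADERS)
r.raise_for_status()
now = datetime.now(timezone.utc)
for dev in r.json()["devices"]:
    if "smp-benchmark" not in dev.get("tags", []):
        continue   # never touch machines we didn't create
    created = datetime.fromisoformat(dev["created_at"].replace("Z", "+00:00"))
    if now - created > MAX_AGE:
        print("reaping stray device:", dev["hostname"])
        requests.delete(f"{API}/devices/{dev['id']}", headers=HEADERS).raise_for_status()
```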
A: That's great; that makes a lot of sense. Yeah, instead of container or VM sprawl, some sort of bare metal sprawl.
B: If you end up in a situation where you have to run a lot of tests in a certain data center: when a machine deprovisions, when you destroy it, it doesn't really just get turned off. It goes through a deprovisioning cycle, where we scrub it and get it ready for the next person, and that deprovisioning cycle takes some time, depending on the size of the machine.
A: Temporarily consuming all available... oh yeah, yeah.
B: Yeah. That's just an artifact of how things actually take time, and they may take time invisibly to you, but, you know.
A: You know, so, yeah: we've tried for a while, and we've written doc after doc of "here's the things we want to test" and this kind of situation, and, for my part, I'm either just kind of burned out on it or have just acknowledged that we really just need to start. And then...
A: I expected that. So, I guess, in the best of ways, I hope that there are enough people involved and so intrigued by what's going on, people learning so much, that something like this is something we need to be very much paying attention to. I hope that we get that far, I guess, is what I'm saying. Yeah, well, there's...
B: You know, the process of just getting to the point where you can get automated test results: I don't want to minimize how hard that work is, but it's valuable once it's done.
A: Yeah. That speaks in part to why that issue is so old: it's an acknowledgement that we should not start until we think we've got it all under a single, you know, click, and then we'll enter into Equinix's world and work through a smaller set, hopefully a smaller set, of automations. It really is part of the thing, just for your own context, as we go to start this relationship.
A: So, for context, I suppose: this particular project is Service Mesh Performance. There's a specification by which we provide a common way to capture and articulate the performance of your cloud native workloads that are running on a mesh, and consequently running on Kubernetes, and consequently on whatever's underneath, and to help espouse best practices. You know, there are a lot of different technologies.
A: We won't go through it all, but part of the goal here is to engage in this lab on some sort of a cadence, I have no idea what cadence, to publish and to track. And part of the goal here, the reason for a lot of the tooling and the investment into the tooling, isn't just what we said; it's also, for each of the service mesh projects that get measured...
A: ...they may well know better than we do that there's a certain tweak in the config that really makes a difference, and they may want to be in control of the reporting of these. So part of our goal is to hand off, or let others come in with that tooling and sort of self-report. So, part of the work that we engage in here with respect to self-hosted...
B: I would think that there's probably some tension between everyone wanting to work together towards a common goal and everyone wanting theirs to be the best: that sort of usual tension, right?
A
Our
part
of
my
specific
goal
is
to
help
them
is
to
provide
a
vendor-neutral
venue
in
which
each
of
them
can
highlight
their
strengths.
It'll
be
more
about
highlighting
strengths
than
it
is
about
you
know
and
and
yeah,
and
so
so
I
so
there's
a
couple
of
other
there's
a
couple
of
other
folks
that
there's
a
couple
on
the
phone
who
are
interested
in
being
involved
and
there's
a
there's,
a
collection
of
folks
that
are
maintainers
of
this
project
and
kind
of
dedicated
to
seeing
through
the
publication
of
those
reports.
B: Yeah. From a benchmarking perspective, we're generally speaking fine with having benchmark results on our infrastructure published; you don't have to get special permission or agree to various things. We're not protective about that.
B: Probably the hardest part of this that I can see is if you end up with any dependencies on very specific kernel versions or very specific driver versions, understanding that some elements of network performance sit at the high level of just what the software is doing.
B: For prototyping, something that should run everything you need to do: we have a small version and a medium version, in Intel and AMD flavors. The n2 system has quad NICs. If you scroll down a little bit more, in the grayed-out ones, the n3.1, I think, is going to be an interesting machine, because I think that has quad NICs as well.
A: Very good. I'll suggest this: like I said, hopefully this isn't the first or last time that we're speaking, so let us be good citizens to the extent that we can. And actually, I'll say this: initially, I think... well, boy, there's a lot.
E: I just want to bring something up, because I realized before: when you were trying to offer the context to Edward, maybe you wanted to show what you intend to measure on the performance side, like system to system or service mesh to service mesh? Because I think the system-to-system one seems to me compatible with the setup that Equinix has, with the ramp-up of the load, yeah, when you are presenting the performance results. Yes.
A: Yeah, one of the things we're really heavily focused on is that we would spin up a Kubernetes cluster, put some nodes in there, deploy a service mesh on the cluster, deploy a sample application, and grab at least one, maybe multiple over time, load generators.
A: Sometimes we run them at the same endpoint, like the same microservice, sometimes at separate microservices; track how much load we're generating over a certain time; measure the latency with which responses are coming back...
A: ...measure the throughput with which responses can be sent and received; all the while trying to collect some standard node-level statistics about CPU and memory, some fairly common stuff there. And there are various levels at which to take those measurements.
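
The project's own harness drives dedicated load generators such as fortio, but as an illustration of the measurement loop being described, a minimal single-threaded sketch might look like this (the target URL and parameters are placeholders, and a real run would use a proper load generator rather than sequential requests):

```python
# Sketch of the measure-latency-and-throughput loop described above:
# hammer one endpoint for a fixed window, then summarize. The URL and
# parameters are illustrative, not the project's actual harness.
import statistics
import time
import requests

TARGET = "http://sample-app.default.svc.cluster.local/"  # placeholder
DURATION_S = 60

latencies = []
errors = 0
start = time.monotonic()
while time.monotonic() - start < DURATION_S:
    t0 = time.monotonic()
    try:
        requests.get(TARGET, timeout=5).raise_for_status()
        latencies.append(time.monotonic() - t0)
    except requests.RequestException:
        errors += 1

qps = len(latencies) / DURATION_S
p50, p95, p99 = (statistics.quantiles(latencies, n=100)[i] for i in (49, 94, 98))
print(f"throughput: {qps:.1f} req/s, errors: {errors}")
print(f"latency p50={p50*1000:.1f}ms p95={p95*1000:.1f}ms p99={p99*1000:.1f}ms")
```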
A: So, within Equinix, either through the API or if there's an easy path to some of that hardware-level telemetry, or something more fine-grained... Maybe it's just that what you commonly find is people in Kubernetes land deploying a Prometheus node exporter or something, and there are examples of that. You know, I have zero opinion on almost everything to start with, because it kind of just doesn't matter.
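
On the node-statistics side, a node exporter scraped by Prometheus can be sampled over the standard Prometheus HTTP API; a small sketch (the Prometheus address is a placeholder):

```python
# Sketch: sample per-node CPU utilization from Prometheus while a test
# runs, using standard node_exporter metrics. Address is a placeholder.
import requests

PROM = "http://prometheus.monitoring.svc.cluster.local:9090"  # placeholder
query = ('100 - avg by (instance) '
         '(rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100')

r = requests.get(f"{PROM}/api/v1/query", params={"query": query})
r.raise_for_status()
for result in r.json()["data"]["result"]:
    print(result["metric"]["instance"], "cpu %:", result["value"][1])
```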
A: There are others involved who will come in with particular goals in mind. My goals are much more about a static, stable, consistent setup.
A: My goals are much more about constraining the hardware, having it be static, and then really evaluating the various configurations that we can do: different service mesh types, different configurations of those meshes. We'll inject some faults at a software level and watch as the microservices respond or don't respond. That's for some of the other folks involved, from Intel or elsewhere.
B: What I remember from it was: we try as hard as we can to have all nodes of the same type be as identical to each other as possible. The reality of the world, even before the supply chain issues came up, the reality of the chip world, is that there can be differences...
B: ...between two identical machines that have the same SKU. We saw this with...
B: ...I don't know, it was the stepping on a CPU, or something so minor that no one should notice, but we had someone with very precise measurement tools who was able to tell us that we had two machines that were "the same" that were different. So keep that in mind as you look to getting results that are repeatable: results that you can run over and over again and get the same answers, or answers within tolerances.
A: For single data center configs, is that... okay, I guess I'm answering my own question: it looks like I can just see the docs here for a description of the layout.
B: There might be some small amounts of variance depending on exactly which systems you get, even within the data center, because there are going to be a few more meters of fiber between the two. I don't know if your measurements are going to be sensitive enough to notice a few hundred nanoseconds, or a millisecond, or a couple of milliseconds here or there, but just know that there's some, you know...
A: Yeah, there may be some variance there, yeah. And to your point: having a control, what do you call it, a control group, a control node, a controlled test, something to at least speak to the confidence of the results. That is in part why we might run some things for an hour, or why some of the soak tests might be among the different types of tests that we would run. We might...
A: Yep, you're right; those are all things that I hope we really care about later. Some of the folks who are participating have the world's best and finest instrumentation for hardware-level measurement, and they're not on this call, you know, so we'll see if they try to bring some of those to bear. Some of them represent the hardware vendors, the AMD and Intel we were speaking of; they have, you know...
B: We have good relationships with both AMD and Intel, and have done early access programs with those folks and gotten some of their newest hardware. So, at whatever point of the cycle you're doing this, if you find a need to get deep into the weeds on stuff, we may be able to provide some resources as well.
A: You know, for, like, this Crossplane integration: did you know the folks at Equinix who were involved in that partnership, or that relationship, or...
A: Very good. Part of the reason I ask is that it goes back to what I was saying earlier about the work that's gone into the tooling. When we focus that conversation on a GitHub action, okay, fine; but the tooling that's inside the action actually uses some of the same tech that Crossplane does. That tooling is a CNCF project, Meshery, and so, okay...
A: You could refer to it in various ways, but it's an orchestrator of the ten-plus service meshes. It has a load generator inside it, it has kind of instrumentation inside it, and, to date, it keeps track of, you know, the various... So the goal of the project has been to empower people to run these tests themselves and report them, and then for us to share with the world, hopefully to be able to inform people:
A: ...here's, if you're running it like this, do that. You know, we want to try to espouse best practices. And the reason I note that is, well, reading into the future is possible: speaking of empowering others to run these tests, or empowering the other projects to run these tests, there might be enough interest that Meshery makes its way here.
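
For illustration only: Meshery's CLI exposes performance testing through a `perf` subcommand, so a runner job might shell out to it roughly like this (the profile name, URL, and flag values here are assumptions rather than confirmed syntax; check the current mesheryctl documentation):

```python
# Sketch: kick off an SMP-style performance test via mesheryctl from a
# runner job. Flag names and values are illustrative assumptions.
import subprocess

subprocess.run(
    ["mesheryctl", "perf", "apply", "smp-baseline",   # profile name (hypothetical)
     "--url", "http://sample-app.default.svc.cluster.local/",
     "--concurrent-requests", "10",
     "--qps", "100",
     "--duration", "1h"],
    check=True,
)
```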
B: Yeah, and this set of integrations is work that we've been actively doing and supporting. Each of the projects evolves at a certain pace, some faster than others, some easier to work with than others, but we're really committed to providing that cloud environment on top of bare metal. So whatever experience you have in doing this stuff, we're interested in it, and in helping it forward. Nice.
A: Nice. Well, Edward, very nice to meet you. I think, yeah...
A: I think you will lead the way with respect to where to get started. You were alluding to an account, where to get started, the how-to. Some of the info you'll probably share is about how to go about either requesting a couple of nodes, or identifying when those are freely available and grabbing a couple, or where the inflection points are in terms of pre-planned activities.
B: Yeah, so the CNCF infrastructure itself is managed by the CNCF; Ihor has been lead on actually doing account setup, like who needs to have access, and making sure they get information. So, to the extent that you know what you need, he can help very straightforwardly get access set up.
B: My guess is that this is probably a two-part or three-part ask, the first part being just, hey, let's get some infrastructure up so that we can test the tooling. And then, once the tooling is proved out, there's the sort of bigger ask that says, well, we know that it works...
B: ...how long do we want to run them for: the second stage of the experiment, once things are working. But based on all I know of our budgets and access and whatnot, asking for, I don't know, something on the order of four machines, five machines, something like that, all in one data center, of some reasonable type that you could prove things out on, that you'd hold for a long time: I'd check that before turning them on, to make sure I don't come back to you three weeks later saying, hey, those machines that you had, we need them for a customer. But yeah, I'm happy to run point on them. Nice, nice.
A: Edward, I think that takes care of... actually, honestly, Edward, I think I've probably got a two-year-old account, from when Dan Kohn was here; he was pretty quick to try to get us in, and a lot of time has passed.
A: So, very nice to meet you, yeah. We'll make a mess of things, I'm sure.
B: Yeah, and we're always really interested in seeing people who are doing state-of-the-art stuff do it on our platform, right? That's a perpetual interest. So, you know, I'm very interested to see what you can prove out, and also just to understand your results. Nice.
E: No, no, no, I really enjoyed the presentation, so thank you very much for having Edward, and for allowing me to be in this meeting. I had no clue that Equinix has so many...
E: ...let's say, so many types of provisioning. For me it was really cool; the presentation was really cool. So thank you very much.
A: Last question, Ed: Tinkerbell. Am I making the right mental association between Tinkerbell and Packet and Equinix? Okay.
B: Yeah, so Tinkerbell is a provisioning tool based on our production core stuff, but it's open source.
B: It handles the low-level PXE boot, you know, the power-off-to-initial-operating-system mode, and there's a whole stack of stuff associated with that, including a language for doing fast boot of systems that we've been working really hard at: just, how quickly can you get a system from scratch to running?
A: Yeah, okay, good. There's a startup here, I'm in Austin, Texas, and there's RackN here. You're familiar with those folks?
B: Yeah, some of the same space: RackN. Red Hat has been working on a project called Metal Kubed, or Metal3, that sort of fits in the same space to a certain degree; Canonical's Juju kind of fits in that same space. A lot of people have rolled their own sort of thing. We...
B: We really wanted to open source some of the work that we had done, because I think some of the control plane for that is novel, and there are some more cloud native elements of the whole thing that are, you know, new to this space.
B: I don't want to say "rickety," because that would make it sound maybe worse than it is, but there's a bunch of code that runs for the first couple of minutes of every server, and most people in the world don't see that code, and so most people in the world don't care, as long as the server comes up. By nature of what we do, we care about that a lot.
A: Well, I guess actually one other question, and that is: I participated in creating Redfish, kind of Redfish v1. Is that underneath the IPMI that we were speaking about earlier? I just ask out of curiosity.
A: Do you know, did Redfish become a popular thing? Do you know what those IPMI interfaces adhere to?
B: Yeah, Redfish, I mean, Redfish, I think, is a thing. I don't live in that world as much as some other people, but I've heard it come up. We have part of our engineering team doing a bunch of work with the BMC, including a project called OpenBMC, which is basically trying to let the end user, someone like us, control that system.
B: Yeah, finding people who even know that that stuff exists, let alone have opinions about it... you find some very smart people at that level, at that layer of the stack.
A: So, well, very good. Ed, thanks so much for the time, yeah.
B: Thank you. And, you know, reach out to me in whatever way is best with any questions; happy to set up a call if that helps.
B: If you have questions that come up along the way in terms of integration, I'm happy to bring in some of the people who wrote the code on our side to guide you along the way. But this sounds like something that overlaps with things that have been done before, so I don't anticipate any brand-new problems. Nice, yeah: just tired, old problems, things we may have already run into before. So, thanks. Yeah, I'll...
A: ...I'll go about introducing a couple of other folks, probably by email, and we'll jump on Ihor's back and see if we can generate some load.