From YouTube: CNCF SIG Network 2020-09-03
A: Nice. Yeah, you know, since we have to do the obligatory waste of the first three minutes or so, I'll share this: yesterday I tried a... well, it was like a chili popsicle. I think it had cayenne pepper and some pineapple chunks, so yeah, I was really out there on a limb.
A: Jim, Jim, there's... oh yeah, it is blue shirt Thursday. The thing is, this is like a pre-KubeCon type deal. This thing used to... back when this thing used to fit. Now I can barely...
B: I just ordered a bunch of shirts which were one size bigger, and a bunch of labels which are one size smaller, stitched in. You mean it's exactly the same size? I've never not changed.
A: All right, welcome all. We're gonna get rolling in about a minute. The meeting minutes are posted in the Zoom chat; be sure to jump in and plop your name in there, if you would.
A: Well, very good. Amy is with us, I believe Ken is with us, I think... yep, and Mr. Klein is there, and we're about four after.
A
Let's
get
rolling
thanks
everyone
for
for
coming
today,
it's
nice
to
have
a
few
of
you
fresh,
so
good.
So
if
you
haven't
been
on
a
cncf
sig
network
call
before
or
really
a
cncf
sig
call
this
one's
not
much
different
than
the
rest,
and
we
do
adhere
to
well,
hopefully
adhere
to
cncf
cultural
values,
but
but
also
just
in
terms
of
recording
these
calls
and
posting
them
publicly.
A: We do that. We ask that you be respectful on the call. I'm excited about today's agenda.
A: It's actually most of the times that we get to meet that I get to be excited. A lot of times we're looking at really interesting projects, nearly all of which end up inside of the CNCF at various levels, and there's a number of you on the call that represent those very projects.
A: And so, just also: at this last KubeCon EU we gave an intro and a deep dive, and Matt Klein and Ken Owens, who are co-chairs on the SIG Network, have helped give those. I think we've given them a couple of times now.
A: So you'll probably hear me say that a couple of times. Last time that we met... so we missed a beat, I think, because the last time we were going to meet was during KubeCon week, and during KubeCon week you just KubeCon. But the time that we met before that, we had discussed a couple of times the notion that there were a number of topics and work streams forming specific to service mesh topics, and I think we discussed what some of those were on this call, some of them.
A: We want to discuss what those look like today, and part of the charter of the service mesh working group. Last time we met, we had agreed that our core agenda for the SIG Network had dwindled down enough to leave space to use this same meeting time for the service mesh working group. That's subject to change based on future needs to do project reviews or other topics that come up for the SIG Network in general.
A: We want to introduce that today and talk about some of the work streams. Really, an overarching theme here is to solicit interest in the projects that are presented, as well as potentially projects that aren't listed. So, Matt or Ken, anything that we want to note before we take a look at the service mesh working group slides?
A: Fair enough. All right, so, hey, the service mesh working group. By the way, the call that we're on right now has few enough people that I'm hopeful it becomes pretty interactive. So please don't treat this as a formal call. Anything you say will be recorded and posted, but that's not the point. The point is: thank you for coming today. If you like what you see, please come back, please express opinion.
E: Yeah, quick question: so you're just not going to give a monologue, then?
A: Yeah, no. Thanks, Steve. Yeah, so we're fortunate that, of the three, four-ish projects that we'll talk about today, you guys will hear from others and not just me, which is nice.
A: Moreover, as they come up, please just interrupt like Steve did. So there's a couple of things around service meshes that a collection of you have been either asking about or working on, and we're trying to help uplift those projects, those efforts: shine a light on some of them, and bring others to bear on them, to influence them. So, to Steve's point: not just questions and comments, but influencing and directing. And of the projects that we're about to talk about, we could either refer to them as projects or maybe work streams.
A
I
think
today
today
we're
gonna
they'll
be
introduced
and
they
won't
really
be
advanced
because
it'll
be
about
introducing
them
and
and
making
sure
that
people
are
understanding
that
their
vision
and
and
whether
or
not
you
want
to
get
involved
and
do
things
there.
So,
at
the
end
of
the
call,
we'll
do
again
kind
of
a
call
for
interested
parties
and
we'll
figure
out
how
to
how
to
have
times
in
which
we
can
go
fairly
deep,
a
lot.
A: A lot of things can be done asynchronously in terms of advancing these projects. There's some commonalities across them. One of those is the notion that the CNCF lab is an excellent resource, particularly as we look at anything at scale, doing tests at scale. A lot of times that has to do with performance, and that's, I don't know, arguably an underused resource. Maybe it's been well used in a number of instances.
A: A lot of very interesting analyses have come out of the use of those labs, so I expect that those labs will be used for a couple of the projects here, ideally those labs and the analysis of them. A lot of times these are point-in-time things: software changes, and so should you maybe run a test again, or do an analysis again, to the extent that people consider it warranted.
A: So I wanted to call out that resource. And part of the goals of these initiatives are, in fact, to publish a few things, whether that's analysis or maybe service mesh patterns.
A: So this, as a topic unto its own, is of personal interest to me, and I know it's of personal interest to others that are on this call.
A: If folks are familiar with Paul Bower of Microsoft: he had been really interested in the space before, and had been trying to help organize an effort around identifying patterns, documenting them, sharing them in a vendor-neutral way. And I'm really biased: I think that this is a great venue for vendor-neutral stuff. I've been pinging him; I think that effort might have puttered out, but I'm hopeful that it will pick up.
A
I'm
hopeful
that
any
number
of
you
or
others
will
participate
in
this.
This
is
to
say
that.
A: If they were, we would probably have fewer in the world, and I, for my part, anticipate that we will have fewer in the world at some point. I would expect, like a couple of others on this call, that before this year is over there will be more in the world, not fewer.
A
The
call
to
action
here
the
call
to
interest
here
around
patterns
is
to
help
achieve
part
of
the
charter
of
the
cncf
sig
network,
which
is
to
to
inform
broadly,
and
that
would
in
this
case
this
is
informing
by
providing
reference.
I
don't
know
what
are
the
word
to
use
other
than
patterns?
I
guess,
but
like
references
of
common
uses
of
this
tech
and
the
patterns
by
which
they're
used,
I
know
well
nick
jackson,
you're
on
the
call
here.
A
I
know
this
is
of
kind
of
a
particular
focus
for
you
and
to
not
make
this
a
monologue,
a
different
way
that
you
would
characterize
this
or
maybe
certain
examples
you
might
give
to
help
get
people's.
B: You're trying to maybe smoothly balance the routing between different services. You kind of want to implement, I don't know, like layer-specific routing. You want to do things like managing load balancing to caches.
B: You want to be able to handle, reliably, the unreliability that you get from distributed systems and networks and dynamic systems in general. And the core thing about that is that we're all doing the same things: we're trying to do canary deployments, we're trying to ensure that there's blow-off of, you know, like a pressure cooker safety valve, circuit...
B: ...breaking on our services, so that we don't end up with critical cascading crashes. We're trying to balance traffic across regions, we're trying to fail traffic over to regions and manage it across multiple different clouds. And I think those patterns really can be distilled down. I mean, there's probably quite a few of them, but you have that sort of commonality, and it doesn't matter what industry you're working in or which service mesh you choose: there is that commonality.
B: Now, I believe that by educating people on the patterns of use, it really helps people move forward with distributed systems, and I'm very, very, very passionate about this.
A: So, on that particular topic, it's a general call for interest and participation. So do signal if those are of interest: whether you're wanting to help produce those, or identify them and work through them, or just be a consumer of them and provide feedback. There'll be sort of ongoing discussion and work there, and that's in part because there's a lot of service meshes. Nick, you and I were talking about this a little bit earlier. Do you want to... yeah.
B: So things like SMI are trying to do that. It's trying to take a rational approach which says: look, instead of this very specific grammar for controlling traffic routing on brand X of service mesh, and a very different method for brand Y, what we're going to try and do is consolidate a consistent practitioner experience and operator experience. It's an abstraction, an interface layer into the underlying implementation, and SMI has kind of grown around that thinking.
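As a concrete illustration of the abstraction Nick describes, SMI defines mesh-neutral Kubernetes resources such as TrafficSplit. A minimal sketch (service names here are hypothetical; only the resource shape follows the SMI spec):

```yaml
# Shift 10% of traffic from v1 to v2, regardless of the underlying mesh
apiVersion: split.smi-spec.io/v1alpha2
kind: TrafficSplit
metadata:
  name: website-canary
spec:
  service: website          # root service clients address
  backends:
    - service: website-v1
      weight: 90
    - service: website-v2
      weight: 10
```

Each participating mesh translates this one resource into its own native routing configuration, which is the "consistent practitioner experience" being described.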
B: But Service Mesh Performance is another working group, which is inside the CNCF, which is looking at how you can describe and manage mesh performance, and we'll look at that a little bit on the next slide. But then you've also got connectivity. Now, CNCF's own statistics, and many other statistics, including Gartner's, are showing that a high number of people are operating in a multi-cloud world, or they're operating heterogeneous environments. And they do that for a number of different reasons.
B: They do it through acquisition, because they want to develop a choice, because they want to be able to hedge their bets across multiple clouds, or take advantage of various different costs. But the key thing is that we want to be able to connect all of that together. So VMware has a specification called Hamlet, which is an open source specification currently looking at the CNCF, and the idea around that is a common interface method to manage things like catalog synchronization between the different meshes, and identity federation.
B: So it's going to be really nice to hopefully see a standard at the forefront, with Istio and Linkerd, Consul, VMware Tanzu Service Mesh, and Kuma all being able to integrate together. Why does that benefit? It benefits the practitioner, and it'll also benefit the vendors, because they're no longer being constrained on integration: people can choose the right tool for the right job. Pop up there...
B: Next one, yeah. So SMI Conformance is a project which has just been picked up by the CNCF, and the intention around this is to be able to say: right, we're going to bet on SMI as the method of defining an interaction with a service mesh, and it's actually important to understand which of the service meshes adhere to the various different capabilities of SMI. So, for example, policy-based routing, or just kind of traffic...
B: ...splitting, traffic routing. So SMI Conformance is going to be a project which looks at that, and it'll be able to run automation against the service meshes which are subscribing to be included in there, and also to be able to say: right, this particular mesh implements these features, it implements those. Again, it's about the ability to provide consumers the ability to make the correct decision for them, and to do so in an easy way.
B: Layer5 and Meshery are kind of working around this particular tooling to facilitate that, and it's pretty exciting. I think it's a really great thing to be able to benefit SMI as well, and hopefully promote and push that standard.
B: Two different things, yeah, sorry. So Interface Conformance is basically: does this mesh implement this particular interface? SMP (can you flip a slide there, Lee?) is all about providing a standard way of measuring the various different outputs and performance capabilities of the service mesh.
B: Now, why do you want to do that? Well, benchmarking is one thing, but there's a number of reasons why you need to be able to benchmark. You want to be able to benchmark to understand a change, or a potential change, that you're going to make to a system. I think one of the things that will take time, when you think of being educated, is that, you know, a service mesh is not free.
B: Now, I can't give you numbers, but potentially you could, if you're running layer 7 right through your stack, doing things like inspecting HTTP headers, kind of double-buffering a lot of that request information into memory as you process it, etc. That takes CPU, it takes memory. You could find that switching on layer 7 inspection across your entire service mesh increases your CPU and memory counts, and overall resource consumption, by 10 percent.
B: Now, as an operator, you want to be able to make a decision based on that, because increased consumption is increased cost. Do you really need that capability right through the network, or do you just need it in bits and pieces? One of the goals of Service Mesh Performance is to enable people to better make that choice. For vendors, what vendors can do is...
B: ...vendors can leverage Service Mesh Performance to run things like regression tests across the various different versions of their software, which benefits them: it helps them keep on track and ensure that they're keeping performance up, and all of the tooling will be there for them to do that. But it also benefits the consumer, because that index, that benchmarking, can be used to educate. Now, again, one of the goals of SMP is to compare apples and apples.
B: The other thing that SMP is going to enable, we hope, is a variety of plug-in ecosystems: the ability for the likes of, let's say, Datadog as a SaaS platform to provide specific service mesh metrics, and to do so by just consuming an interface which implements the performance specification. And, yeah, the hope around this, again, is a universal performance index which can gauge efficiency and efficacy.
B: It's important for the consumer to have the choice. They want to be able to make a balanced decision between feature and speed. Not everybody has exactly the same requirements; not everybody has the same requirement in the same application, really. So: big hopes for SMP, to be able to make some headway and get some standardization in that space.
A: As people digest that and formulate questions and comments, I'll toss in this perspective, a perspective in general, that I think a lot of times we see in our industry: that infrastructure gets somewhat commoditized. And from my myopic perspective of service mesh, if you're looking at a data plane, a control plane, a management plane, those lower planes would get commoditized kind of over time.
A: As a matter of fact, efforts that some of you on this call are directly helping with, in and around pluggable filters, whether those are WebAssembly or native to the project, mean that you can ask even more of a service mesh, and it can be even more dynamic, to the extent that those are dynamically pluggable.
A
The
ability
to
for
for
people
for
us
to
use
common
nomenclature
and
a
way
of
sort
of
exchanging
a
format
to
say
that
you
know
to
to
discuss
how
much
it
costs
to
run
a
more
highly
intelligent
piece
of
infrastructure
or
how
much
you
are
saving
in
terms
of
the
time
that
it
would
have
taken
to
perform
that
task.
Otherwise,
or
from
my
perspective
it.
This
actually
becomes
more
important
over
time
as
data
planes
are
fairly
powerful
today
and
potentially
get
even
more
so
going
forward
comments
on
this
questions
on
this
spec.
A
When
you
hear
about
it
on
the
surface,
I
think
I'm
highly
complementary.
The
two
in
so
much
as
one
smi
facilitates
a
standard
interface
for
invoking
for
describing
a
traffic
split,
for
example,
while
the
other
one
provides
a
standard
way,
a
centered
unit
of
measure
of
that
traffic,
splits
performance,
so.
D: Okay, hey, so I took a quick look at the SMP site that's linked on this page, and so: is it not part of CNCF today? Is it a standalone effort? Because it shows, you know, "contributors: CNCF", which makes me think, okay, so it's an independent project, not part of CNCF. Can you, I guess, help me position it a little bit, to understand that?
A
Yeah
good
good,
good
question.
The
hope
is,
is
that
in
a
couple
of
weeks
this
becomes
a
cncf
project
in
part
yep,
so
it's
not
so
to
be
concise.
It's
not
today,
and
those
partners
that
you
see
listed
are
in
agreements
that
that
we
want
to.
We
want
to
bring
it
over
here
god,
I'm
in.
A
And
that
it's,
I
don't
know
how
you
quite
gage,
this
it's
relatively
young
in
its
life
cycle
of
its
development.
You
know,
I
guess
that
part
of
it's
a
concept
has
been
around
for
quite
some
long
time.
This
got
got
started
really
really
in
the
istio
performance
and
scalability
work
group
as
a
I
don't.
You
know,
wasn't
called
this
then,
but
but
there
was
an
acknowledgment
that
such
a
thing
would
be
really
helpful.
A
This
is
kind
of
rolled
into
something
that
we
can
give
a
term
to,
and
and
hopefully,
roll
into
the
the
cncf
and
have
broad
participation,
and
I
think
part
of
the
goals
here
are
that
if
there
is
value
found
which
that
that
this
unit
of
this
common
way
of
measuring
a
common
way
of
describing
the
environment
and
what
you're
doing
will
be
that
you
would
find
either
implementations
of
it
in
each
participating
service
mesh
or
that
there's
a
canonical
implementation
of
smp
today
inside
of
the
meshery
project,
which,
hopefully
would
would
go,
the
same
route
would
come
into
the
cncf
shortly
as
well
that
either
the
service
meshes
themselves
are
implementing
this
spec
or
that
they're,
maybe
they're
running
measuring
in
their
pipelines
to
be
able
to
perform
some
of
the
things
that
they
could
just
said.
A
Around
regret
regression,
analysis
of
performance
with
each
each
build
or
with
each
release,
and
doing
so
in
a
consistent
manner.
Yeah.
A: The overlap being, in the best of ways, like the very simple example: in SMP, if you're going to say, hey, the service mesh that's being measured is Kuma, as a random example. SMP currently is a collection of proto files, and in there it has names of meshes. If there were an SMI proto file, which there isn't, but if there was a common way of describing the fact that this thing represents Kuma: great.
A: Go ahead, Kevin.

C: Thanks. Does the Service Mesh Performance toolset currently make use of SMI's metrics implementation? Or is that a planned thing, or do the two just not meet at all?
A
They
do
actually
you
kind
of
see
nick's
head
going
up
and
down,
which
is
a
little
bit
to
the
example
that
I
was
giving
around
like
traffic
splitting
is,
that
is
that
one
one
one
configures
the
environment
and
the
other
one?
Can
you
know
measures
it?
The
of
what
s
p
is
today
is
a
is
a
specification
or
it's
a
there's,
a
reference
implementation
of
that
specification
and
measuring
tool
and
mesherie
does
implement
smi
it
also
it
does
both.
A
It
speaks
to
the
service
meshes
directly
as
needed,
but
also
leverage
smi,
to
the
extent
that
it
can
doesn't
matter
if
yeah.
A
As
a
matter
of
fact,
I
think
that
answers
it
yeah
and
actually
sort
of
to
to
nick's
prior
the
smi
conformance
bit
of
the
work
stream
like
yeah,
but
mesher
is
very
much
so
aligned
with
the
goals
of
smi
in
terms
of
helping
helping
validate.
E: So, Lee, a quick question about SMP: who submits the performance numbers? Is it the SMP working group, or is it a work stream, or is it a project? Who does the actual calculation of the benchmark?
A
S
p
itself
a
collection
of
proto
files,
the
implement
the
first
implementation
of
it
has
been
in
mesherie
meshery
as
a
tool
will
well
here's
a
kind
of
a
good
example.
If
we
can
use
this
example.
So
when
measuring-
and
this
is
the
the
each
of
these
projects
by
the
way
they
are
intentionally
like
mid-flight
or
they're
being
presented
mid-flight
so
that
folks
can
influence,
and
so
I
caveat
that
to
say
that
the
way
that
meshri
is
providing
conformance
today
is
it's
running
a
suite.
It
asserts
a
bunch
of
tests.
A
It
makes
a
bunch
of
assertions
runs
a
bunch
of
tests.
Provisions
up
to
eight
different
service
meshes
tries
to
ascertain
whether
or
not
they're,
compliant
with
the
smi,
spec
and
bundles
up
those
test
results
and
will
and
and
has
the
ability
to
persist
those
send
those
off
remotely
or
just
persist
them
locally,
and
so
I
use
that
as
an
example
of
kind
of
the
same
way
in
which
it
has
it
implements.
Smp
is
that
at
and
actually
the
next
discussion?
A
Actually,
maybe
this
is
good
to
roll
into
the
next
project,
because
there'll
be
a
demo
of
this.
The
way
that
meshri
implements
smp
is
to
describe
the
environment
capture
the
do
the
thing
that
s
p
does,
but
also
to
run
load
tests,
collect
the
results.
Do
some
statistical
analysis
and
it'll
have
it'll
collect
that
test
result
in
an
smp
described
format
which
it
can
also
send
back
and
persist
as
as,
hopefully,
both
s
p
and
mesherie
go
into
the
cncf?
A
One
of
the
things
that
meshri
has
been
doing
is
for
those
that
have
been
running
it
and
again.
Our
hope
is
that
each
of
the
service
meshes
that
find
value
in
it
will
run
it
in
their
pipelines
that
it
would
not
only
set
transmit
back
smi
conformance
for
of
that
mesh,
but
also
send
back
performance
tests
or
ssmp
formatted
test
results,
so
they're
in
live.
A
I
think
part
of
the
answer
to
your
question,
which
is
like
hey
who's,
one
of
the
things
that
the
as
a
project
has
been
and
and
a
lot
of
people
have
asked
like:
hey,
where's
where's,
the
performance
analysis
like
where's
that
paper
published,
and
the
group
has
been
really
hesitant
to
do
that.
Because
you
a
lot
of
times,
you
end
up
making
an
ass
out
of
every
out
of
yourself
and
everyone
else,
because,
and
rather
we
try
to
give
people
tooling
to
let
them
go.
A
Do
the
analysis
themselves
as
we,
you
potentially
use
the
cncf
lab
to
run.
Some
of
those
analysis
analyses
we
would
want.
We
would
call
for
participation
from
each
of
the
service
meshes
to
ensure
that
things
are
configured
in
the
right
way
that
we're
getting
as
apples
to
apples
as
is
even
possible,
which
isn't
entirely
possible,
but
that
rather
it's
the
service,
mesh
manufacturers
or
the
projects
themselves
that
are
empowered
with
the
same
tool
using
the
same
common
format
to
send
in
those
reports
or
keep
the
reports.
A
If
they
want
to
you
know,
or
both
thanks
lee
yeah
good
good,
well,
good,
let's
get
it
and
actually,
I
hope
that
there's
a
little
bit
of
a
demo
here
that
will
help
follow
on
steve's
questions,
so
kush
there's
some
distributed
performance
analysis
that
you've
been
working
on
in
combination
with
a
couple
of
the
envoy
nighthawk
maintainers.
Do
you
want
to
tell
folks
about
this.
F: So the problem was that many performance benchmark tools, or analysis tools, are limited to single-instance load generation, or single load generators. This limits the amount of traffic that can be generated to the output of the single machine that the benchmark tool runs on, in the cluster or out of the cluster. Distributed load testing in parallel was just a challenge when merging results, and we need to maintain some of...
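The merging problem Kush mentions is why distributed generators typically exchange raw latency histograms rather than computed percentiles: percentiles from separate runs cannot be averaged, but histograms can be summed bucket by bucket and the percentile recomputed over the merge. A minimal sketch of the idea, with made-up bucket counts (illustrative only, not Nighthawk's or Meshery's actual code):

```python
from collections import Counter

def merge_histograms(histograms):
    """Sum per-generator latency histograms (bucket bound in ms -> count)."""
    merged = Counter()
    for h in histograms:
        merged.update(h)  # Counter.update adds counts bucket by bucket
    return dict(merged)

def percentile(histogram, pct):
    """Smallest bucket bound covering at least pct% of all samples."""
    total = sum(histogram.values())
    running = 0
    for bound in sorted(histogram):
        running += histogram[bound]
        if running * 100 >= total * pct:  # integer math avoids float edge cases
            return bound
    return max(histogram)

# Two generators, 100 requests each; generator B saw a slower tail.
gen_a = {1: 50, 2: 40, 10: 10}
gen_b = {1: 10, 2: 30, 10: 60}
merged = merge_histograms([gen_a, gen_b])
p90 = percentile(merged, 90)  # true combined p90
```

Note that generator A's p90 alone is 2 ms while the merged p90 is 10 ms: averaging the per-generator percentiles would badly understate the tail, which is exactly why the raw data has to be merged.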
F: So we carried this project forward. The project was proposed as a Google Summer of Code idea for the CNCF, and Summer of Code acted as a catalyst to execute the project. The project doesn't only enable us to have distributed performance benchmarking: as we know, different microservices behave differently under different workloads and exhibit different signatures, so the project will also enable us to understand how different microservices exhibit those characteristics under different workloads.
F: So for the project we collaborated with the Nighthawk maintainers. Nighthawk is a layer 7 performance characterization tool, created by the Envoy team, and hopefully it's going to support distributed load generation soon. And we took Meshery, which is a service mesh management plane, and which currently supports wrk2, fortio, and Nighthawk as single-instance load generators.
F: You will also have the ability to process the results, and you can compare different results and benchmark analyses with each other, against the Service Mesh Performance spec, which Lee was just talking about and Nick Jackson explained briefly. So in the Meshery results we have also implemented a canonical implementation of the Service Mesh Performance spec.
F: So, let's just quickly run my test here. We need to specify a URL. The different load generators behave differently with the DNS entries, and the IP versions of the DNS entries, of the URLs which we have given for the test, so different load generators sometimes may give different results and different benchmark analyses.
F: So here is the result which we just got from the load test, which we ran on the website google.com using the Nighthawk load generator. If I navigate into the results tab, I can see there are a variety of results, and I can select some of the results and see a quick comparison between them.
A: Yeah, that's very nice. One of the things that I noticed in your demo, Kush: you hit a server, an endpoint, that wasn't on a service mesh, which is maybe a good call-out, because that's one of the first things that people want to understand: hey, what are the performance characteristics, or the differences, between running my service on the mesh and off the mesh? Which is kind of nice to be able to do.
A
You
were
noting
some
of
the
differences
in
the
well
algorithms,
I
guess
is
what
I
would
say
the
statistical
analysis
that
each
of
those
load
generators
use
a
bit
of
the
a
difference
in
the
way
in
which
they
might
generate
load
as
well
boy,
I'm
going
to
forget
the
actual
the
term
here.
So
none
of
those
load
generators
are
the
type
of
load
generators
that
academics
like
to
use.
A
As
a
matter
of
fact,
mr
sahu
pratik
has
been
collaborating
in
the
these
area
for
a
while,
I
just
noticed
pratik
is
on
a
phd
candidate
at
ut.
Austin
pratik
help
me
with
the
type
of
load
generator
that.
G: So there are two types of load generators that we look at: open-loop load generators and closed-loop. To see how much we can push the servers, open-loop load generators are usually what we academics like to focus on, but most of these load generators are closed-loop load generators, which rely on the response.
G: They send out a request only when a response is received, per thread, and I believe that is the distinction that Lee is mentioning.
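The distinction can be sketched with a toy model (illustrative numbers only, not any particular tool's behavior): a closed-loop worker's request count collapses to match a slow server, while an open-loop generator keeps offering load on a fixed schedule, so queueing delay shows up in the measurements instead of being hidden.

```python
def closed_loop_requests(service_time_ms, duration_s, workers=1):
    """Closed loop: each worker sends its next request only after the
    previous response returns, so a slow server silently throttles the
    offered load (the root of coordinated omission)."""
    return (duration_s * 1000 // service_time_ms) * workers

def open_loop_requests(target_rps, duration_s):
    """Open loop: requests are issued on a fixed schedule regardless of
    responses, so the offered load stays at the target rate."""
    return target_rps * duration_s

# Against a fast server (10 ms responses), one closed-loop worker matches
# a 100 rps target over 10 s; against a slow server (500 ms responses),
# it quietly drops to 2 rps while the open-loop schedule is unchanged.
fast = closed_loop_requests(10, 10)    # 1000 requests
slow = closed_loop_requests(500, 10)   # 20 requests
goal = open_loop_requests(100, 10)     # 1000 requests either way
```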
A: It is, yeah. If I recall off the top of my head: you know, hey, there's a reason why there was wrk and now there's wrk2. I think the difference being that coordinated omission is part of how the different load generators perform, in terms of being open or closed, and then also in terms of how they assess, you know...
A
Do
their
analysis
from
when
they
start
measuring
to
when
they
don't,
and
so
anyway,
that's
being
done
today
in
a
single
from
a
single
load
perspective
and
part
of
what
kush
and
the
nighthawk
maintainers
that
he's
engaging
with
are
working
on
distributed,
analysis
which
I
think
will
unlock
and,
and
I
think
petit
does
as
well,
and
the
others
that
are
involved,
unlock
some
new
insights
and
we're.
A
You
know
now
in
a
world
where
we're
running
lots
of
microservices,
a
popular
micro
service
might
enjoy
a
lot
more
use
than
anticipated,
particularly
some
east-west
traffic,
that
it
wasn't,
you
know,
maybe
wasn't
designed
it
wasn't
initially
designed
for.
A: No, no. True to what I know is the simple answer, which is: I'm glad you asked. I would say the project that Kush and Pratik were just speaking to is 50 percent of the way there, if you will. So it's either an excellent time to present, to get influence from Mitch and others, or maybe not the best of times to present until it's all done. It depends. For our part...
A
These
are
these
projects
are
like
whoever
has
come
to
bear
and
come
to
influence
and
provide
insight
has
been.
I
hope
you
know
like
really
warmly
welcomed,
and
so
now
no
it's
good.
The
work
in
progress.
That's
great
one
actually
mitch,
just
as
you
just
putting
your
forcing
you
to
put
an
istio
hat
on
the
one
thing
that
would
be
insightful
both
toward
the
service
mesh
patterns
that
we
were
talking
about
on
the
start
of
the
call-
and
here
is
like
with
the
and
this
isn't
favoritism.
A
This
is
a
fact,
because
I've
spoken
with
every
single
service
mesh,
that's
out
there
and
I
can
prattle
more
often
you
can
anyway,
that
the
the
is
the
well
the
former
performance
and
scalability
working
group
had
their
crap
together,
so
to
speak
or
like
had,
you
know,
in
combination
with
some
of
the
folks
at
ibm
and
google
and
others
that
would
come
in
there
had
a
number
of
benchmark
common
benchmark
common
tests
and
things
that
they
would
look
for,
whether
it
was
x
number
of
envoys
or
this
many
namespaces,
or
this
a
lot
of
things
at
a
much
bigger
scale
than
I
think
that.
A
My
point
is:
there
are
there's
a
lot
of
knowledge
from
within
that
working
group
that,
particularly
just
like
here's,
here's,
the
type
of
test
that
should
be
run
and
part
of
that's
like,
like
I
said
some
of
the
examples,
as
I
said
or
part
of
that
is
based
on
workload,
it
might
be
the
same
exact
test,
but
it's
a
different
type
of
workload.
A: A lot of the people that we've engaged in this project have a very common question, which is: yeah, but so what are you using as your example workload? Pratik will bring up, like: hey, are you running an instance of GitLab's infrastructure, for example, or some social network, or some database-heavy thing? Anyway, to the notion that we're only halfway through: getting influence from others about what types of easily repeatable tests there should be would be really helpful.
H: Yeah, I think the number one thing that I would take away from the work that the telemetry group did regarding performance, which has now been kind of folded into the test and release working group, is that the details are very important.
H
So, looking at that YAML file: it's possible that these fields exist but they're just not populated, but it would be great to be able to annotate it with information about the details of the test. You know, this was being run with mTLS enabled, or mTLS disabled, or with authz policies, and it was run against this type of a client application. Being able to track that from one test to the next, then you get the ability to say: hey, I kept all of the details the same but only changed one variable.
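The kind of per-test annotation being asked for here could look something like the sketch below. This is purely illustrative: every field name is a hypothetical placeholder, not from any published benchmark spec.

```yaml
# Hypothetical annotations for one benchmark result entry.
# All field names here are illustrative placeholders.
metadata:
  mesh: istio
  meshVersion: "1.7.0"
  security:
    mtls: enabled            # enabled | disabled
    authzPolicies: true
  topology:
    namespaces: 10
    sidecars: 250
    ingressGateway: true
    egressGateway: false
  workload:
    type: http-echo          # e.g. database-heavy, social-network fan-out
    clientQPS: 1000
results:
  latencyP99Ms: 12.4
```

Keeping fields like these attached to every run is what makes it possible to say "everything was the same except mTLS" from one test to the next.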
A
Like, yes, absolutely. And the example that Cush had just shown, it was like a 15-liner or something. That's it.
A
That's showing what's defined in the spec today. What's a good example is a bit more of exactly what you're highlighting, which is exactly why we're investing in the tooling being built. Because, good god, I think performance engineers don't get paid enough, because yeah, there's any number of... there's a litany of, you know: did you have an egress gateway or not, or did you have an ingress, and then how big was it?
E
All right, I was just going to say real quick: I think it's great that you're getting people together to work on this. There was sort of a lack of interest in Istio in performance analysis.
E
I mean, we reached a certain point where most of the performance tuning was in Envoy itself, so we dissolved that working group because there was a lack of interest. It was like two people would show up, and they'd talk to each other and say, sign up for the working group. But you know, if there's 25 people involved, you might get more out of that. So I think that's great. Thank you.
H
You know, leading to that point: do we expect to see substantially different performance numbers from different Envoy-based service mesh implementations? That's a good... that's a good question!
A
Yeah, yeah. Well, I won't name names. I had that conversation a few times, my gosh, like a year and a half ago maybe. I would maybe put it like this: hey, within the control plane, does having Mixer in the control plane or not make a difference? In terms of... I guess it's a rhetorical question, I guess, I think.
B
Just to add: I think this could be something which might be more for the future, because Envoy is obviously currently opening up extension points inside, with wasm filters which are running in the hot path, and once folks have got control to, in effect, change the operability of Envoy...
B
You probably will see a greater variance in various different service meshes, depending on which filters they use or how they use them. And I think one of the common things around that at the moment that you might see, which is slightly different, is things like ext_authz, because that's a call-out, and then you've got things like rate limiting, which again is a call-out. But I think that the variation will probably grow as Envoy becomes more and more extensible outside of the core code base.
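As a rough illustration of the call-outs being discussed, here is a sketch of an Envoy HTTP filter chain with ext_authz and rate-limit filters ahead of the router. The cluster names ("authz-service", "ratelimit-service") and the domain are placeholders; the filter and type names follow Envoy's v3 API.

```yaml
# Sketch: two call-out filters ahead of the router. Each one adds an
# extra RPC on the request path, which is where latency variance
# between meshes can come from.
http_filters:
- name: envoy.filters.http.ext_authz
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.ext_authz.v3.ExtAuthz
    grpc_service:
      envoy_grpc:
        cluster_name: authz-service        # placeholder cluster
- name: envoy.filters.http.ratelimit
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.ratelimit.v3.RateLimit
    domain: edge                           # placeholder domain
    rate_limit_service:
      transport_api_version: V3
      grpc_service:
        envoy_grpc:
          cluster_name: ratelimit-service  # placeholder cluster
- name: envoy.filters.http.router
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
```

A wasm filter would slot into the same chain, which is why the mix of filters a given mesh ships with (and how it configures them) can drive different performance numbers on otherwise identical data planes.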
A
Thank you. And then, Mitch, actually, I'd be curious for your feedback on what I was alluding to around one control plane not necessarily being the equivalent of another. I'd mentioned, you know, the extent to which Mixer was doing a lot, and is still doing a lot, but in a different area. From your perspective... yeah, I had an early conversation with the product manager for App Mesh, and I think their perspective was the same, which was:
A
Well, you know, it's just Envoy data-planing, so what's the... well, the control plane, it, you know, depends on what you're doing. And so...
H
Yeah, no, I think highlighting that wasm will really be a game changer in terms of comparative performance makes a lot of sense. If all we're doing is serving simple xDS listeners and endpoints, I would not expect to see, I would hope to not see, a substantial difference between the two; those APIs are relatively tight in terms of their implementations. But yeah, wasm's a whole new frontier in terms of performance, so that makes sense.
A
Blake, I won't necessarily speak on behalf of Consul's roadmap... but unless you will.
I
Thanks for putting me on the spot, Lee. I think I'll just add that that is something that we are looking at, like the other service meshes that are out there. I think we see a big opportunity for wasm to allow users and operators to do things above and beyond what we as a vendor have built into the product. So there's a big potential there from an extensibility standpoint, but it's something that we're keeping our eye on, and obviously that ecosystem's early and maturing.
B
I'm personally looking forward to the day when all my application code is going into the service mesh as wasm, and I don't actually have any microservices whatsoever. I just have proxies and wasm modules. Watch, watch this: KubeCon 2024, for the horror story by company X, which says why we thought putting all our business logic into wasm was a really bad idea, and now we've suffered the major outage of our lives.
A
Yeah, they will. And this is important. This is, from my personal perspective, in part why we've been investing a ton of time into this space: because I think that there's a bunch of application infrastructure code. Like, we talk about service meshes taking care of infrastructure concerns, and I think there's a lot of parallels between serverless things and service mesh things, and there's a very similar value...
A
Proposition. Service meshes, I think, speak really well to absolving applications of some of those lower-level considerations, and going forward there's even some of what I would call application infrastructure. Nick mentioned ext_authz before; there's a lot of commonality in application infrastructure: users...
A
Tenants, price plans: a lot of things that you need around the actual business logic that you're trying to achieve. Some of those things, the service mesh, where it's an intelligent data plane filter, they're already looking at that header, they're already, you know... So as people go to explore that, and they go to prepare for their 2024 talk, they might be able to have a common vernacular to describe that, and they might have easy-to-use tooling to test that. And so...
B
I'm very excited about wasm. I think it's going to present an incredible opportunity. I think there's a lot of fear around: is it going to be the next ESB? I would argue that the ESB was actually probably not the world's worst pattern; it was more the implementation that was wrong about the ESB. But parking...
B
That aside, I think one of the really interesting opportunities is when we start looking at security. One of the core competencies of service mesh is the ability to do micro-segmentation, and the concept behind why we need to do that is that the firewall as a perimeter is not as successful a form of defense as we would all have hoped; there are ways around it.
H
And Nick, I think you referred to that as well. Right now, the results that we saw today were talking about latency on the traffic, which is probably the number one concern of most service mesh users: what sort of latency characteristics are they going to see? Are we going to also see execution costs, in terms of CPU and memory, for the data plane and control plane?
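Latency characteristics like the ones mentioned are usually reported as percentiles over per-request samples. A minimal sketch of that summarization follows; the sample data is made up for illustration and none of this comes from the benchmark tooling discussed on the call.

```python
# Minimal sketch: summarize per-request latencies (milliseconds) into
# the percentile characteristics discussed above. Pure stdlib.
import math

def percentile(samples, p):
    """Nearest-rank percentile of a list of numbers (p in [0, 100])."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

def summarize(samples):
    """Report the usual headline numbers for one benchmark run."""
    return {
        "p50": percentile(samples, 50),
        "p99": percentile(samples, 99),
        "mean": sum(samples) / len(samples),
    }

# Made-up samples: mostly fast requests plus one slow outlier, which is
# why p99 tells a very different story than p50.
latencies = [1.2, 1.3, 1.1, 1.4, 9.8, 1.2, 1.3, 1.5, 1.2, 1.3]
summary = summarize(latencies)
```

CPU and memory costs would be tracked the same way, as separate sample streams per proxy and per control plane process, alongside the latency numbers.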
A
It's a... Pratik, mister... yeah. The short answer, Mitch, is: absolutely. Pratik just presented on that at KubeCon EU; there's a few early results from some of his research up there. Because yeah, I think actually being able to articulate that in a granular way, so that people can make decisions on whether or not to take a sprint of the dev team, or a couple of sprints, to go...
A
I really appreciate all the questions today. This has been... it's been really nice. People have got to go. Please signal your interest in the Slack channel, or on the mailing list, or any which way you want to. We'll try to organize a bit, get some things going asynchronously, about providing a place to put in thoughts and comments and bring your influence. But I'm looking forward to this.
A
I hope that this is as vendor-neutral as we can get, or as toward the end user as we can get. It's actually in part why we're creating this: because there is an end-user service mesh working group talking about patterns, and they're having all the fun by themselves, and they, you know, don't want the vendors over there, and that's all right. But then the vendors aren't working on the patterns and the feedback that they need, and so, yeah, we need to work on those. Yeah.