Description
CNCF TAG Network - https://github.com/cncf/tag-network
CNCF Service Mesh WG - https://github.com/cncf/tag-network/tree/master/service-mesh-wg
Service Mesh Performance - https://smp-spec.io
A: All right, hey, welcome everybody. It's the CNCF TAG Network meeting — or, well, it's the first one of December, so Thursday, December 2nd. Nice to see a lot of you on. We were just getting the meeting minutes all prepped. I think we've got a number of topics — or, I know that we do. Now is the time, though, as you look over the topics, to go ahead and drop in any additions, and also to drop in your name. Good to have everybody on the call.

A: I'm hoping we'll have a couple of other folks join; if not, we'll have some incomplete action items. As we get going, just some housekeeping notes. One: this is a CNCF call, and so, you know, be nice to your neighbor and don't say things that your mother wouldn't approve of. We'll post it on YouTube even if you do. My name is Lee Calcote; I'm one of the co-chairs of the TAG Network. On, then, to TAG Network topics.
A: Oh, very good. And then, as a young tradition, but a tradition nonetheless would have it: Zachary, you might be our winner today. Anyone who hasn't been on the call before has the misfortune — has the fortune — of taking a moment just to say hi, introduce yourself, and share what's got you hanging out with us today.
D: Definitely, hi. Yeah, I'm Zach; I'm an SRE at Peloton, and right now one of the things we're looking into is adopting a service mesh internally. I know you all talk about that a lot here, so I'm just here to, you know, kind of stay on the edge of what you all are doing and see how it can help us in the long term.
A: Feel free to interject at any given moment, or to be as small of a fly on the wall as you so choose. And, by the way, just before I — what's the right word — is it Zachary, or is it Zack?

A: Nope? No? All right. Well, so the sequence in which we chew through agenda items is: we start with any TAG Network topics. Commonly these topics include a review — like a project review — so it's worth taking a moment briefly.
A: The last project that we reviewed was FabEdge, and there's been, I think, a very nominal amount of chatter ongoing about the project, so I figured I'd ask Ed or Ken if there was anything of note just in terms of next steps or comments from that project. They were up for review; they'd gotten some feedback.
B: Yeah, so my understanding is that they were up for review. There's a little bit of confusion around some items; there's some feedback and questions. My counsel was to update their slides to respond to the feedback and questions, because they actually had relatively good answers to the issues that were raised.
A: Nice, good. Yeah, I suppose — to the extent that this isn't popularly known — their recourse, then, is: either they can come and hang out here and get some more feedback, and/or they can resubmit for evaluation by the TOC.

A: My hunch is that if they go do that directly, they may find that they'll face a similar set of questions from the TOC, and so — I don't know. Yeah, I don't know.
B: Yeah, so the questions the TOC asked were good; it's just that there wasn't any clear answer at the time they were doing the evaluation. So I think the TOC might potentially be open, if they could get the feedback and information they're looking for — because they were questions like "why not be part of KubeEdge?", which is a completely valid question, and they had a really good answer for it. So.
A: Nice. All right, on to Service Mesh Working Group topics. Actually — a question for either Sunku or Navendu: if either of you wouldn't mind, would one of you ping Jing Hong, in case he's...

A: No? Yeah, okay, all right, fair enough. I know he's engaged on these efforts, and so, okay. Okay, thank you for that. So, KubeCon China is coming up and — actually, this is a topic for the TAG Network.
A: So there's a session, an intro and deep dive, covering the things that happen here, and there's a fairly decent recap of what has transpired. Primarily, the deep dive ends up being on projects and initiatives and their progress since the last KubeCon, and Ken had a number of things to say in this update. The deck is posted out there early; I'd encourage folks to go check it out. There's a couple of statistics in it that I think are interesting.
C: I think the best thing about the talk was that we really focused on just how much we've evolved as a TAG group over the last, you know, three years, kind of looking back at how much we've accomplished. It's easy to say this doesn't matter, but when you look at what we've accomplished over the last three years, it was really good to see that we've grown a lot, and we're continuing to grow, which is really good.
A: Okay, so for the other topics that we have on the Service Mesh Working Group: there are about three projects that we focus on a fair bit. They are Service Mesh Performance, Meshery, and Nighthawk. Meshery and Service Mesh Performance, as projects, are recent entrants into the CNCF, and so each of those two projects has project office hours. Both of them just had their office hours assigned yesterday, so there'll be emails going out on those lists to point people to those office hours. So that's great.
A: Do you interact with — excuse me — do you interact with Jing much?

F: No, actually, we're both on different teams, as such, so I hadn't had a chance to interact with them. Okay.
F: Yeah, I mean, I do see his conversations, and I see he's part of the service mesh enabling team, but we haven't corresponded. They work on actual enabling of service mesh code, while our team's focus is mostly on leveraging service mesh for 5G and edge computing. But yeah, we haven't had a chance to discuss with them yet.
A: I had suggested to him that, by the time he's done a few of those things, he might like to join us in the project office hours — just as, you know, another person to help discuss the activities of what's going on. If you would give that some thought and let me know how that strikes you?

F: I think that's definitely a good idea, so I could get to see the initiatives that the team is working on, and bring a new perspective from this side — especially for the office hours and KubeCon China. Nice, okay, yeah, good.
A: Good. One of the first topics up is really the same topic: it's about performance benchmarking, about trying to accomplish some of the larger goals that were set out for Service Mesh Performance as a project during its genesis. A lot of that is about informing the world of concerns around performance when you turn on various things
inside of a mesh — understanding the value that you glean, understanding the performance impacts of the various features and functions and the different ways you're configuring your mesh. And the tooling is in place to go run a litany of tests. There's automation now in place to run tests, some of that automation has begun, and there's a couple of folks on the call. Navendu, do you want to speak to this?
E: Yep, yep, sure. Maybe I can show what we're talking about as well, to give some idea.
A: As a precursor to this: one of the reasons that I'm enthralled that Jing is engaging is that I think he may be able to bring some time, and a bit of muscle, to actually running a number of these tests inside of dedicated — you know, inside of the CNCF labs.
A: What Navendu's about to show, I think, is some of these same tests running using GitHub-hosted runners, and we should discuss on this call how we can potentially use that same action, that same automation, but with different, self-hosted runners. If you think about it, that might mean we could use the same automation, the same actions, which have a number of — well, you know, I won't; I'll let Navendu show it, and then we'll talk about the CNCF labs.
E: Yep. So I think we have talked about this action a bit on previous calls, but what we have, essentially, is a GitHub workflow that automatically runs performance benchmarks on service meshes, and it's scheduled to run every —
E: So we test multiple-size meshes — we have Istio and Linkerd — and we use multiple load generators, Fortio and wrk2. We also have some other configurations, but, just for the sake of this demo: I ran this test a while back, and what we can do here is — we actually collect these results, which we run on the GitHub-hosted runners, and we have them in Meshery.
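The collection step described above boils down to parsing each load generator's result output. As a minimal sketch: Fortio writes its results as JSON, with latency percentiles (in seconds) nested under `DurationHistogram` → `Percentiles`; the sample numbers below are made up for illustration.

```python
import json

# A made-up Fortio-style result. Real Fortio JSON output nests the
# latency percentiles (in seconds) under DurationHistogram -> Percentiles.
sample_result = json.loads("""
{
  "Labels": "istio, fortio",
  "DurationHistogram": {
    "Count": 10000,
    "Avg": 0.0042,
    "Percentiles": [
      {"Percentile": 50, "Value": 0.0031},
      {"Percentile": 99, "Value": 0.0187}
    ]
  }
}
""")

def percentile_ms(result, pct):
    """Return the pct-th latency percentile in milliseconds, or None if absent."""
    for entry in result["DurationHistogram"]["Percentiles"]:
        if entry["Percentile"] == pct:
            return entry["Value"] * 1000.0
    return None

print(percentile_ms(sample_result, 99))  # p99 for the sample above, in ms
```

A workflow run would produce one such result file per (mesh, load generator) pair, each parsed the same way before being pushed to wherever the results are kept.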
E: We have been running this test for quite a while now, so we have collected a large number of test results. One area that I pointed out in the meeting minutes was to actually define the test configuration, and Sunku and Jing were already looking into how we can define those test configurations. So we have the test configuration defined in a YAML file. Basically, the YAML file contains the entire configuration of what each test should look like.
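The actual schema of that YAML file isn't spelled out on the call; purely as an illustrative sketch, with hypothetical field names, each test entry might carry the mesh, the load generator, and the load profile, and a loader could reject entries missing required keys:

```python
# Hypothetical test-configuration entries, mirroring the kinds of knobs
# mentioned on the call (mesh, load generator, QPS, duration). The field
# names are illustrative, not the actual SMP schema.
TEST_CONFIGS = [
    {"name": "istio-fortio-baseline", "mesh": "istio",
     "load_generator": "fortio", "qps": 500, "duration_s": 120},
    {"name": "linkerd-wrk2-baseline", "mesh": "linkerd",
     "load_generator": "wrk2", "qps": 500, "duration_s": 120},
]

REQUIRED_KEYS = {"name", "mesh", "load_generator", "qps", "duration_s"}

def validate(configs):
    """Reject any test entry that is missing a required key."""
    for cfg in configs:
        missing = REQUIRED_KEYS - cfg.keys()
        if missing:
            raise ValueError(f"{cfg.get('name', '?')}: missing {sorted(missing)}")
    return True

validate(TEST_CONFIGS)
```

Validating up front like this is what makes "help define these test configurations" a tractable ask: a contributed config either passes the schema check or fails loudly before any cluster time is spent.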
E: So one such ask here — one such area for people to get involved — is to actually help define these test configurations. I was just debugging some issues, but, basically, we have been running a lot of these tests.
E: We hope to add the other service meshes as well, and we are also in the process of designing a dashboard which can be made public, so that people can come in and see how all the different service meshes compare with each other, what the benchmark scores are for each of these meshes, with what kind of workloads, and all that. So, yep — any questions?
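At its core, the comparison dashboard described here is an aggregation over many runs. A minimal sketch, with made-up per-run numbers, of grouping results by mesh and averaging a latency metric:

```python
from collections import defaultdict
from statistics import mean

# Made-up per-run results; a public dashboard comparing meshes would
# aggregate many such runs per (mesh, workload) pair.
runs = [
    {"mesh": "istio",   "workload": "online-boutique", "p99_ms": 19.2},
    {"mesh": "istio",   "workload": "online-boutique", "p99_ms": 17.8},
    {"mesh": "linkerd", "workload": "online-boutique", "p99_ms": 12.4},
]

def mean_p99_by_mesh(results):
    """Group runs by mesh and average their p99 latencies."""
    grouped = defaultdict(list)
    for run in results:
        grouped[run["mesh"]].append(run["p99_ms"])
    return {mesh: mean(values) for mesh, values in grouped.items()}

summary = mean_p99_by_mesh(runs)
print(summary)
```

In practice the grouping key would also include the workload and the test configuration, since — as the rest of this discussion makes clear — a single number per mesh hides exactly the variables that matter.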
F: So, Navendu, thanks for doing this — this is a really good start to get this going. In terms of the test profiles: there are various ways we can look at it — scaling, or, you know, single node, running inside a virtual machine, running on bare metal, etc. — different ways of dividing the tests. So is there any specific priority or order in which we are considering these? Is there any plan for how we are approaching this?
E: Yeah, I guess the initial plan was to get this running on the GitHub-hosted runners and then move on to CNCF's machines. I'm not sure — I don't have the exact configuration of what the CNCF-hosted runners look like; maybe someone else on the call might be able to provide some insight into it.
A: Yeah, the CNCF labs used to be offered by Packet, and that has since moved over to — oh, why is the name escaping me? It starts with an E. Didn't they just get acquired? Equinix.

A: And so the particulars that I recall from Packet probably have no bearing or weight on what bare-metal machines are available through Equinix Metal.
B: It's actually almost exactly the same — a very smooth, forward evolution. Other than the fact that when you go to packet.net it redirects you to the Equinix Metal page, the APIs are consistent, and it's still using Tinkerbell in the background. In Network Service Mesh, we were using them through the transition of their acquisition and there wasn't a single blip; it just worked. So whatever you think you know about Packet, modulo naming, it's almost certainly exactly the same.
B: Just by way of clarification: Tinkerbell is literally the software they're using to run Equinix Metal, and have been using since it was Packet. Now, the fundamental question you're asking, though — how easy is it to get a cluster up and going — it's actually very easy. Currently we're using their —
B: But the other thing to keep in mind is that they're in the middle of the transition to the Cluster API. Last time I checked, they are still using the pre-1.0 APIs in their implementation of Cluster API, so you would have to use an older version of clusterctl. But I would strongly recommend going and checking, because you may find that clusterctl is your way to happiness there — they may have actually caught up with the API shift.
B: Not generally. The reason it's not generally contentious is that it's not "labs" in the traditional sense. What it is, effectively, is the provision of credits by Packet — now Equinix Metal — to the CNCF, for use by CNCF projects. So the net-net is: we're not operating from some pool of reserved instances; generally speaking, we're operating from "hey, I need a metal box of this flavor."
B: Okay, yeah. That said, of course, be hospitable about cleaning up after yourself — make sure that you're actually correctly reaping the metal boxes that you grab. We want to be appreciative of their kindness and not abuse it, so do make sure to be scrupulous about cleaning up after yourself.
A: Excuse me — self-hosted runners from GitHub workflows, using Equinix Metal as self-hosted runners: I imagine there wouldn't be an issue, but...
B: I wouldn't expect there to be an issue. I haven't personally done that, or been close to people doing that. But the other thing I would point out is that Equinix Metal has its own Slack, and they're super friendly and helpful. So if you pop up and say "hey, is anybody [mumble mumble]," you'll probably get somebody who will respond.
A: Very good. And then, Zach, I was just posing a question here. It's in part about Sunku's exact question, which he was sort of asking like: "hey, you know, this is great, there's a great start, there's tooling — we're ready to go off and do a bunch of testing in a big way."
A: Part of that testing is either just putting some blinders on and saying, "hey, we'll run a sample application that looks like this, and we'll do it under this much load, or for this long, or we'll do it on a super tiny cluster or a really large cluster." And I think all of those tests are valid to the extent that someone, somewhere out there, runs something kind of similar to them. And so — yeah, understanding that you all are pre-service-mesh, just having some sense of, well...
A: I don't want to pry, but maybe the right thing to do is to propose a few different configurations and have you emphasize the ones that might be relevant — like, if you were to see the results for those, what might be of interest to you? We're trying to make sure that we're curating the right data, that we're measuring in the way that would be interesting to people.
D: I think — and maybe you already have this covered, and if you do, let me know — are most of these tests baseline tests, where you just have a single application in the mesh that you're hitting with load, or do you have —
D: — you know, let's say, a set of five applications that might communicate with each other, so that you could actually configure a, quote, more real-world microservice example? So we could see, you know, if there's three or four hops before the request comes back out to a user, whether that has any impact on performance. That would be super helpful for us, I'm sure. Having a baseline — kind of like what you shared, as much as you can do — is really helpful, just to establish that baseline.
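A crude first-order way to reason about the multi-hop question raised here: if each hop through a sidecar-fronted service adds a roughly fixed cost, end-to-end latency grows with the hop count. A toy, purely illustrative model — the numbers are made up, and real proxies are not simply additive:

```python
def chained_latency_ms(per_service_ms, hops, per_hop_proxy_ms):
    """Toy additive model: each hop pays its service time plus, on a
    mesh, two proxy traversals (client sidecar + server sidecar)."""
    return hops * (per_service_ms + 2 * per_hop_proxy_ms)

# A 4-hop request chain, 5 ms of work per service,
# 0.5 ms assumed per proxy traversal.
off_mesh = chained_latency_ms(5.0, 4, 0.0)   # 20.0 ms
on_mesh = chained_latency_ms(5.0, 4, 0.5)    # 24.0 ms
print(off_mesh, on_mesh)
```

Even this toy model makes the point behind the question: per-hop overhead that looks negligible in a single-service baseline test compounds across a realistic request chain, which is why a multi-service sample application matters.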
A: Nice. Yeah — yes is the answer to your question about the sample applications being used: they're a collection of microservices, so that when you poke the first one, it has upstream services that it needs to call. And you mentioned something in there, though — the word "baseline" — that had me realizing something. Sunku, you know, and Navendu and Anirban and everyone else: it's something we don't necessarily highlight or discuss a lot.
A: The first question is just: what's the difference between being on the mesh and off the mesh, respectively? And so that might be something to infuse into each type of test — or maybe I should say, for each workload that's used as an example workload, the testing should first include a deployment of that workload off the mesh,
A
And
then
the
testing
on
the
mesh
and
and
yeah,
because
it's
just
one
of
those
it
just
provides
some
immediate
insight
into
the
the
thing
about
the
differences
between
between
being
on
the
mesh
and
off
the
mesh
is
like
you'll,
see
a
dramatic.
You
can
see
very
nominal
to
very
nominal
changes
in
the
performance
and
or
you
can
see
dramatic
changes
depending
upon
like
how
how
much
you're
asking
the
mesh
to
do,
and
so
when
we
present
that
to
people
yeah
part
of
like
curating
these
dashboards
to
to
help.
A
Let
people
know
that
let
people
know
what
they're
looking
at
that
before
they
run
before
they
just
kind
of
took
a
quick
glance
and
then
run
off
and
assume
that
there's
like
there's
a
there's,
a
whole
bunch
of
overhead
or
that
there's
no
overhead
depending
upon
which
chart
they're
looking
at
so
zach.
Let
me
see
if
I
can
yeah
this
is
like
do
you
know
when
you're
inviting
you
know,
did
you
guys
today.
A
Did
you
perform?
Do
you
do
performance
testing
today
like
if
you
do,
is
it
just
you
know
quick,
quick
little
tests,
or
is
it
like
you're
gonna
do
a
new
roll
out
of
your.
You
know
the
next,
the
next
version
of
your
your
services
and
you
let
it
bake
for
like
a
week
or
so
something
like.
D
Yeah,
I
think
it
it
depends
on
the
scenario
like
most
teams
are
doing
some
amount
of
load
testing,
and
then
I
I
think
they
can
mostly
configure
that
like
for
their
use
case
like
if
they
knew
there
was
a.
D
Like
a
big
change
that,
like
might
might
impact
something
that
might
run
longer
tests
or
things
like
that,
I
mean
one
thing:
we're
gonna
have
to
do
as
a
platform
team
for
our
own.
Due
diligence
is
just
as
we
present
this.
This
team
say
like
hey.
You
come
on
the
mesh,
here's
kind
of
like
baseline,
what
we
know
it's
going
to
introduce
latency
wise.
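Quoting a "here's what the mesh will cost you" baseline, as described above, reduces to a simple relative-overhead computation between the off-mesh and on-mesh runs of the same workload. A sketch, with made-up latency numbers:

```python
def overhead_pct(off_mesh_ms, on_mesh_ms):
    """Relative latency overhead of running the workload on the mesh,
    as a percentage of the off-mesh baseline."""
    if off_mesh_ms <= 0:
        raise ValueError("baseline latency must be positive")
    return 100.0 * (on_mesh_ms - off_mesh_ms) / off_mesh_ms

# Made-up p99 numbers for one workload: off the mesh, then on it.
print(round(overhead_pct(10.0, 14.5), 1))  # 45.0
```

The same function applied per feature (mTLS on/off, telemetry on/off, and so on) is what turns raw benchmark runs into the "what you can expect" numbers discussed later for the dashboards.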
F
Yeah
one
thing
we
have
done
along
these
lines
is
maybe
a
good
point
that
we
need
to
do
before
service
mesh,
after
particular
service
mesh,
but
within
service
measuring
this
time,
for
example,
right
so
some
sample
web
server
like
nginx
web
server,
understand
its
impact
between
the
load
generator
and
the
one
particular
part
which
is
running
the
web
server
now,
but
towards
really
scaling
these
applications
and
testing
the
distributed
setup
right.
F
So
now
you
can
use
something
like
that:
star
bench
or
google,
boutique,
microservices,
etc,
and
then,
which
is
where
the
the
east
west
and
knocks
out
all
of
these
communication
performance
come
into
play
right.
So
that's
another
way
of
looking
at
it.
So
essentially,
I
think,
from
a
test
case
perspective,
we
could
look
at
a
similar
set
of
applications
without
service
mesh
and
do
the
same
with
service
mesh.
If
you
could
narrow
down
on
one
particular
application
suite
and
and
try
and
leverage
that
for
multiple
set
of
tests.
F
Yeah,
there's
a
lot
of
tests
being
done
in
intel,
for
example,
using
death
top
bench
and
identifying
bottlenecks
and
understanding.
What's
the
right
way
to
go
about
it
to
scale
into
particularly
with
service
mesh
right,
some
not
sure
if
you
can
share
the
results
that
I
know
of,
but
yeah
that's
a
pretty
popular
application
under
consideration.
F
So
so
in
terms
of
what
would
help
the
community
right,
so
we
need
to
find
a
workload.
That's
representative
of
you
know,
so
what
the
production
use
case
might
be,
or
a
realistic
use
case
might
be,
for
example,
have
a
database
have
a
web
server,
have
an
application
that
responds
something
along
those
lines
right,
so
maybe
you
can
come
up
with
a
minimal
set
of
components
that
are
necessary
and
say
hey.
These
are
the
set
of
services
we
need
that
are
deployed
in
a
service
mesh
and
then
probably
benchmark
that.
A
So
we
have
those
so
I'll
ask
again:
yeah
very
good
makes
sense
yeah,
but
we
have
those.
Okay
and
again,
I
guess
like
like
making
the
rubber
me.
You
know
like
actual
work,
yeah
yeah,
which
is
so
the
the
tests
are
now
schedulable.
I
mean
I
guess
so.
We've
got
four
or
five
different
sample
apps.
This
is
one
of
them.
This
online
boutique
yeah.
I
don't
know,
I
guess
I
guess
it's
kind
of
like
we
were
talking
about
last,
like
we're
just
gonna
start.
A
Actually
I
mean
I'll
go
as
an
action
item
I'll
go
message
with
the
I'll
go,
send
up
some
some
emails,
I'm
more
or
less
asking
some
of
the
same
questions
that
I
was
just
prodding
zach
on
and
see
if
people
have
opinions
or
as
we
describe
the
sample
apps
that
we
might
use
we'll
see
if
they
yeah,
if
those
are
like
of
the
desktop
bench
that
app
that
you
were
just
referring
to
is
that
do
you
know?
Is
that
close
in
nature
to
any
of
these
sample
apps.
F
Yeah,
I
think
at
that
top
bench
possibly
would
be
similar
to
the
google
boutique
the
set
of
microservices,
I
think
desktop
branch
I
believe,
was
originally
created
by
carnegie
mellon
for
research
purposes
and
it's
gotten
some
popularity,
oh
yeah,.
F: Got you. Yeah — the reality of using these apps, though, is that since they're not necessarily designed for a high-performance, scalable type of infrastructure, at some point you can find bottlenecks with these types of deployments that are independent of the underlying hardware, the underlying infrastructure, or the underlying mesh. Like — if we use only one instance of a sample web server, it can only serve us so much, right? So, in that sense, there are bottlenecks, per se, going with the default versions.
A: Ed, question for you: as we go to run these tests, is Network Service Mesh one of the meshes that we'd like to put under test?
B: Well, I mean, it's not about fair or not fair. It's really about Network Service Mesh being complementary to application service meshes. Network Service Mesh doesn't even try to do the HTTP stuff, because there's lots of people in that space doing it really well. But what we do, that nobody else is trying to do, is allow you to carry other kinds of payloads — like L3 payloads — in a mesh-style approach, across multiple clouds, multiple locations, etc.
B
So
you
can
have
pods
running
in
a
bunch
of
different
clusters
in
a
bunch
of
different
places
that
can
be
receiving
a
common
network
service
that
you
could
run
any
number
of
things,
including
a
service
mesh.
Over
so
I
mean,
I
think,
we're
we're
getting
to
the
point
shortly,
where
we're
hoping
to
have
the
use
case
of
running
a
single
service
mesh
over
a
vl3
which
can
span
multiple
clouds
you
could
just
have.
B
Instead
of
having
a
bunch
of
gateways,
you
would
have
just
a
single
instance
of
your
service
mesh
and
you
could
have
pods
running
wherever
when
we
get
to
that
point.
I
think
that
becomes
a
very
interesting
case
for
measuring
to
test,
because,
obviously
the
question
is
okay,
so
like
what?
What
is
the
behavior
like
when
you
smear
a
service
mesh
across
gk
aks-
and
you
know,
eks
and
savon
prem
stuff
as
a
single
service
mesh,
not
as
a
thing
where
you've
got
a
bunch
of
crazy
gateways
going
on
in
static
routes.
A
Yeah
yeah,
that's
a
good
call
out
yeah.
I
agree.
How
does
the
and
you
use
the
right
word
like
behavior?
How
does
the
behavior
change.
B
Well,
I
mean
you,
you,
you've
always
got
to
watch
out
for
implicit
assumptions
and
network
service
may
solve
several
of
them,
but
it'll
be
interesting
to
see
what
others
there
are.
B: Not currently. Now, there are two things to be aware of with that. One of them is that, obviously, when you start playing serious performance games, you end up having to tweak things a bit. This is not so much of an issue if you're talking about L7 stuff, because, compared to what you have to do to process packets, L7 is so slow it just doesn't matter — and that's fine. But there are things you would want to consider.
B
But
if,
for
example,
you
had
somebody
who
was
like
who
wanted
to
do
say,
for
example,
nfv
use
cases
which
we've
got
folks
doing,
those
folks
are
going
to
want
to
configure
some
stuff
now
from
a
performance
point
of
view,
we
do
have
sort
of
a
lot
of
the
the
data
planes
that
are
in
use
tend
to
be
either
vpp.
You
know
ppp
data
planes
or
things
that
are
taking
advantage
of
smartnet
functionality
or
sriov,
and
so
we
know
the
underlying
things
that
process
packets
are
crazy
fast.
B
You
know
so.
For
example,
vpp
has
been
clocked
at
a
terabit
per
second
of
ipsec
on
a
commodity
server,
with
no
more
accelerating
hardware
than
sort
of
traditional
rss
cubes.
So
we're
feeling
fairly
good
about
the
fact
that
the
actual
bottleneck
you're
going
to
see
is
not
going
to
be
network
service
mesh.
It's
going
to
be
the
kernel's
tcp
stack,
which
is
again-
and
this
is
all
about
where
you
sit
in
the
world.
If
you're,
actually
a
hardcore
network
person,
the
kernel's
gcp
stack,
is
really.
B: So feel free to ping me on Slack if you've got specifics you'd like to talk about, because I could talk about this stuff far more than is appropriate for this call.
F
Yeah,
no,
no
definitely
I
mean
some
of
these
things.
I've
been
working
on
so
yeah
definitely
resonate
with
me.
I
guess
the
the
idea
or
question
is
more
like,
although
we
have
underlying
you,
know
accelerators
software,
basic
user,
plane
stacks
or
even
kernel
stacks
right.
So
how
does
nsm
kind
of
impact
right,
so
you
say
minimal
impact?
Is
it
impact
latency?
F
It's
the
things
along
those
lines
to
see.
Is
it
just
a
configuration
time
we
don't
have
to
worry
about
at
the
actual
run
time
in
terms
of
data
plane
performance
right.
So
if
there's
any
general
guidance
that
that's
available,
that
that's
a
good
good
one
or
if
not
even
I
mean
maybe
this
is
a
good
forum
that
you
could
consider
looking
into
some
of
these
things
running
an
automated
fashion.
B
Yeah
I
mean
I
would
I
would
I
would
love
to
get
a
little
more
input
from
you
on
what
you
would
be
looking
for
there
and,
as
I
said,
getting
that
up
and
running
an
automated
fashion
is
definitely
on
our
list.
You
know
one
of
the
things
that
I'm
trying
currently
trying
to
shake
out
is
again
when
you're
dealing
with
the
l3
stuff.
It's
so
many
orders
of
magnitude
faster
than
the
l7
stuff
that
you
need
things
like
t-rex
to
drive
traffic
in
a
meaningful
way.
B
That's
actually
interesting,
because
ipperf
is
just
frankly
not
up
at
that
level,
but
but
yeah,
so
that
is
actually
on
our
forward.
Going
list
is
getting
some
of
that
going
and
part
of
it
also.
Is
we
adopt
a
general
strategy
of
putting
everything
in
our
ci
chain?
B
So
that's
something
we
would
want
to
bring
up
on
ci
on
packet
in
order
to
sort
of
get
that
up
and
running
and
looked
at
the
other
thing
that
we're
in
the
process
of
doing
that
makes
that
interesting
is
if
you're
talking
about
performance
of
anything.
Fundamentally,
the
bottleneck
is
always
going
to
be
the
nick.
B
That
leaves
the
box
the
physical
nick,
leaving
the
box
and,
in
our
run
anywhere
mode
we
opportunistically
graphed
onto
that
with
af
packet,
which,
if
you're
familiar
with
that
sort
of
stuff,
it's
reasonable,
but
not
fantastic
performance-wise,
but
it
will
always
work.
We're
looking
at
being
able
to.
F
Yeah
yeah,
that
makes
sense,
and
definitely
our
team
is
looking
into
an
assignment
for
a
few
of
these
cases.
So
get
back
to
you
on
the
use
cases.
F: Thank you. Yeah — along those lines, Lee, I think, to your point: measuring and testing NSM is very different, like I mentioned, compared to, you know, Fortio-based or Nighthawk-based service mesh testing. You need a different set of traffic simulators, a different way of looking at the entire testing, compared to layer seven. Yeah.
B: And it's one that I would actually expect to go relatively well, architecturally, because the traditional service mesh model is, you know, you push config out to a thing that's actually your ingress — like an Envoy sidecar that's running really, really close to the workload. So — other than the fact that if you're doing service mesh across the WAN, you've got WAN-style latencies — I would expect that to go fairly well.
A
So
we
got
about
15
minutes
left.
One
of
the
other
topics
was,
you
know,
after
as
this
data
is
being
generated
and
tracked,
the
need
to
expose
that
on
a
published
dashboard,
public
dashboard.
F: Awesome, yeah. I can ask around — for example, that team; I think I'm not pronouncing his name properly — but yeah, I can ask his team to see if they are keen on something of this sort. Let's see what they say.
A
Good
well,
there
are
a
couple
of
other
things
that
I
think
we
might
leave
for.
Another
meeting
zach
had
asked
the
question.
I
think
the
venue
had
spoken
to
like
what
given
kind
of
a
prelude
to
what
it
is
that
adaptive
load
control,
some
of
the
things
that
it's
intended
to
be
able
to
answer
and
answer
on
an
ongoing
basis.
So
the
concept
of
like
continuous
automation
or
I'm
sorry,
continuous
optimization,
is
in
scope,
kind
of
in
focus
of
this.
D
Does
does
that
exist
somewhere
on
like
github,
that
can
be
checked
out,
or
is
that
kind
of
just
in
the
design
phases
right
now.
A
The
design
is
there
and
sent
out
for
can
review
implementation
isn't
too
far
away,
actually
that's
kind
of
what
we,
I
think,
that's
what
the
topic
was
kind
of
intended
to
be
I'll.
I
mean,
if
for
nothing,
I'll,
send
you
a
couple
of
diagrams
that
explain
actually
zach
again
like
it
would
be
really
good
for
you
to
get
your
your
and
the
rest
of
your
teams.
A
What
have
you
just
perspective
on
whether
or
not
that's
actually
helpful
to
you
and
if
some
of
the
questions
that
we
think
that
this
adaptive
load
control
capability
will
answer
if
those
are
actually
questions
that
you
guys
have
that
so
yeah?
So
this
is,
you
might
not
think
it
an
ideal
response
that
it
isn't
an
off
the
shelf
capability,
but
actually
maybe
it
is
ideal,
because
the
input
that
you
would
give
would
change
it,
where
there's
a
collection
of
open
source
contributors
that
are
about
to
go
to
go,
write
this
so.
A
There's
a
brief
note
that
I
wanted
to
make
with
prashi
so
prashi
and
I
were
catching
up
earlier
today.
I'm
pleased
to
see
prashi.
So
thanks
for
coming
on
the
call,
there's
prashi
I'll,
send
you
a
link
to
there's
an
open
issue
that
has
to
do
with
the
design
of
the
performance
of
the
the
performance
benchmark,
dashboards
and
exposing
a
bunch
of
the
performance
data.
A
We're
looking
to
do
that.
This
is
just
kind
of
an
example
of
like
a
very
simple
dashboard,
we're
looking
to
do
that
type
of
a
dashboard.
Basically,
what
we
were
just
talking
about
on
this
site
and
helping
tell
people
about
what
they
can
expect
and
so
so
prashi
little
did
you
know
that
by
joining
the
call
you
might,
you
might
be
asked
to
do
some
things
here
so.
A
Okay,
very
good.
Anybody
have
anything
else
for
today.
A: It could be really good to do these two items — I think they're probably juicy and come with some demos, so they could be fairly intriguing. One of them is about the adaptive load control thing we just talked about; the second one is about the beginnings of distributed performance tests — load generation from multiple locations: not just against multiple endpoints at the same time, but from multiple locations at the same time, or sourced from multiple locations.