From YouTube: Kiss.conf / Session02 - Getting started with Meshery
Description
Speaker: Lee Calcote (https://www.linkedin.com/in/leecalcote)
Slides: https://github.com/goupaz/kiss.conf/blob/master/slides/2020/Day01_Session02.pdf
A: And yeah, Lee is someone I admired, and he was my Google Summer of Code 2019 mentor. Amazing guy, with great community-building experience. Mostly, you can see it from the meeting minutes; everything is coming from the community. One example I can tell regarding Meshery: regarding the intern, Lee asked me to talk to Adnan. We got on a call and I talked with him. He basically had no background; so much foundational knowledge was missing, and I was amazed at Lee bringing this guy up to speed. It was going to be too much effort, but Lee didn't give up. He really believed in this guy, and two months later he got an internship at Red Hat, which, you know, doesn't just hire random interns. What I was amazed at was how, in that short amount of time... He was based in India, he and two more friends, and Lee brought these guys up without...
B: Yeah, I just, I guess I get a chuckle, because a couple of those... all three of those students actually landed internships at Red Hat in various groups, and it was in large part due to folks like Sacco engaging with them, exercising some patience, showing them what he knows. Hopefully, as part of that, he too was learning, improving his mentorship skills.
B: There's a selfish component, like I said: it's fantastic to see others grow, and as part of their growth I really do consider that that's my growth as well. But that's a core component of this particular community, the Layer5 community. It's a community that, from a technology perspective, is focused on service meshes; well, we'll talk about service meshes. But while it's focused on that particular technology, it does envelop many other cloud native technologies, and, yeah, core to the culture is, well...
B: It really resonates, I think, with the mission statement of Goup. I like how the first part of that statement was to build an egoless environment. I think there's more to what that statement is, but I sort of stopped there at the second word, which is something I really haven't seen described in many other open-source communities. I think they describe themselves as being inclusive, or supporting diversity, or not being prejudiced against age, sex, and so on.
B: You know, all of these other things; actually, I don't know how often they speak to something like Goup has around ego, which is actually, I think, how we sort of started the intro to this talk, with me saying I might learn something today. I've been doing nerd stuff, you know, for twenty-something years. I've been letting my little inner geek just sort of lead the way, stuffing tech into his mouth as fast as I can. It turns out...
B: It turns out that these fools are paying me to do this stuff, so I've been chasing that ever since. And really, any accolades that I've had in my past are due to mentors that I've had, are due to some of my own grit, sure, but some of it's been luck, and some of it's just been, you know, getting back up after you fall off the horse.
B: ...less capable but with a better attitude: someone who uplifts others, helps, believes in the vision or, if they don't, helps influence that vision; rather than necessarily having a super bright engineer on the team, you know, spoiling it for the rest of us. And so, yeah, all right. So, anyway, about Layer5 and service meshes...
B: I was gonna say that, for those that are familiar with service meshes and those that aren't, we'll talk about the tech a little bit and then talk about the projects that the community has got going on; I've got about three projects. So, we're all familiar with Docker, which set the world on fire, you know, about six, seven years ago now. I think it was announced seven years ago at PyCon, and then about five and a half years ago it went GA, went 1.0, and in some respects, I...
B: I think if you look at that, I don't know if you'd interpret it either as, wow, that was a long time ago, or, wow, hey, that was a short time ago, considering how pervasive the technology is now. And then, you know, people started using containers, and using lots of them, so we entered into the age of container orchestration, and we had those wars and those battles.
B: The leading container orchestrator was announced five and a half years ago, and about four and a half years ago it hit 1.0 and GA'd. Now we enter into this next phase of expecting more from your cloud native infrastructure: this next phase of not just orchestrating clusters and nodes, containers and their scheduling, but beginning to uplevel the smarts of the infrastructure toward, well...
B: Maybe that's the wrong way of phrasing it, but upleveling the focus of the infrastructure more toward layer 7, or more toward the application, which I would consider to be sort of the king, if you will. It's kind of why we run this infrastructure: for those apps. And so service meshes, as a technology, it's a little bit tongue-in-cheek, but they more or less reside at layer five, the session layer.
B
In
the
OSI
model,
if
you
will
the
linker
dv1
was
announced
four
years
ago,
it
g8
I,
guess
that's
confusing
when
I
say
v1
being
announced
four
years
ago,
the
first
architecture
of
linker
D
was
announced
that
time
ago
it
1.0
three
years
ago.
About
two
years
ago,
though,
there
was
a
linker
dv2,
it's
a
complete
new
architecture.
New
code
base-
you
know,
but
point
is
the
time
frame
is
kind
of
intriguing
and
it's
very
interesting
to
watch
the
scale
of
adoption
of
these
technologies.
B: It's my belief, and I hope the shared belief amongst those in the Layer5 community, that with service meshes it's only a matter of time before they are ubiquitously adopted; that really, anywhere you would see a Kubernetes cluster, you would just expect to see a service mesh. You know, there are reasons not to have one, but just as you walk into an environment today and take a look at software that's being sustained, or a new project that's being created...
B
You
kind
of
expect
that
you
know
docker
is
in
there
somewhere,
containers
and
I.
Think
that
that's
the
same,
you
would
have
that
same
expectation
of
us
or
the
presence
of
a
service
mission
in
the
future.
So,
as
the
community
layer
5
goes
to
focus
on
service
meshes,
there
are
three
four
projects
that
are
being
stewarded
that
are
being
created.
B
One
of
those
is,
and
so
we'll
talk
about.
Each
of
these
three
measures
is
the
sort
of
largest
of
these
three
that
the
first
one
is
well,
it's
a
community,
curated
landscape
of
service
meshes
turns
out,
even
though
I
suspect,
most
of
us
on
this
all
can
really
only
prattle
off.
You
know
three
or
four
at
most
surface
meshes
turns
out,
there's
more
like
20
of
them
out
there
there's
a
lot
of
reuse,
some
of
them
look
pretty
similar.
B
Some
of
we
look
completely
different,
and
so
there's
a
community
curated
landscape
of
a
listing
of
those
service
missions
and
also
some
related
technology.
Some
of
that
related
technology
are
proxies
and
client-side
libraries.
We
talked
about
those,
but
you
can
see
them
on
the
landscape.
There's
some
reasons
why
it's
a
bit
messy
out
there
there's
some
reasons
why
there
are
multiple
of
these
things.
B
These
are
some
of
them.
I
have
a
tendency
to
be
long-winded,
and
so
I'm
kind
of,
let
you
guys
read
and
read
into
that,
but
just
suffice
to
say
at
least
for
today
it
is
a
multi
mesh
world,
it's
possible
that
that
might
change
in
the
future
and
just
speaking
in
generalities
about
products
or
projects
and
how
they
are
adopted
in
general.
B
It's
typically
the
case
that
there
is
one
that
enjoys
the
majority
of
adoption,
whether
it's,
whether
it's
technology
or
just
products
in
general,
and
you
know
kind
of
a
second
and
third
that
enjoy
a
fair
bid
and
then
sort
of
all
the
rest
that
kind
of
have
a
sliver
of
it.
It's
my
hope
that,
for
my
part,
it's
my
hope
that
we
don't
end
up
in
a
world
where
one
mesh
has
one,
and
it's
really
only
that
mesh
I.
B: Some of us might be embarrassed to raise our hand on that, because, following up on that, there might be some shame, or people trying to shame. And we think it's a shame that we don't say, hey, it's actually a really smart decision to run Nomad if you're running in a Windows environment and doing batch jobs; like, you know, that's awesome for that. Why would you try to fit something else in there just because it's very popular?
B
On
the
subject
of
the
world
being
of
multiple
meshes
today,
because
of
that
there
are
some
abstract,
some
sort
of
special
abstractions
that
have
come
to
bear.
It
was
about
four
weeks
ago
now
that
I
had
approved
SMI
service
mesh
interface
as
being
allowed
or
donated
to
the
scenes.
Yet,
there's
a
there's,
a
number
of
hats
that
I
wear
in
the
community
and
one
of
those
is
as
a
chair
for
the
scenes.
Yes
from
cig
Network
I,
also
wear
a
maintainer
hat
on
an
SMI
and
SMI.
B
If
you're
not
familiar
just
very
briefly,
is
it's
a
specification.
It's
a
specification
and
interface
for
describing
the
lowest
common
denominator,
functionality
across
many
meshes,
and
as
it
does
this,
it's
doing
it
only
in
context
of
kubernetes.
So
there's
another
emerging
specification
is
about
a
year
old.
Now
that
focuses
on
Federation
of
either
homogeneous
or
heterogeneous
service
missions
and
the
exchange
of
the
catalog
of
services
between
them.
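As a concrete illustration of the kind of lowest-common-denominator API that SMI defines, here is a sketch of an SMI TrafficSplit resource; the service names are hypothetical, and field names follow the SMI spec of roughly this era:

```yaml
# Hypothetical example: shift 10% of traffic for the "reviews" service to a
# canary backend, expressed mesh-agnostically via the SMI TrafficSplit API.
apiVersion: split.smi-spec.io/v1alpha2
kind: TrafficSplit
metadata:
  name: reviews-canary
spec:
  service: reviews        # root service that clients address
  backends:
  - service: reviews-v1   # stable backend
    weight: 90
  - service: reviews-v2   # canary backend
    weight: 10
```

Any SMI-conformant mesh running on Kubernetes can honor a split like this, which is the point of a common interface.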
B
You
know
kind
of
two
questions
consistently
from
people
about,
but
you
know
about
which
service
mesh
to
run,
how
to
get
started
with
them,
and
then
the
other
one
about
trying
to
understand
the
overhead
that
occurs
as
you're
running
one
of
these
in
your
infrastructure
and
so
I.
Guess
you
briefly
for
those
that
aren't
familiar
with
a
service
message.
Their
general
architecture
is
such
that
they
generally
are
composed
of
a
data
plane
and
a
control
plane.
B
Most
of
us
who
I
think
for
a
lot
of
you.
These
might
make
some
sense.
I.
Think
it's
an
analogy
here
that
might
resonate
is
that
you've,
most
of
you
are
familiar
with
kubernetes
control,
plane
and
some
of
its.
The
kubernetes
master,
if
you
will
the
data
plane
kind
of
to
make
this
analogy
the
data
plane
there
is
the
nodes
inside
the
cluster
for
the
service
mesh.
The
data
plane
is
comprised
of
a
number
of
network
proxies
and
it's
the
management
of
those
proxies.
That
is
the
responsibility
of
the
control
plane.
B
If
you
are
familiar
with
or
have
a
background
in
networking
I
think
these
terms
can
resonate
with
you
that
there's
also
the
concept
of
a
management
claim,
there's
outside
of
a
control
plane,
focusing
on
a
specific
set
of
proxies
in
that
control,
plane
being
opinionated
about
you
know,
basically,
being
the
brains
behind
console
or
SEO,
or
link
Rd
or
the
brains
behind
a
given
specific
mesh.
You
might
want
to
integrate
across
meshes.
B
You
might
want
to
integrate
with
your
back-end
systems
that
you're
already
using
for
monitoring
or
we
might
want
to
federate
the
service
catalogs
between
them.
You
might
want
to
perform
chaos,
engineering
or
there's
all
kinds
of
things
to
do
on
top
of
a
mesh
that
a
surface
mesh
enables,
but
that
they
can
the
the
native
control
planes
themselves,
don't
necessarily
provide
it's
a.
B
Hence,
we've
gone
out
to
create,
and
by
we
I
mean
just
sort
of
I'm
looking
around
the
room
and
for
gun
and
and
Sacco
and
Nobby
are
all
names
that
come
to
mind
of
folks
that
have
been
over
just
either
been
over
helping
with
this
initiative,
whether
just
you
and
and
really
like
this
is
to
Stefan.
As
an
aside
for
a
second
I
would
say,
it's
sort
of
another
thing
that
I
think
communities
sometimes
get
wrong
is,
is
how
much
code
contributions
are
valued
and
don't
get
me
wrong,
like
open
source
code
contributions.
B
Those
are
awesome,
but
equally
is
awesome.
Is
the
person
using
the
thing
and
providing
feedback
or
the
person
documenting
or
the
person
being
willing
to
speak
on
it
or
and
and
we're
going
to
forthcoming
soon
in
the
lair
file
community?
We're
gonna
be
purposeful
about
expressing
how
how
it
is
that
we
weigh
the
value
of
those
different
types
of
contributions,
so
so
that
was
kind
of
an
aside,
but
measure
e
itself
was
started
about
a
year
ago.
A
couple
of
a
couple
of
us
got
together
and
and
started
and
anymore.
It's
you
know
fairly
shortly.
B: So Sacco was sort of the champion, the individual to prove that this can work, and it did. And so now we've got two Google Summer of Code interns; I actually will have a Community Bridge intern, and then, hopefully, a Google Season of Docs intern as well, on Meshery and Layer5.
B: This is a screenshot of one of its interfaces today. It is a project that does have a user interface to it: it does have a CLI, and it does have a web-based user interface. Not all open-source projects do, but I think the longer open source grows and grows, the harder, in some respects...
B: ...it is to get a project off the ground, because the bar continues to get raised for individuals like all of us that go out and choose a tool. There are many factors, I think, that we consider when we go to use tools: who's behind it, how capable is it, is it actively being supported, and so on. And so, to Sacco's point earlier, it's no small thing to gather up all that energy.
B: This is a quick view of the architecture of the project. Not very complex, in so much as Meshery itself deploys as a set of Docker containers, and you can deploy Meshery on a host that only has Docker, so on your local machine, or you can deploy it in a Kubernetes environment. So, in some respects, you can deploy it out of cluster or in cluster.
B: Meshery has an API, and so these UIs... it has adapters for each of the service meshes that you see pictured, and it'll interface with Prometheus and Grafana. It was initially created to help answer the questions that people have while they're trying to adopt, learn, and understand a service mesh, and maybe choose which one they want to run. Immediately after that, when someone makes that choice, the next persona that it addresses is the operator: it helps you with ongoing management of your service mesh and the workloads on top of it.
B: If that interests you, I'm gonna try to maybe give a demo of Meshery, if we have time. So, quickly, I'll say that we're also just generally engaged with the CNCF, with a bunch of partners; we'll talk more about those. But we recently created a Service Mesh Performance working group inside the CNCF, and we're gonna do some distributed performance testing. So part of the impact Sacco had on the Meshery project was to bring forth support for multiple load generators.
B: Part of the value that Meshery provides is to assess the performance of your service mesh, and one of the ways that it does that is by generating a bunch of HTTP load against your endpoints and then measuring the latency of the responses, measuring the throughput by which it can shove, you know, a bunch of load against your service. And, turns out, performance engineering is quite involved, so we're gonna go off and, hopefully, we'll be creating a service mesh performance specification.
B: My point in highlighting that was to say, kind of in the context of community building, that recognition is important, that the ability to share with others and get feedback is important. And, you know, I'm tickled that those that are giving as much of their time as they wish are afforded the opportunity to have their name highlighted. What a beautiful, again, kind of virtuous cycle that occurs here.
B: We've also been fortunate to go off and establish formal partnerships with the logos that you see here. I, for one, am really stretched too thin in trying to maintain these partnerships and do these things, and, if I haven't made it extraordinarily evident, there's no way that much of this could have been accomplished without folks like Sacco and Fergana and Nobby and Kush joining in and doing things.
B: So one of the questions was, you know: does Meshery assume multi-mesh deployments in infrastructures, and provide its value in bridging them? That's a great question, because if I only have a single service mesh, like just a Linkerd deployment, what value would Meshery bring? The performance testing is a one-off value; what are the ongoing values that it might bring, if any? That's a great question. I'll submit this to you:
B: Some of that immediate value that we worked on initially is about adoption and facilitating it, as one of the first questions that people have, which is why we do lifecycle management of those meshes and focus on some of those adoption questions. As the project evolves, it will provide more ongoing, kind of day-two...
B
You
value,
if
you
will,
the
performance
testing
is
an
interesting
thing
as
I
think
it
strikes
a
lot
of
folks
that
that
it
might
be
a
sort
of
a
one-time
question
like
hey
once
I've
adopted
great,
then
I
don't
need
this
tool
anymore
and
part
of
my
adoption.
Consideration
was
performance
as
it
turns
out.
Performance
is
an
ongoing
concern.
B
Once
you've
chosen
to
deployment
you've
deployed,
your
workloads
turns
out,
like
your
environments,
change.
Sometimes
you
add
a
note
or
take
away
a
note.
Sometimes
you
roll
out
a
new
release
of
your
workload.
Sometimes
you
upgrade
the
service
mesh
itself.
Sometimes
you
change
the
service
mesh
config
and
have
it
do
more
things
or
less
things?
Any
of
those
changes
affect
the
how
your
mesh
is
performing
and
and
the
value
that's
providing.
B
B: You continue to measure changes, and to facilitate that you can look back at a history of having run some performance tests, and we'll even run some here. You're able to run one and compare it to another, or to a third or fourth run, and so this maybe speaks a bit more to the ongoing value that performance management would have: to understand, over time, how maybe there's nothing...
B: ...that's changed in your environment, and yet you're either seeing consistency in the performance of your mesh and the performance of your apps, or not. And so, actually, the KubeCon talk that was to be delivered, well, it's coming up now, was to help people understand the cost of a given network function. So if you told the mesh to introspect a packet, take a look at a JWT token, maybe see who's signed in, and do something based on that...
B
How
much
does
that
thing
that
one
little
servicemen
function,
cost
you
and
is
it?
Is
that
a
higher
cost
of
using
JWT
versus
cookies
or
versus
this
or
versus?
And
so
yeah,
the
the
more
that
the
project
unfolds
and
the
more
rock
stars
like
Kosh,
show
up
the
the
deeper
that
the
thing
the
the
deeper
of
an
answer
will
happen:
Elva
nya,
that
was
a
fantastic
question.
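The per-feature cost question he describes can be sketched as diffing two otherwise-identical benchmark runs, one with the feature off and one with it on. This is an illustration of the idea only, with made-up numbers, not output from a real benchmark:

```python
# Hypothetical sketch: estimate the overhead a single mesh feature (e.g. JWT
# validation) adds, by diffing two otherwise-identical load-test runs.
def feature_overhead(baseline, with_feature):
    """Return the added latency per request for each recorded percentile."""
    return {p: with_feature[p] - baseline[p] for p in baseline}

baseline    = {"p50_ms": 2.1, "p99_ms": 9.0}   # mesh with the feature disabled
jwt_enabled = {"p50_ms": 2.6, "p99_ms": 11.5}  # same workload, JWT checks on

# Per-request cost attributable to JWT validation, by percentile.
print(feature_overhead(baseline, jwt_enabled))
```

Repeating the comparison for cookies versus JWTs, or config A versus config B, under your own workload is the kind of apples-to-apples answer that a one-off published benchmark can't give you.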
B: Another one that had come out was: you have experience with many service mesh solutions, so which one do you think is a better solution for a multi-cluster Kubernetes infrastructure where the network is not flat? And so, Elven, that's another fantastic question, and both of yours feed right into part of what Meshery is attempting to accomplish, which is that exact question: which one should you use, considering that the environment looks like this, or this, or this? Well, I've got some opinions, and you're right.
B
I've
been
focused
in
the
space
for
a
while
I've
published
a
few
books
to
O'reilly
on
service
meshes
and
and
we'll
be
doing
a
third
one
here.
Well
well,
we'll
see
how
long
it
takes,
but
I'm
gonna
withhold
those
opinions,
not
because
we're
recording
or
not,
because
I'm
afraid
that
people
might
flame
me
on
Twitter
or
what-have-you.
B
Actually,
because
that's
in
part,
why
we
built
mesh,
read
it,
that's
in
part
way
connects
to
so
many
I
would
just
I
would
love
it
if
you
went
and
grabbed
the
tool
and
answered
that
question
came
back
and
told
me
or
told
the
community.
Here's
what
you
consider,
that
is
the
best
mesh
based
on
the
fact
that
you're
able
to
take
massery
deploy
a
few
different
meshes
in
the
environment,
we're
talking
about
a
multi
multi
cluster
kubernetes
environment
with
not
with
hierarchical
or
multiple,
with
routed
networks.
B
It's
an
awesome,
yeah
I
hope
that
is
a
good
teaser
for
you
and
doesn't
piss
you
off
that
I!
Didn't
answer
the
question
directly.
That's
not
my
intention.
My
intention
is
just
like
that's
perfect.
We
would
love
to
have
a
user.
Tell
us
actually
doesn't
really
answer
that
question
I,
think,
but
because
that's
that's
why
we
built
it.
It's
like
to
be
able
to
help
you
answer
that
question
because,
because
part
of
it
is
well
how
many
networks
are
there
and
like
how
many
clusters,
and
so
what?
B
It's
inevitable
that
well
a
it
was
a
point
in
time
like
yeah,
they
did
some
great
ones
back
on
sto,
100
and
in
liquor
d
such
as
such,
but
they
didn't
do
it
across
more
than
two.
At
a
time
which
means
that
when
you
do
another
comparison,
you
read
about
console
versus
linker
D
is
like
well
that
was
a
totally
different
environment
totally
different
set
up.
It
was
also
back
at
that
point
in
time
it
wasn't
my
environment.
Wasn't
my
setup
I'm,
also
not
running
that
version.
B: So, if there's any time left, I just... I think I kind of implicitly gave a little bit of a demo of the project already. As a matter of fact, we're looking at some things right here. This button, Kush brought here, I think, last week, so we're already seeing his work show up on stage. The project itself, kind of like the community...
B
An
example
of
what
I
mean
by
the
project
being
delightful
is
something
like
well
and
actually
I
would
love
for
any
or
all
of
you
to
to
try
this
and
see
if
this
proved
me
wrong,
and
that's
that
or
prove
me
right.
That's
this
project.
If
you
go
to
install
but
there's
you
know,
there's
actually
a
bunch
of
ways
to
install
it
it.
B
It
supports
a
bunch
of
different
package
systems
like
brew
and
scoop
and
deploying
on
various
environments,
but
depending
upon
what
command
you
end
up
using
or
how
it
is
that
you
deploy
it's
the
case
that
you
can
use
this
command
to
deploy
or
execute
one
command
and
with
only
one
command.
It'll
download
mystery,
run
mystery
and
bring
up
your
web-based
interface
and
drop
you
at
the
login
prompt
and
to
me.
B: Well, that's kind of nice, because normally, when you're installing a piece of software, there's a couple of steps that you need to do, and it's not like those steps are usually a big hassle, but hopefully that delights people a little bit. There are a few Easter eggs in there, if you will, or things that maybe give people a chuckle. So I figured, and again, just cut me off when we're out of time, but I think we'll do this brief demo of Meshery itself.
B: ...local to me; hey, this load generator can go out and perform that test. And I shouldn't have set it to 30 seconds, because that means that I've got to make a funny joke or something like that while we look at this. But part of what Meshery will do is: it operates in the Kubernetes environment or out of it; that was kind of the point of what I was trying to demo here. It'll connect to Prometheus; it'll connect to Grafana.
B
You
can
do
that
in
context
of
the
effect
that
it
has
on
your
environment
and
right
now,
we're
not
we're
really
not
seeing
an
effect
on
the
environment,
because
that
was
out.
I
was
hammering
google.com
with
a
bunch
of
you
know,
get
requests,
not
our
local
deployment
so
and
so
yeah
I
guess,
that's
it
and
a
couple
of
things.
That's
an
example
of
mesh
reconnecting
to
other
pieces
of
infrastructure,
including
kubernetes,
as
well
as
testing
things
off
the
mesh
and
on
the
mesh.
B: If we look at the Network Service Mesh adapter there: this service mesh here is called Network Service Mesh; the group of maintainers there gave it a very generic name. Anyway, it doesn't have as many capabilities. So, unlike Service Mesh Interface, SMI, that spec that provides a uniform capability across service meshes...
B
Memory
intentionally
allows
each
mesh
to
expose
some
of
its
mesh
specific
value,
one
of
those
things
being
for
SEO
in
the
ISTE
adapter
being
that
measure
e
will
run
a
series
of
checks.
I
was
saying
before
to
help
you
verify
that
your
service
measure
config,
is
doing
best
practices
things
by
the
way.
I
guess.
I
can't
help
myself
in
this
regard,
to
say
that
that
particular
subject
service
met
best
practices
and
patterns,
like
that's,
actually
the
the
book
that
I'm
just
about
to
begin
working
on
and
so
I'm.
B: Okay, so, yep, you can use Meshery to deploy various configurations of the meshes, and to deploy various sample apps. Each of the meshes generally comes with its own sample app. Bookinfo is the canonical sample app for Istio; many of you may have seen it. It's a very simple app, and it looks like this when you're hitting the interface. We can talk more about how it works and what it does.
B
But
the
point
is
that
that's
the
isseo
sample
app
a
nice
thing
about
battery
is
that
you
can
it
helps
you
intermix
sample
apps
from
one
service
mesh
to
the
next.
So
if
you
wanted
to
run
linker,
D
and
kind
of
see
how
maybe
this
sample
app
the
sto
one
performs
on
linker
D,
it
helps
facilitate
that
which
I
think
is
also
helpful.
In
just
understanding.
B
What
was
I
gonna,
I
I
think
we
I've
been
just
running
around
talking
about
a
bunch
of
different
things,
without
really
having
kind
of
a
a
flow
here.
What
I
was
gonna,
you
probably
show
was
just
that
measure
e
interface,
you
know
is:
is
a
management
plane,
it
interfaces
with
all
the
meshes
and
with
kubernetes,
and
so
right
now
in
this
environment,
I
am
running
book
info
that
sample
app
on
my
local
dock.
My
local
communities
instance.
B: This is not gonna be horrifically impressive, but if we were to sit here and watch the running pods: those running pods are the four services that are running for the Bookinfo app. There's a product page; you have details and ratings, the stars that you see. And you can watch as Meshery offboards a workload, removing it from the mesh.
B: This one was running on Istio, which is why you see two of two containers ready inside each of these pods, because one of those containers is a sidecar proxy, an Envoy in this case. So there's more functionality in the tool, and hopefully a lot more to come. This has been the biggest project that the community has been focused on.
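The 2/2 READY count he points out is what kubectl shows when Istio's sidecar injection is on; the pod names and hashes below are invented for illustration, and the output shape may vary by version:

```shell
$ kubectl get pods
NAME                              READY   STATUS    RESTARTS   AGE
productpage-v1-6b746f74dc-9xvcs   2/2     Running   0          3m
details-v1-74f858558f-askzz       2/2     Running   0          3m
ratings-v1-7855f5bcb9-q5p72       2/2     Running   0          3m
reviews-v1-64bc5454b9-kqkzd       2/2     Running   0          3m
```

Each pod's second container is the injected Envoy sidecar; without the mesh, these would read 1/1.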
B: I think I could talk to you guys all day about service meshes and Meshery, but I want to make sure... I think there was another question. Can I ask it, or can you ask it in a GitHub issue or other channels? Oh, okay, yeah, great. So the question is: how is the performance benchmark set up? Do you install some sort of, like, ping mesh? And since we have time, I'd love to answer that, because, actually, Kush specifically, I mean, he didn't know...
B
He
was
gonna
called
out
this
much
when
he
joined
it
today,
but
he
and
actually
Saco,
probably
didn't
realize
this
either.
But
the
answer
to
that
question
is
but
yeah
that
I
think
well,
maybe
there's
multiple
faucets
to
the
answer
to
this
question.
But
one
of
the
answers
is
that
measure
e
has
is
extensible
in
a
couple
of
ways
in
three
ways
today,
and
one
of
those
ways
is
by
choice
of
load
load
generator,
and
so
some
of
you
may
not
be
familiar
with
for
IO
as
a
load.
B: ...generator. To generate load, you get your choice here: these two, and then shortly a fourth that some of you may or may not be familiar with. Most of you have probably heard of Envoy as a proxy; there's a subproject within the Envoy ecosystem called Nighthawk. Nighthawk is a load generator; it was built for purposes of testing the performance of Envoy. So Kush and others are going to be working on providing support for Nighthawk, and when they do, they're gonna...
B: Going forth, hopefully not just with Nighthawk, but with basically your choice of load generator, the plan is to do so in a distributed fashion. Today we're generating load from a single place and pushing that over; well, it might also be the case that you want load generated internally against some really popular workload here, and so maybe that workload needs to...
B
You
know
calm
as
if
it's
coming
from
this
other
internal
consumer
and
not
just
an
external
or
maybe
you're
like
well
hey
I've,
got
we've
got
lots
of
people
in
the
US
hitting
us,
but
we've
also
got
some
popularity
in
China
or
what
have
you,
and
so
we
want
to
generate
that
load
for
multiple
places.
Point
is
forthcoming
over
the
next
few
months,
hopefully,
will
be
support
for
a
distributed
load
generation.
B: And so, on that, yeah, I'm gonna be quiet and say: hey, thanks for listening to me about Meshery, and thanks for creating Goup as a place that I want to be. It's a great community.
B: ...is to understand their mesh performance on an ongoing basis; that's one. Two: they can use it to analyze the configuration of their service mesh, and Meshery will tell them... you know, I was showing that example of Meshery sort of evaluating the config and telling them if they're doing it wrong or doing it right. As more time unfolds, those two feature areas, performance management and configuration management, will go deeper, the configuration one hopefully in the next three months as well.
B: There will be a new visual topology, so that people can see the various services on their mesh, where the traffic is routing, and what the latency is. For the example that I was giving earlier: if you configure your mesh to open up a request, look at who's signed in, and redirect that traffic, how much is that costing me?
B
Can
visually
now
see
that
that's
going
on,
but
is
the
latency
there
too
great
and
gonna
push
me
over
my
SLO
for
that
service
anyway,
there's
I
think,
there's
a
there's,
a
kind
of
a
long-winded
answer
to
how
it
is
that
large
organizations
might
use
measuring
the
other
way
in
which
a
large,
like
specifically
large
organizations
they
might
use
measure?
Is
it
they're
the
ones
that
are
more
likely
to
find
themselves
using
multiple
types
of
service
measures
and
so
measure
will
provide
a
solution
there
and.