Description
Over the last few years, the microservice architecture has become an increasingly popular framework for Kubernetes-based containerized applications. In this session, we will see how Open Service Mesh makes it effortless to enable the most sought-after features and functionality that organizations look for in an SMI-compliant service mesh. Topics: (i) To Mesh or Not to Mesh, (ii) OSM: Features, Components & Interactions, (iii) Managed OSM, (iv) Roadmap and Integration with Other Services.
That said, let's start off with the agenda. Here are some of the topics that we'll cover as a part of this particular session. We will try and understand what a service mesh is all about and what the primary components are that actually go into a service mesh. What is the Service Mesh Interface? And then we'll look at Microsoft's offering in this particular space, which is Open Service Mesh.
So first, let's start off with what a service mesh is. With the proliferation of microservices, something was required to support service-to-service communication in a secure fashion. One key thing about a service mesh is that it started off pretty much as a network infrastructure component; it's not really an application-specific component.
The idea was to have a component that would sit outside the application and take care of key aspects like service-to-service security, monitoring and traffic routing. All of these concerns were to be handled outside of the application. So I think the key takeaway here is to think about a service mesh as a networking or infrastructure component that provides these capabilities and abstracts the application developer from having to build them as a part of their overall solution.
So what does the overall service mesh architecture look like? One key aspect is that if you're trying to build something which monitors service-to-service communication, you need some sort of proxy component that sits alongside your application. That's the sidecar pattern implemented by most service meshes. We will see how the service mesh platform uses a sidecar to inspect the traffic that's flowing between the services.
If you look at some of the key capabilities that a service mesh will offer you: routing between services; controlling who is able to access your services; controlling the ingress and the egress; and defining traffic routing and splitting patterns, say a 50/50 split between multiple versions of a service that you might have set up.
So that covers what a service mesh is all about, but what are the key considerations you should look at while evaluating and selecting a service mesh platform? The first thing really is: do you really need a service mesh?
If the application that you're developing is a very small-scale application, then you should definitely inspect and check whether you really need to go down the service mesh route; for a small-scale application you may be able to get by without a service mesh implementation. Do you need a service mesh that spans clusters?
Another key aspect: when you're looking at service-to-service communication and having a service mesh control it, do you need a mesh that just works with Kubernetes, or does it need to span other forms of compute, like virtual machines?
Do you need a service mesh that supports Windows containers? If your deployment happens to be on Kubernetes clusters which are based on Windows, then which service mesh offering can come in handy for you? Do you need commercial support from the provider itself? You may need to evaluate that criterion in your service mesh selection process. And one key aspect, obviously, is the overhead of actually operating a service mesh.
Yes, a service mesh does give you all these capabilities, but just by virtue of having an inspection proxy in the middle, you are bound to have a bit of an impact from a performance perspective, so try and factor that in. What are the overheads? Try to understand that.
Do you want your policies to be enforced transparently, which is what a networking service mesh does, or do you want the developers to build in and work with a lot of mesh-like capabilities as a part of the application development process itself? In the latter case your decision process might lead you towards maybe a Dapr sort of implementation instead of a service mesh. Now let's try and get a broad understanding of the service mesh landscape.
I think one of the most full-featured, let's say complex but extensible, offerings out there that can span multiple clusters would be something like Istio. If you're looking at something relatively lightweight, there's Linkerd; it has fewer features compared to the other platforms, but it's something you could look at. Then there's the case where you're looking at a mesh that spans multiple forms of compute.
As we discussed: virtual machines as well as Kubernetes clusters. There you might want to look at Consul Connect, and then we will talk about the Open Service Mesh offering. So you do have a plethora of service mesh options, and you may want to pick one based on the capabilities that we just discussed.
So let's first understand what the Service Mesh Interface is. One key aspect is that, with the number of service meshes being developed, it was important that there was a standard spec that people could stick to. The Service Mesh Interface (SMI) is a spec for a standard interface for service meshes on Kubernetes. The SMI interface can be installed on Kubernetes using Kubernetes custom resource definitions and extension APIs.
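As a quick illustration (mine, not from the slides), once a mesh that implements SMI is installed, the SMI types show up as ordinary CRDs on the cluster; the exact names and versions vary with the SMI release:

```
# List the SMI custom resource definitions registered on the cluster.
kubectl get crds | grep smi-spec.io

# Typical entries (versions vary by SMI release):
#   traffictargets.access.smi-spec.io
#   httproutegroups.specs.smi-spec.io
#   trafficsplits.split.smi-spec.io
```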
The idea behind having an SMI interface is to provide a basic set of features for most of the common use cases. If you're looking at, say, mTLS, or if you're looking at traffic routing, those features have been defined as a part of the Service Mesh Interface itself. Another key aspect is: why do I need a service mesh interface at all? Think about it as akin to the way we have components like Ingress as a part of our Kubernetes definitions.
The key aspect is that the Ingress itself can be implemented by multiple options, like maybe an application gateway, or NGINX as a proxy, and so on. Similarly here, you could look at having a Service Mesh Interface based implementation: your tooling and your ecosystem could just talk using the SMI interface, and the underlying services can be provided by any one of the service meshes that implement the SMI interface.
Today the Service Mesh Interface covers a basic set of use cases, and we do expect the interface itself to encompass more and more capabilities as we go along, but the most key aspects of a service mesh are already covered as part of the SMI interface. Let's understand some of these capabilities in a bit more detail. These are the key aspects that you'd associate with a service mesh: traffic policy and access controls.
You want to be able to restrict which pods can communicate with each other and which pods are accessible. You want to ensure that service-to-service communication is encrypted and secure. You want to pick up telemetry from your services: metrics such as the latency between the services.
You also want to be able to shift and route traffic between different services; from a progressive delivery scenario, you may want to decide whether to go down the canary route, the blue-green route or the A/B route. We'll see how some of these traffic management capabilities are part of the SMI interface.
All of these tools talk to the SMI interface, and the SMI providers in this case are adapters which are implemented by multiple service mesh platforms today. The key thing here is that your tooling can still talk to a common interface, and that interface can be implemented by different service meshes.
So the key aspect here is that you're not tied to any specific service mesh implementation, but you still get all the features, and you can have scripts that keep working even when the underlying service mesh implementation is swapped out.
Today we have a multitude of service meshes which support the SMI interface, and this number is increasing. The most common ones, like Istio, Open Service Mesh and Linkerd, all support the SMI interface. Let me come to one aspect, or let's say one implementation, where the SMI concept really comes in handy: folks use Flagger as a progressive delivery operator for Kubernetes.
Say you want to implement some sort of canary deployment, where you have a secondary deployment to which you route a bit of the traffic and, once you've tested it out, you swap over; or you want to do A/B testing from a feature-test perspective; or you want to do blue-green, where you have separate parallel deployments so you can test everything and then swap the environments out.
Flagger helps you do that, and Flagger, by virtue of integrating with the SMI interface, can leverage any of the service meshes that support SMI to provide that capability. So Flagger is a great example of how implementing the SMI interface can come in handy. A minimal sketch of what such a canary definition looks like follows.
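This is an illustration only, not from the talk: the target name, port and analysis thresholds below are placeholder assumptions, and `provider: smi` asks Flagger to drive the rollout through SMI TrafficSplits.

```
# Hypothetical Flagger Canary definition driven through the SMI APIs.
kubectl apply -f - <<EOF
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: bookstore
  namespace: bookstore
spec:
  provider: smi                  # shift traffic via SMI TrafficSplit
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: bookstore
  service:
    port: 8080                   # placeholder port
  analysis:
    interval: 1m                 # evaluate metrics every minute
    threshold: 5                 # roll back after 5 failed checks
    maxWeight: 50                # cap canary traffic at 50%
    stepWeight: 10               # shift traffic in 10% increments
EOF
```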
So let's now try and understand what Open Service Mesh is all about. This is Microsoft's implementation of the service mesh concepts that we've discussed so far. Let's run through some of the key attributes of Open Service Mesh.
OSM is a lightweight and extensible cloud-native service mesh. It is based on the Envoy proxy, the CNCF project, and it implements SMI. It was created by Microsoft and was donated to the CNCF about a couple of years back.
So what are some of the key features offered by Open Service Mesh? It offers traffic shifting, so if you want to split traffic across multiple versions of a service, you can do that. A key aspect is securing service-to-service traffic using mTLS, which it supports, and it also supports external certificate management solutions.
We'll talk through some of the certificate management solutions that OSM integrates with. Again, one key aspect that we've discussed from a service mesh perspective is getting observability and metrics for your services, so we'll see how the Open Service Mesh implementation offers support for observability and metrics. All of this works by injecting a sidecar into your deployments, and we'll see how these features actually work.
Right, so if you look at the Open Service Mesh features here, going from left to right, the first thing is the ingress traffic policies: you want to control what, or who, can access the services that are deployed.
Just to give you a quick walkthrough: let's say there is a bookbuyer service, which is the legitimate service that should be allowed to talk to the bookstore service and actually buy books. The bookthief is a service that should not be allowed to do that; it should not be allowed to talk to the bookstore service.
So what are the OSM features here? First, as I discussed, the ingress traffic policies, to control which of your services is accessible over ingress. Point two indicates that there is automatic sidecar injection of the Envoy proxy: the minute you configure a particular namespace as being monitored by OSM, the Envoy proxy gets injected automatically (the underlying mechanism is sketched below), which covers the bookbuyer, the bookthief, the bookstore and the bookwarehouse.
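As a hedged aside, not shown on the slide: injection is driven by marking the namespace, and the `osm namespace add` command we'll use later sets this for you.

```
# OSM injects the Envoy sidecar into new pods created in namespaces that
# carry this annotation; osm namespace add applies it for you.
kubectl annotate namespace bookstore openservicemesh.io/sidecar-injection=enabled
```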
And because we injected the proxies, they are the ones responsible for looking at what sort of traffic policy has been applied, whether that's routing or access control. That's what the automatic sidecar injection gives you. If you look at point three, the scenario I was talking about was allowing the bookbuyer to talk to the bookstore, but not the bookthief.
That is where you have service-to-service access control. Point four: if you have two versions of the bookstore service, you may want to do progressive delivery, switching from bookstore v1 to bookstore v2 using any of the methods we discussed earlier; that traffic splitting is done by Open Service Mesh. And point five is controlling things from an egress perspective, when there is outbound communication.
You may want to control who is allowed to talk to the outside and who is not. Point six covers the observability capabilities that we spoke about: being able to track the metrics reported by these services, trace the service-to-service communication and access the logs. So all the key capabilities that we've discussed so far as part of a service mesh are covered by Open Service Mesh.
Right, so let's try and understand the implementation of OSM itself. Here we have a proxy control plane, which is the component responsible for talking to the proxies, the sidecars that are being injected.
This communication happens over a secure channel, over mTLS, and it is required for you to be able to flow your config and your policies down from the proxy control plane to the actual proxies that are sitting on the nodes. What you also have is a certificate manager, which is what helps you provide mTLS between your services, and as I said, you can have multiple, different certificate components being leveraged here.
The endpoints providers help OSM work with different kinds of platforms. As I said, OSM is available on multiple platforms: it's available on AKS, the managed Kubernetes offering, and it can also be installed on any Kubernetes cluster, so the endpoints provider helps manage things based on the kind of endpoint you're finally going to be talking to. The mesh specification component takes all of the policies held by the service mesh controller and packages them into a structure that can be related back to Envoy, so that the configuration can be pushed down to the Envoy sidecars themselves.
In terms of mTLS support, we do have mutual TLS for pod-to-pod encryption. Version 1.0 was released a little earlier this year. We do have support for the OSS upstream version, where you install OSM yourself on a Kubernetes cluster, and it's supported on AKS too. The mechanism, as we discussed, is a sidecar, which is Envoy in this case; it works at layer seven, and that's how you get HTTP-based access control. You have the access control policies we discussed, for blocking service-to-service communication, and the installation method changes based on where you're installing Open Service Mesh. In the next couple of slides I'll go through in a bit more detail how the Open Service Mesh deployment varies based on whether you're going for OSM on your own Kubernetes cluster or on AKS.
One of the key considerations that we discussed at the start of the session was the overhead that introducing a sidecar proxy is going to cause: what does it mean from a resource consumption perspective and a latency perspective? The numbers that I've pulled in here are from a set of load tests that were done with Istio.
I think the mesh in this case was about a thousand-odd services, 2,000 sidecars and about 70,000 mesh-wide requests per second. Based on the tests that were run, this is what the summary looks like: the proxy adds about 2.65 milliseconds of latency, and in terms of resource consumption it's about 0.35 vCPUs and about 40 MB of memory per 1,000 requests per second. So be mindful of this.
This may not really have an impact in most of your scenarios, but you should definitely consider this impact while designing your solutions.
For Open Service Mesh, there are a lot of components that we deploy as a part of the installation itself, and for the pods that are installed to support OSM we have specified certain default limits for CPU and memory. These are documented on the OSM website; I suggest you have a look at them.
I've just pulled in the latest default configuration, and it's pretty much the same as it was some time back; that hasn't changed. But this also gives you a sense of the resource consumption for the components that are installed as a part of the OSM installation process itself.
Okay, now let's focus on managed Open Service Mesh. If you are looking at Kubernetes clusters on AKS, or if you're looking at Azure Arc-enabled Kubernetes, we do have the option of getting a managed OSM there.
The managed Open Service Mesh is fully managed and supported by Microsoft. You can install the managed version of Open Service Mesh via an add-on on AKS, and there's an extension that you can use for Azure Arc-enabled Kubernetes. Both of these implementations, for Azure Kubernetes Service and for Azure Arc-enabled Kubernetes, are GA, and you can look at the docs online for some more information on these components.
So let's now understand the OSM differences when you're looking at the managed version. The first column here is the managed version, which would be running on AKS or Arc-enabled Kubernetes, installed, as I said, via the AKS add-on method or the Arc extension. The other scenario is where you do a self-install of OSM on any Kubernetes cluster that you're running, which could be outside of the Azure-managed Kubernetes clusters.
For the Arc or AKS components, you can install OSM very easily by just enabling an add-on; I can show you how that's done on the portal as well as via the CLI. Whereas with the self-installed option, you need to install the OSM CLI and then run some osm commands to install OSM on your Kubernetes cluster.
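As a sketch of the two paths (the resource group and cluster names are placeholders):

```
# Managed OSM: enable the AKS add-on.
az aks enable-addons \
  --addons open-service-mesh \
  --resource-group myResourceGroup \
  --name myAKSCluster

# Self-install: use the OSM CLI against any Kubernetes cluster.
osm install --mesh-name osm --osm-namespace osm-system
```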
One difference here is where the OSM components land: in the case of AKS and Arc-enabled Kubernetes they get installed into the kube-system namespace, whereas with the OSM self-install option they get installed into the osm-system namespace. That shouldn't really impact you, but if you've written some scripting around it, you may want to query the right namespaces to get your information.
The managed OSM versions, which work with AKS and Arc-enabled Kubernetes, are fully supported by Microsoft, and you can raise an Azure support ticket in case you see any challenges there. With the self-installed version, the support is from the community; you would go down the GitHub issues route to raise an issue and rely on the community to support you in case you have a challenge.
There is no OSM dashboard in the case of the managed version, whereas there is one in the case of the self-installed version. In terms of features and capabilities, everything we spoke about, mTLS, traffic routing, the access policies, the split policies and observability, is available in both deployments, the managed OSM one and the self-installed version.
In the case of the managed version you get a self-signed certificate via the built-in Tresor component, whereas with the self-installed version you also have the option of plugging in different certificate managers. Before we do the walkthrough, a quick recap: as we spoke about, do you really need a service mesh?
We have Dapr in this space too, so let's try and understand the primary differences: when would you go for Dapr, when would you go for OSM, and can you actually go for both? If you look at this particular diagram, the capabilities definitely do have an overlap, and the overlap is primarily the secure service-to-service communication, which is mTLS, and the observability part: the tracing.
All of those capabilities are part of OSM as well as Dapr. OSM has additional capabilities in terms of traffic routing and splitting, whereas with Dapr those are not native, though you could work along with an ingress controller to do that. I think one key aspect that you need to consider here is that Dapr is not a service mesh.
OSM is a proper networking service mesh, while Dapr was meant to provide building blocks that make it easier for application developers to build applications as microservices. So think about Dapr as being more dev-focused, and OSM as more of an infrastructure-focused networking component. Do both coexist?
The answer is absolutely yes, but if you end up using both together, ensure the common capabilities are not turned on twice. So in case you're using Dapr along with OSM, make sure you use the mTLS encryption capabilities of only one of those components, and not both at the same time.
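For instance, one hedged way to do that, assuming you decide to let OSM own transport encryption, is to turn off Dapr's own mTLS in its Configuration resource (the resource name here is a placeholder):

```
# Disable Dapr's built-in mTLS so that only OSM encrypts pod-to-pod traffic.
kubectl apply -f - <<EOF
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig
spec:
  mtls:
    enabled: false
EOF
```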
So let's start off with the walkthrough on setting up OSM on a Kubernetes cluster. For the sake of this walkthrough, I've used the same demo scenario that I spoke to you about. This is a sample application that's available on the Open Service Mesh website, and it is very easy to set up this bookstore sample application; the YAML templates are given there.
Let me just set up the application for you. There is a bookbuyer service and there's a bookthief service, and both of these services are currently capable of talking to the bookstore service; the bookstore service in turn ends up talking to the bookwarehouse, which, let's say, could be talking to your database components.
In a Kubernetes cluster you could have all of these services deployed, but from an application perspective it's critical that only the bookbuyer service is allowed to talk to the bookstore, and that the bookstore is the only service allowed to talk to the bookwarehouse.
You want to prevent the bookthief service from talking to the bookstore service. Let's take that as our objective and see how we can go about implementing it using OSM.
For setting this up, I've gone down the route of using the managed OSM offering, because I had an AKS cluster in place, and as discussed earlier, the way to do that on AKS is by just enabling the add-on.
Just by specifying that you want to enable the add-on for Open Service Mesh, the installation and configuration process will go through, and it will deploy the relevant pods, the proxy control plane and so on. Let's have a look at the steps.
It'll take you a few minutes when you run this particular step, but once that's done, if you browse through all of the pods that are set up, you will see a lot of them. As I said, the ones that get installed via the add-on route go and sit in the kube-system namespace, unlike an install of OSM on your own Kubernetes cluster, where they go into the osm-system namespace. Here I've just filtered down to the pods that got installed, and we have the osm-injector and osm-controller pods that got created, and the specified default limits for memory, CPU, etc. have been applied to these pods; a quick way to verify is sketched below.
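A hedged verification step (the exact pod names are whatever the add-on generates):

```
# With the AKS add-on the OSM control plane lands in kube-system;
# a self-install would place it in osm-system instead.
kubectl get pods -n kube-system | grep osm
```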
It'll take a few minutes, but once that's done the OSM setup should be up. You can then go and check on the Azure portal, and it will show you that the OSM configuration is done there. I've gone the CLI route here, but you could have very easily done this through the Azure portal too. So what are the next steps? We need to tell OSM about the namespaces that it needs to monitor.
What we did for this particular sample application is, first, create multiple namespaces: we went ahead and created namespaces for the bookbuyer, the bookthief, the bookstore and the bookwarehouse, as sketched below. Once these namespaces were created, we then told the OSM controller components to go ahead and monitor them.
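Creating the namespaces is plain kubectl (the names follow the OSM bookstore sample):

```
# Create the demo namespaces used by the bookstore sample application.
kubectl create namespace bookbuyer
kubectl create namespace bookthief
kubectl create namespace bookstore
kubectl create namespace bookwarehouse
```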
You do that via `osm namespace add`, specifying the list of all the namespaces that you want to add; the command looks like the sketch below. One key thing here is that you will get a message saying all of these namespaces have been added to the mesh, and the same thing gets reflected on the Azure portal too. I didn't blow this screenshot up too much, but it shows what happens the minute you do this via the CLI.
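A sketch of the enrollment command (namespace names follow the sample):

```
# Enroll all four namespaces in the mesh; OSM will start injecting the
# Envoy sidecar into new pods created in them.
osm namespace add bookbuyer bookthief bookstore bookwarehouse
```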
Once you do this via the CLI, you will see in the OSM interface on the Azure portal that all of these namespaces are now being monitored by OSM. So what have we done so far? We've enabled the add-on, we've deployed a sample application across namespaces, and we've told OSM to monitor those specific namespaces. Now, by default, when you get OSM configured, it does not block traffic.
So if you had an application running and you went ahead and installed OSM, it does not mean that your current traffic will stop; your application will still be working at the end of the day. That's the default deployment behaviour for OSM. Now, when I go ahead and open up the bookthief website or the bookbuyer website, both of these services are constantly polling, making service calls to the bookstore.
So with the default deployment of OSM, you can see what's happening here: the bookthief service is working, and you can see its counter continuously incrementing. The same thing happens with the bookbuyer service, so both of them are able to talk to the bookstore, and the key thing here is that both of them are talking to bookstore v1; we've only deployed bookstore v1 so far.
Okay, so the key thing is that all traffic goes through at this point, and OSM is still not doing anything from an access policy perspective. The next thing we'll do is look at it from a traffic access perspective. I have this example here, and this is what we're trying to do right now.
If you look at it, we have the bookstore service, and we specify, as a part of this particular access policy, that only the bookbuyer service can actually access the bookstore service.
Now, it's very easy to apply this particular access policy: I've just gone ahead, taken the sample policy and applied it to the cluster; a minimal sketch of such a policy follows.
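This sketch follows the shape of the SMI access and specs APIs as used in the OSM bookstore sample; the exact API versions and route details may differ in your OSM release:

```
# Allow only the bookbuyer service account to call the bookstore service account.
kubectl apply -f - <<EOF
apiVersion: specs.smi-spec.io/v1alpha4
kind: HTTPRouteGroup
metadata:
  name: bookstore-routes
  namespace: bookstore
spec:
  matches:
  - name: all-routes
    pathRegex: ".*"
    methods: ["GET"]
---
apiVersion: access.smi-spec.io/v1alpha3
kind: TrafficTarget
metadata:
  name: bookbuyer-access-bookstore
  namespace: bookstore
spec:
  destination:
    kind: ServiceAccount
    name: bookstore
    namespace: bookstore
  sources:
  - kind: ServiceAccount
    name: bookbuyer
    namespace: bookbuyer
  rules:
  - kind: HTTPRouteGroup
    name: bookstore-routes
    matches:
    - all-routes
EOF
```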
Once this happens, what we've effectively done is ensure that calls to the bookstore are only allowed from the bookbuyer service and not from the bookthief service. Consequently, on the websites that I have open, the bookbuyer service is still able to poll the bookstore service and get responses back, whereas the bookthief service is not able to call the bookstore service anymore, and effectively the number of books stolen stays stuck at that particular point in time. So implementing a simple access policy controlling which service can access which service was very easy to do using the YAML itself; a pretty straightforward implementation.
What we also did was look at it from an mTLS perspective, one of the key capabilities we've been talking about: service-to-service encryption and security. Once we have OSM in the picture with mTLS enabled, I've just picked up a Wireshark capture, and if you look at it, you will see that the service-to-service communication is locked down by mTLS.
You can always run Wireshark between the two IP addresses assigned to the pods; again, you can see that the traffic going between those two particular services is encrypted. So we've looked at traffic access and we've looked at mTLS; now let's look at traffic splitting.
What we've done to the same sample application now is deploy two versions of the bookstore application, a bookstore v1 and a v2. As I said, you can obviously make this a much more complex implementation by using Flagger, etc., but here, with the basic service mesh itself, what we want is to get a traffic split in place. This is what we're keen to do here.
In this case, we want to route, say, X percent of traffic to bookstore v1 and Y percent to bookstore v2. All you need to do is have a YAML in place in which you specify the split routing that you're looking at; the YAML here describes a 50/50 split, along the lines of the sketch below.
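A minimal sketch of such a split, with service names following the OSM sample; the split API version may vary with your OSM release:

```
# Split traffic 50/50 between the two bookstore versions.
kubectl apply -f - <<EOF
apiVersion: split.smi-spec.io/v1alpha2
kind: TrafficSplit
metadata:
  name: bookstore-split
  namespace: bookstore
spec:
  service: bookstore.bookstore   # root service that clients call
  backends:
  - service: bookstore-v1
    weight: 50
  - service: bookstore-v2
    weight: 50
EOF
```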
Take this particular example, apply it to your Kubernetes cluster, and at the end of it you will see the split happening: if you now look at bookstore v1 and v2, you will start seeing calls going to both of them.
The other thing we've done for the managed OSM version is integrate it with Azure Monitor. On Azure Monitor we have something called Azure Monitor Container Insights, which can give you insights about your Kubernetes cluster, and we've integrated OSM monitoring as a part of Container Insights as well. It's in preview right now, but the key thing it allows you to do is filter and view an inventory of all the services that are part of your service mesh. You can visualize and monitor the requests going across your service mesh: what is the request latency, what's the error rate, what does the resource utilization look like; and it provides an overall connection summary for the entire mesh infrastructure that's running on AKS. This monitor integration is one key aspect that's been delivered along with the managed OSM offering on AKS. And because it's on Azure Monitor, you can then go ahead and use KQL to start querying the monitor logs; if you want to pull up some metric information, you can do that.
You can also implement Jaeger-based tracing, and you can integrate with Prometheus and Grafana: if you're going to do some metric scraping for OSM, you can do that along with Prometheus.
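As a hedged sketch, the OSM CLI offers a convenience command to turn on Prometheus scraping for the pods in mesh namespaces (the namespace list follows the sample):

```
# Enable Prometheus metrics scraping for the demo namespaces.
osm metrics enable --namespace "bookbuyer, bookthief, bookstore, bookwarehouse"
```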
So what we've looked at so far is how you can install the managed OSM offering on AKS. The implementation on your own Kubernetes cluster, as I said, is very similar; the deployment will differ a bit, in that you will go the Helm chart route or the OSM CLI installation route. But at the end of the day, all the steps that we showed, applying a traffic access policy and a traffic split, stay exactly the same conceptually.
Let's look at the roadmap and timeline. We did get OSM v1.0 released a bit earlier in the year; v1.1 is out right now, and there's work happening on v1.2 and v1.3.
The roadmap is public, and you can go ahead and look at it at the URL that I've provided here. It shows which items are in the backlog, which are targeted for the future and which bugs are currently being worked on. Some of the key upcoming features that we are working on are Windows container support and some of the Azure-specific integrations.
The Azure Monitor integration that I showed you, which is in preview, we're looking at getting to GA, and we're looking at getting a bit more integration going with some of the ingress controller components. We do have AGIC on AKS, the Application Gateway Ingress Controller, and we're looking at integrating that as a part of the OSM capabilities next. If you go to the same GitHub site, openservicemesh/osm, you can then go to the issues.
You can group all the capabilities by milestone, and you will get a good sense of what's coming down the line from a capabilities perspective. These are some of the key ones that I pulled out for the future; the dates for these are not locked down, but you will see that some of the capabilities we are looking for are being targeted.
So that brings us to the end of this session. Thanks for taking time out, and I'm still open for questions, so in case you have any, please feel free to post them in the chat; we will take those questions and answer them as best as we can. Again, thank you very much for taking the time for this session, and enjoy the rest of the sessions.