Description
Join us for Kubernetes Forums Seoul, Sydney, Bengaluru and Delhi - learn more at kubecon.io
Don't miss KubeCon + CloudNativeCon 2020 events in Amsterdam March 30 - April 2, Shanghai July 28-30 and Boston November 17-20! Learn more at kubecon.io. The conference features presentations from developers and end users of Kubernetes, Prometheus, Envoy, and all of the other CNCF-hosted projects
https://sched.co/MRz7
Hey KubeCon, how's it going? So I'm Gabe, I'm a CNCF board member; I represent Microsoft on the board of the CNCF. I also head up the container services and products at Azure. But I want to talk today about service mesh: smart endpoints, dumb pipes.
The network's job is to forward packets. That's it. Any logic for things like compression or identity, all that lives inside of the network endpoints. Now, this is how we've designed networks for the past 25 years, and you could say it's worked pretty well.
Well, the answer is you build smarter pipes, and that's exactly what service mesh technology is. It makes the network smarter. Much smarter.
Instead of teaching your services to encrypt sessions, authorize clients, emit reasonable telemetry, or seamlessly shift traffic between application versions, service mesh technology pushes this logic into the network, controlled by a separate set of management APIs.
Now, today, vendors are providing lots of exciting options for how you can use service mesh. Istio, from Google and IBM, has lots of features and is very popular in the Kubernetes community. Consul, from HashiCorp, does a great job extending the concept of a service mesh beyond Kubernetes, providing solutions for things like VM-to-container connectivity that can really help you out in more diverse environments. Linkerd is beloved for being extremely lightweight and performant, with a great user experience; it's also the only CNCF project of the bunch. And who knows what service mesh technology might come next?
SMI defines a common, portable set of APIs that provide developers with interoperability across different service mesh technologies, including Istio, Linkerd, and Consul Connect. Now, if you're running applications, building tooling, or part of the cloud native vendor ecosystem, integrating with service mesh technology is a messy affair.
So what we did was join forces with the Linkerd folks, HashiCorp, the folks at Solo, and others to build an initial implementation that features the top three features we hear about from our customers. Number one, policy: how do we make it easy to assign identities and policies between services, things like mTLS? Number two, telemetry: how do we make it easy to capture key metrics like latency and error rate? Number three, routing:
How do we make it easy to shift and weight traffic between services? Now, if what you need from a service mesh is part of this initial SMI specification, taking advantage of the service mesh ecosystem now just requires one integration to the SMI API.
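Those three areas map onto separate Kubernetes API groups in the SMI specification. As a rough sketch based on the early v1alpha1 drafts (the group and kind names shown here may have changed since):

```yaml
# Policy: which identities may talk to which services (access API group)
apiVersion: access.smi-spec.io/v1alpha1
kind: TrafficTarget
---
# Telemetry: standard traffic metrics (metrics API group)
apiVersion: metrics.smi-spec.io/v1alpha1
kind: TrafficMetrics
---
# Routing: weighted traffic shifting (split API group)
apiVersion: split.smi-spec.io/v1alpha1
kind: TrafficSplit
```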
Now, this is just the beginning of what we hope to accomplish with SMI. With many exciting new capabilities under development in the service mesh space, we fully expect the SMI specification to evolve new capabilities over time. Now, I also want to point out that this idea is not new.
This is not a new concept. The idea behind Service Mesh Interface follows in the footsteps of existing Kubernetes resources like Ingress or NetworkPolicy. So, like Ingress, SMI does not provide an implementation itself. Instead, the SMI specification defines a common set of APIs which allows mesh providers to deliver their own best-of-breed solutions.
The cloud native ecosystem has always valued interfaces and modular design; SMI is now just applying the same principle to the world of service mesh.
So let me show you how this works. We're going to start with a traffic split example powered by Istio using the SMI APIs. The TrafficSplit API in SMI allows apps to control traffic weights. So in this demo, we're using Flagger from Weaveworks to run an automated canary deployment, which will gradually shift traffic from the current version to a candidate version.
Now, if the response times remain within limits, we'll continue to advance the deployment, increasing the amount of traffic to the candidate version. Interestingly, Flagger has been updated with SMI support; in fact, I saw it already got merged into master for the traffic split API. So, you know, this is very real today. And really, what makes this possible is the SMI Istio adapter, which is deployed and watching for updates to the TrafficSplit API, reconfiguring Istio automatically.
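A TrafficSplit object of the kind Flagger writes might look like the sketch below. The field names follow the v1alpha1 draft of the SMI spec and the service names are hypothetical; early drafts expressed weights as Kubernetes quantities rather than plain integers:

```yaml
apiVersion: split.smi-spec.io/v1alpha1
kind: TrafficSplit
metadata:
  name: my-app
spec:
  service: my-app              # root service that clients address
  backends:
  - service: my-app-primary    # current version keeps 90% of traffic
    weight: 90
  - service: my-app-canary     # candidate version receives 10%
    weight: 10
```

As the canary passes its checks, Flagger rewrites these weights, and the adapter translates each update into the mesh's own routing configuration.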
Second, traffic metrics. In this demo, Linkerd is serving up an application workload, tracking requests per second and latencies across a few percentiles. This is also running something called the SMI metrics server, which presents a standard API for collecting traffic metrics from Linkerd or other service meshes.
So what you can see here is we've registered a new aggregated API server that's providing these metric endpoints. If we query for the different metrics, you can see there are five metrics that we're now collecting. And if we actually go ahead and fetch those metrics, we'll go through the Kubernetes API, directly through the aggregated API to Linkerd, and pull out response latencies, success and failure counts, that sort of thing.
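An object served by that aggregated API might look roughly like the following, an illustrative sketch of the metrics.smi-spec.io draft; the workload name is hypothetical and the exact metric names and values may differ. The five entries would correspond to the five metrics just mentioned: three latency percentiles plus success and failure counts.

```yaml
apiVersion: metrics.smi-spec.io/v1alpha1
kind: TrafficMetrics
resource:
  kind: Deployment
  namespace: default
  name: my-backend
window: 30s                  # metrics aggregated over the last 30 seconds
metrics:
- name: p50_response_latency
  value: 2ms
- name: p90_response_latency
  value: 5ms
- name: p99_response_latency
  value: 10ms
- name: success_count
  value: "100"
- name: failure_count
  value: "0"
```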
Now we can use that in the ecosystem with tooling like Kubecost, which is a tool for cost allocation and management, and which has also been updated to use SMI for traffic metrics. Here, Kubecost is running a meter for cost per 1,000 requests. So SMI metrics allows tools like Kubecost to automatically gain these multi-mesh capabilities through interoperability. Lastly, traffic policy. This is an example of a two-service application with a front end and a back end, powered by Consul Connect. To begin, the front-end service can't contact the back-end service.
You can see that error here. Now, what we're going to do first is deploy the SMI adapter for Consul. In this case, it's going to read the SMI APIs and dynamically configure Consul Connect on the back end. What we do next is create something called a traffic policy resource, and this assigns a source and a destination, all based on identities, that ultimately provide TLS and a sort of policy-based routing inside the cluster.
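Such a traffic policy resource might be sketched as follows, based on the access.smi-spec.io v1alpha1 draft; the service account names are hypothetical and the exact fields may differ:

```yaml
apiVersion: access.smi-spec.io/v1alpha1
kind: TrafficTarget
metadata:
  name: frontend-to-backend
destination:
  kind: ServiceAccount
  name: backend          # identity of the back-end service
  namespace: default
sources:
- kind: ServiceAccount
  name: frontend         # only the front end may connect
  namespace: default
```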
Now, as soon as this API object is submitted, the SMI adapter for Consul will notice the update, configure the intention inside of Consul, and the services will link up automatically, secured with a TLS tunnel. Now, the inverse of this is also true. If you go in and delete the API object, it's actually going to pull down the intention inside of Consul, tear down the tunnel, tear down the policy, and the connection to the back-end service will go away.
So why are we so excited about SMI? Well, first, customers want service mesh technology. That much is clear, but they're confused about how to get it. SMI allows folks to get started quickly. Next, we believe that simpler is better. There are lots and lots of features out there in the service mesh space, but customers don't always need all of them. SMI provides a nice, clean subset of those features, and that's something I think is very attractive and hopefully going to really jumpstart adoption in the service mesh space.
Lastly, SMI is ecosystem friendly, and users want to support innovation in open source. SMI provides a level of interoperability that we at Microsoft believe is critical to advancing the state of the art in the industry. All right, want to learn more about SMI? Check out this link. Otherwise, thanks everyone, and have a great KubeCon.