A: Good morning, everybody, and welcome again to another OpenShift Commons briefing. This time we're going to be talking about Istio, a wonderful new open source project that's come out of IBM and is helping us get the service mesh squared away for OpenShift and Kubernetes. We have our guest back with us, who's going to talk about it and give us an overview and a bit of a demo.
B: So one of the fundamental requirements, if you look at microservices, is having a platform to provision infrastructure very fast. This is critical for any microservices adoption, and OpenShift basically provides you with a very good platform for self-service of infrastructure, and also a resilient platform for your microservices. This is my analogy for OpenShift and the underlying Kubernetes platform: I think of Kubernetes as a kernel for your entire data center, so you no longer worry about individual infrastructure pieces.

B: This is another analogy I have: what exactly is OpenShift adding on top of Kubernetes? Basically, all the missing developer features, plus a lot of the tools for making it really enterprise ready. With that, let's take a look at the different options that OpenShift provides on top of Kubernetes to really bring it to enterprises. With OpenShift, you can containerize your application from literally anywhere, I would say.

B: The usual way you do it is with a Docker image, but OpenShift also supports various other methods, maybe using a Dockerfile, or converting directly from source code to a Docker image. This is pretty useful if you want to bring container adoption to an entire organization, because not everyone wants to write Dockerfiles or work with Docker images; some people just have code. So this will really help all kinds of developers adopt containers in your organization. You can also start from WAR files or JAR files.
B: This is also very common in organizations, so OpenShift literally takes Kubernetes and makes it enterprise ready for your microservices adoption. Next comes the day-2 operations of microservices. OpenShift provides a resilient infrastructure, a self-service portal, and all of that is good, but that is still about deploying your applications as microservices.

B: What about the day-2 operations of microservices? Take distributed tracing: your microservice application needs some kind of tracing, because if you don't have any visibility inside your application, then a lot of things can go wrong. This is based on my personal experience working on a microservices platform for a mobile backend as a service.

B: If you don't have visibility into what's going on inside your microservices, then you are basically introducing a lot of complexity into your stack, so you need some kind of tracing. Also, when services come up and go away, they need to be discovered, and they need to scale across your infrastructure.

B: So a lot of service discovery needs to happen. And with a microservice architecture, we have introduced the network into the picture, and the network can easily fail, so you need to retry calls to services when they fail — you need some kind of retry mechanism as well. Similarly, you need things like load balancing and securing the connections between microservices. All of these are features that you require for the day-2 operations of microservices.
B: We are currently doing this with libraries. If you look at the Netflix OSS stack, for each of these purposes we are basically using some kind of library. The problem with this approach is that you might be using a different technology, let's say Node.js, and you don't have much support for that particular technology.

B: You have to create all these tools on your own, and most of these things are operations specific, which developers may not know how to do well. The application itself also gets bloated with a lot of unnecessary, non-business-related logic. So what if we could get all of these features without having to write any code? That would be awesome, right? That's what Istio is all about.
B
History
is
basically
a
open
platform
to
connect,
manage
and
secure
micro
services,
but
without
having
to
right
those
huge
libraries
into
your
application
itself,
and
how
does
it
work?
So,
let's
look
at
how
it
works
without
East
here
and
then
we
can
look
at
how
it
works
with
up
with
this
do
so
without
sto.
If
a
service
a
wants
to
call
service
B,
you
need
to
use
some
some
kind
of
library
so
that
you
can
get
that
distributed
tracing
and
then
load
balancing
all
these
features.
B
You
have
to
use
that
library
inside
your
application,
whereas
in
sto
your
calling
technology
or
whatever
logic
that
you
have
that
is
inside
the
application,
will
go
into
a
separate
layer
that
is
just
above
the
application
and
then
in
between
your
infrastructure
and
your
application.
So
this
lab
this
library
that
you
see
is
platform
agnostic
or
language
agnostic,
so
it
can
work
with
Java,
it
can
work
with
node.js
or
it
can
work
with
any
any
kind
of
technology.
B: In Istio's case, this layer is the Envoy proxy, which was developed at Lyft. In other cases there are different proxies which do the same job, but with Istio it is done by Envoy. Envoy is a C++ proxy built especially for these microservices; I would say that, by far, it has the most microservice-friendly features as of now.
B: So what I want to show now is basically how to deploy Istio on OpenShift and then look at a sample application — the Bookinfo application that comes along with the Istio docs. I want to show how we can deploy Istio on OpenShift, and then how you get all these microservice operations features using simple YAML files. In our case, the sample application is the Bookinfo application. We have a bunch of services here, so you need to focus on this.

B: Only then will you understand the next steps, so try to focus on this. We have a product page microservice, which basically acts as the UI, and that product page calls a bunch of different microservices. It calls the reviews microservice and it calls the details microservice, each written in a different language stack, and reviews in turn calls the ratings service. The other thing you need to keep in mind is that the reviews microservice has three versions: one with no stars.
B: With that, let's see how this works when we deploy with Istio. When we deploy with Istio, it becomes something like this: you have an ingress, which is Envoy, and you have the product page microservice. Alongside that product page microservice you have an Envoy sidecar container sitting in the same pod, and it's a similar story with all of the microservices. All the communication between these services happens only through that Envoy proxy.

B: The Envoy proxy also sends out all kinds of metrics: how many calls succeeded, how much time they took, distributed traces. How can I control the request flow? All of this can be done by applying simple configurations at this sidecar proxy, and applying the rules is taken care of by the Istio control plane. You don't need to go to each microservice and apply them; the Istio control plane goes and applies these rules into the corresponding sidecar containers.
B: Okay, so I'm using a prepared script, but this is all happening live. The OpenShift cluster is already set up; I'm going to set up Istio now. For setting up Istio, we need to look into some of the security policies. This is straightforward.

B: It's almost as straightforward on OpenShift, too. Because OpenShift is somewhat more restricted compared to Kubernetes in terms of security, we need to add a bunch of SCC policies so that Istio can work seamlessly, but this is all for the sake of security. Okay, so I'm moving to the Istio folder; I've already cloned it.
B
What
I'm
going
to
do
is
install
steal
with
a
single
step
that
is
applying
this
story
ml
file,
and
this
will
create
bunch
of
our
bar
and
then
also
deployments,
and
then
I
want
to
install
few
add-ons,
which
basically
permit
years
graph
Anna
and
there's
a
small
utility
called
service
graph.
These
are
the
these
are
the
add-ons
that
cam
comes
along
with
all-steel.
It's
mainly
for
collecting,
metrics
and
then
showing
it
on
the
UI.
Okay,
these
are
optional.
B
If
it's
up
to
you,
if
you
wanna
choose,
have
it
or
not
now
the
eesti
of
control
plane
should
be
provision.
If
we
go
back
to
our
cluster
I'm
gonna
make
my
user
expects
of
the
administrator.
So
I
can
see
everything
now
it
will
create
a
namespace
called
sto
system
where
everything
will
be
deployed,
so
you
can
see
sto
certificate
authority,
agora,
sin
grace
and
then
mixer
and
pilot.
These
are
basically
the
sto
control
plane
services.
We
don't
need
to
worry
about
what
what
these
individual
pieces
are
but
understand.
B
These
two
are
critical
ones,
released
your
pilot
and
mix
well,
which
basically
takes
care
of
accepting
over,
like
the
configuration
and
then
applying
into
the
individual
micro
services,
and
then
the
other
add-ons
tools
that
we
provisioned
our
graph,
Anna
and
then
Prometheus,
and
there
is
a
small
Italy
called
service
graph
I
will
see,
will
dig
into
each
of
these.
Don't
worry
about
it
now.
B: Now we are going to deploy the Bookinfo sample application. For this, we are using something called istioctl kube-inject, which basically embeds the Istio Envoy proxy into the deployment configuration of our application — that's all — and then we apply it as a simple YAML file. This creates a bunch of services — details, ratings, reviews, and product page — just like we saw in the initial diagram.

B: If you look at this closely, there are two containers running in each of these deployments. For each microservice that you see, there are two containers running: one is the actual microservice application, and the other is the proxy running alongside the application container. These are all in the same pod.
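To make the sidecar pattern concrete, here is a minimal sketch of what a pod looks like after injection. The image names, arguments, and port are assumptions for illustration; the real output of `istioctl kube-inject` varies by Istio version.

```yaml
# Hypothetical, trimmed pod spec after sidecar injection.
# Image names, args, and ports are illustrative only.
apiVersion: v1
kind: Pod
metadata:
  name: productpage
  labels:
    app: productpage
spec:
  containers:
  - name: productpage            # the actual application container
    image: istio/examples-bookinfo-productpage-v1
    ports:
    - containerPort: 9080
  - name: istio-proxy            # the injected Envoy sidecar
    image: istio/proxy
    args: ["proxy", "sidecar"]   # runs Envoy, intercepting the pod's traffic
```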
B: All the communication from these microservices is controlled, or sent, only through this proxy. That's how you are getting all the good information you need. All right, now that we have deployed the microservice application, let's go and expose our endpoints. We can use either the OpenShift router for exposing the services, or you can use an Ingress; it doesn't really matter. I'm using the OpenShift router in this case. Now, there is a small utility provided by Istio to show how your calls are being made.
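Exposing the product page through the OpenShift router comes down to a single `Route` object. A minimal sketch — the service name and port follow the standard Bookinfo sample, but treat them as assumptions:

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: productpage
spec:
  to:
    kind: Service
    name: productpage   # Bookinfo's UI service
  port:
    targetPort: 9080    # the port the product page listens on
```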
B
You
can
see
that
this
is
empty
now,
because
we
haven't
made
any
calls
to
our
application.
Yet
once
we
make
a
make
calls
to
our
application,
it
automatically
generates
all
the
mapping
for
you.
We
will
see
once
again
after
hitting
some
traffic
and
then
you'll
understand
more
what
what
this
is
all
about
and
then
also
I'm,
exposing
a
graph
Anna,
which
is
a
dashboard
for
ceiling
metrics
on
a
different
end
point
I
can
see.
B
There
is
nothing
here
as
of
now,
because
we
haven't
accessed
any
traffic
yet
so
keep
watching,
and
then
we
are
gonna
hit
the
product
page.
Now
give
it
a
couple
of
seconds.
Sometimes
it
takes
a
couple
of
seconds
to
come
up,
so
this
takes
couple
of
seconds
to
come
up
once
that
is
ready.
We
should
be
able
to
see
the
application
here,
I'm
going
to
check
if
there
any
questions.
Meanwhile,.
B: "Is there a good way to convert existing OpenShift apps to be Istio-ized?" One of the ways is to use the admission controller that is coming up in an upcoming release, which will automatically convert your deployment configurations into Istio-specific configurations. Or — I have created a special controller which watches the OpenShift API and converts the deployments automatically into Istio-specific deployments. Both are possible. In this case, we have used the command-line tool, kube-inject, to convert those things.
B: A couple more seconds should do it. Ah — I hadn't created the route; that's why. I just created the route, and now we should be able to access it. You can see the Bookinfo page is up. There are two users here: one is a normal user, which is for everybody, and then I can also log in with a specific user ID. Now, if we take a close look — if I do refreshes on this page — you can see my reviews microservice is getting called each time.

B: Each time I call it, it calls a different version: sometimes black stars, sometimes no stars, and sometimes red stars. If we go back to our initial Bookinfo diagram — please pay close attention to this — this is reviews version 1, version 2, and version 3, and reviews is further calling the ratings service. As of now it is like a round-robin policy: it basically chooses in round-robin fashion which reviews microservice to call. So with Istio, how do we control this traffic?
B
Like,
let's
say,
I
want
to
only
send
it
to
reviews
version
1
only
which
doesn't
have
any
stars
or
I
want
to
send
it
to
only
a
specific
user.
Let's
say
a
user
called
Jason
should
have
access
to
review
service,
which
is
having
red
stars,
but
for
everyone
else,
I
want
to
show
just
the
normal
one
like
the
no
stars
or
the
black
stars,
one
ok,
so
all
these
things
that
you
require
for
the
day
to
operations
of
micro
services
can
be
just
done
by
like
simple
Vimal
files
will
see
some
of
those
things
now.
B: Now that we have hit this endpoint a few times, let's go and take a look at the service graph. You can see our service graph got generated automatically: we have hit the product page, and the product page has called the reviews microservice — sometimes version 2, sometimes version 3, and sometimes version 1 — and it also called details, and reviews further called ratings. This whole service graph that you see was generated automatically; you haven't done any coding at all. So this is a great value-add.

B: For all the developers, they don't need to worry about all this stuff; it is generated for you for free. And if you look at the metrics now, you can see data coming into the Istio Grafana dashboard: the global request volume and the success rate. If there are any failures, those failures will be listed here — any 4xx errors, all these things are listed — and you can also drill down by service: how is the details service performing, how is the product page performing?
B
How
is
it
performing?
What
is
the
latency?
You
can
see
a
lot
of
wealth
of
information
on
your
micro
services
without
having
to
write
any
single
line
of
code,
so
that
is
a
very
great
value
at
that
is
tio
brains
on
top
of
openshift,
to
make
it
really
easy
for
the
day
to
operations
of
micro
services,
we'll
see
more,
let's
move
on
with
the
request
outing
part
where
I
want
to
control
how
the
request
gets
flown
between
your
product
page
to
the
other
micro
services.
B
So
as
of
now,
there
is
no
control
on
this
is
basically
doing
a
round-robin
on
to
the
reveal
service.
Let's
go
and
control
that
now
I'm
going
to
clear
existing
rules,
I
don't
have
any
as
of
now.
So
what
I'm
going
to
do
is
create
a
rule
such
that
we
always
go
to
the
version
1
of
everything.
Ok,
so
you
always
go
to
the
version.
1
of
reviews
was
in
one
of
product
page.
B
This
is,
this
is
how
the
rule
looks
like
it
is
pretty
simple
rule,
so
you
can
see
if
the
destination
is
reviews
with
the
precedence
of
one
always
go
to
version
1
of
the
reduce
micro
service,
as
implies
that
when
you
have
this
yml
file
or
JSON
file,
you
can
see
if
I
do
refresh,
it
always
goes
to
version
1.
It
will
not
go
to
version
2
or
version
3.
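The rule being applied here is part of the Bookinfo samples. A sketch of its shape, using the `config.istio.io/v1alpha2` `RouteRule` resource from Istio's early releases — field names changed in later Istio versions, so treat this as illustrative:

```yaml
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: reviews-default
spec:
  destination:
    name: reviews    # the target service of this rule
  precedence: 1      # higher precedence wins when rules overlap
  route:
  - labels:
      version: v1    # send all traffic to pods labeled version=v1
```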
B
This
is
because
we
control
it
from
the
the
XTO
control
plane
without
having
to
go
to
any
of
these
services
and
change
the
code
or
change
individual
configurations.
We
just
control
all
these
things
from
a
sto
control
plane
just
by
applying
this
simple
rule.
Okay,
so
so
that's
the
first
step,
so
you
can
only
see
the
version
1
of
the
application,
but
what
I
want
to
do
is
only
for
a
user
called
Jason
I
want
to
route
to
version
2,
which
is
basically
the
Black
Stars
right.
B: This is important when you want to introduce newer versions of your application and you don't want to break everything, even though it is in production. You want to do some testing before you actually make it live. Or maybe by region: let's say you want to introduce a service on the East Coast, maybe in New York, test it there for some time, and then make it available for everyone once you are comfortable with it.

B: All these canary deployments — specifically header-based canary deployments — can be done pretty easily with Istio, without having to write a single line of code in your application. So how is it done? If you look at the rule, the reviews-test-v2 rule, this is what it looks like: if the destination is the reviews microservice and the header contains a cookie with user=jason, then always route to version 2 of the reviews application.
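A sketch of what such a header-based rule looks like in the same early `RouteRule` format — the cookie regex follows the Bookinfo sample, and the details should be taken as illustrative:

```yaml
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: reviews-test-v2
spec:
  destination:
    name: reviews
  precedence: 2     # evaluated before the default v1 rule
  match:
    request:
      headers:
        cookie:
          regex: "^(.*?;)?(user=jason)(;.*)?$"   # only the user "jason"
  route:
  - labels:
      version: v2   # the black-stars version
```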
B: Now, for all other users who are not Jason, the application traffic stays the same, but if I log in as the user Jason, then I should see version 2. You can see I only get version 2 when I'm logged in as Jason. This can be user based or location based — there are a lot of customizations you can write. It's pretty powerful: you didn't write a single line of code; we just have that YAML specification.

B: What I want to do next is, for all other users, split the traffic between version 1 and version 3, 50/50 percent. What does that look like? I just need to list the labels version 1 and version 3, each with a 50 percent weight, and that makes my canary deployment based on percentage of traffic. For the Jason user it will be the same as before, but if I sign out, for all the other users you can see the traffic being split.
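A sketch of the weighted-routing variant, again in the early `RouteRule` shape (illustrative):

```yaml
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: reviews-v1-v3
spec:
  destination:
    name: reviews
  precedence: 1
  route:
  - labels:
      version: v1
    weight: 50     # half the traffic to v1 (no stars)
  - labels:
      version: v3
    weight: 50     # half to v3 (red stars)
```

Shifting all traffic to one version is the same rule with a single route entry carrying `weight: 100`.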
B: Next, say we are okay with the version 3 of our application, and we want to route everything to version 3. The rule, again, is simple: if the destination is reviews, just send 100% of the traffic to version 3 of the application. That's it. Now if I do a refresh, it will all be version 3. So you can slowly introduce newer versions of microservices into the existing production stack without breaking anything.
B
This
is
pretty
pretty
useful
for
data
operations
and
then-
and
sometimes
you
wanna,
even
you're,
developing
microservices.
It
is
also
important
for
testing
the
microservices
before
they
actually
go
live
you
wanna.
You
wanna
see
the
impact
of
1
micro
server.
We
saw
on
another
micro
services.
How
do
how
do
you
do
that?
This
is
important
like
when
you
want
to
do
some.
Let's
say
dependency
testing?
B
How
is
the
micro
services
dependent
on
each
other?
Ok,
so
for
that?
What
I
want
to
do
is
to
do
a
simple
testing
on
introduce
a
fault
on
one
of
micro-services
and
then
see
how
the
other
microservices
are
behaving.
So
in
this
case,
I
just
get
written
off
all
the
existing
rules
and
then
I'm
just
applying
a
rule
to
go
to
version
1
of
our
application.
B
This
is
what
it
says,
and
then
what
I
want
to
do
is,
but
a
specific
user
I
want
I,
don't
want
to
do
this
for
every
user
right
I
just
want
to
introduce
this
slowly
and
then
maybe
for
a
specific
user.
I
want
to
do
this
fault
injection
testing
so
for
a
user
called
Jason
I
want
to
role
pout,
always
to
version
2
of
the
reviews
side
service.
So
that
is
nothing
but
version.
2
is
it's
black
microcircuits,
okay
and
then,
let's
take
a
look.
B
How
it
looks
like
this
simple:
we
have
already
seen
it
if
the
destination
is
reviews-
and
the
header
value
contains
this
cookie,
always
out
to
version
2,
which
is
3
easy
to
understand.
If
I
log
in
as
Jason,
then
it
should
always
see
black
starts
my
reviews.
Travis,
we
haven't
injected
any
fault
here.
B
We
are
going
to
do
it
now,
so
what
I'm
going
to
do
is
if
the
user
is
Jason,
I
want
to
introduce
a
seven
second
delay
on
the
ratings
micro
service,
so
which
is
here
so
my
user
Jason
is
always
going
to
version
two.
But
this
this
reviews
service
is
calling
ratings,
but
I
won't
introduce
a
delay
at
this
rating
service
and
then
see
how
the
application
is
behaving.
Then,
when
there
is
a
delay
on
the
rating
service,
okay,
so
I've
introduced
a
seven
second
delay
on
this.
B
The
delay
injection
again
is
a
simple
B
Ammal
specification.
You
can
see
that
all
only
two
they'll
say
is
I
will
put
this
section,
which
is
x2
tip
default,
put
a
fixed
delay
of
seven
seconds
for
all
the
hundred
percent
of
request.
If
the
header
contains
a
cookie
with
valued
user
equal
to
chase
and
if
the
destination
is
ratings,
okay,
pretty
simple
logic,
and
then
once
you
apply
that
I
go
and
refresh
this
you
can
see
there
is
a
delay.
It
is
still
loading,
so
it
is
because
we
have
this.
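A sketch of that fault-injection rule, modeled on the Bookinfo `ratings-test-delay` sample from Istio's early releases (field names illustrative):

```yaml
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: ratings-test-delay
spec:
  destination:
    name: ratings
  precedence: 2
  match:
    request:
      headers:
        cookie:
          regex: "^(.*?;)?(user=jason)(;.*)?$"   # inject only for "jason"
  route:
  - labels:
      version: v1
  httpFault:
    delay:
      percent: 100      # apply the delay to every matching request
      fixedDelay: 7s    # hold each request for seven seconds
```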
B
We
have
injected
the
delay
into
the
ratings
maker
service
and
you
can
see
when
there
is
a
delay
in
the
rating
service.
Our
application
is
broken.
Okay.
So
from
this,
what
you
understood
is
without
having
to
impact
all
the
users.
You
can
literally
test
in
the
production
in
lung
and
like
how
the
application
is
behaving
when
one
of
the
microservice
is
behaving
improperly,
so
you
can
take
an
action
like
okay,
okay,
I
want
to
do
some
retry
or
maybe
scale
my
ratings
microservices
all
right.
B
So
all
these
things
you
can
identify
before
you
actually
fail
in
production.
This
is
pretty
important
in
micro-services
our
option,
because
you
want
to
do
a
lot
of
testing
before
you
before
you
introduce
your
newer
versions.
Okay,
so
we
are
able
to
identify
issues
before
they
occur
now.
Let's
say
I
wanted
to
do
some
time
out,
because
we
can
also,
let's
say
we
will
try
something
if
that
come
doesn't
come
up
in
some
time
we
just
gonna,
give
up
and
then
show
a
user-friendly
message.
B
Okay,
that
is
important
if
you
wanna
not
blow
to
your
microservices
infrastructure,
with
a
too
many
requests,
calls
okay,
we
will
look
into
two
things:
one
is
timeouts
and
the
other
another
piece
is
rate.
Limiting
I
will
look
both
of
them.
Okay,
so
clearing
this
out.
What
I
want
to
do
is
I
will
again
go
back
with
the
original
step,
where
I
will
forward
everything
to
version
1
and
then
I
want
to
introduce
a
delay
on
one
of
the
micro
services
and
then
provide
a
timer.
B
So
you
can
see
that
in
life,
okay,
so
what
this
has
done
is
what
this
rule
has
done
is
basically,
if
the
destination
is
reduce,
micro-service
always
go
to
version
two.
Okay,
let's
let
me
have
over
it.
In
that
rule,
the
first
rule,
this
nation's
reviews
always
go
to
version
two
of
the
reviews
and
I
wanna
add
five.
Second
delay
on
the
rating
service,
which
is
the
this
service
I,
want
to
add
a
five
second
delay
here.
Okay,
so
the
replaced
part
is
like
this
now.
B
Product
page
will
call
reviews
and
the
reviews
will
call
ratings,
but
there
is
a
delay
of
five
seconds
here,
but
what
I
want
to
do
is
because
this
is
giving
a
five
second
delay.
I
will
try,
maybe
for
two
seconds
or
three
seconds
and
then
I
give
up
okay,
the
way
how
how
would
I
achieve
this
is
basically
just
these
three
lines.
That's
it
so
have
a
HTTP
request,
timeout,
which
is
for
two
seconds.
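A sketch of the timeout rule, modeled on the Bookinfo reviews-timeout sample from that era — the `httpReqTimeout` block is the "three lines" being referred to, and the field names come from the early `RouteRule` API, so treat them as illustrative:

```yaml
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: reviews-timeout
spec:
  destination:
    name: reviews
  route:
  - labels:
      version: v2
  httpReqTimeout:
    simpleTimeout:
      timeout: 2s    # give up on calls to reviews after two seconds
```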
B
So
even
if
your
book
fails
for
after
five
seconds,
if
the
request
will
give
up
on
after
two
seconds,
okay,
it's
pretty
easy
to
understand
as
well.
So
let's
take
a
look,
so
if
I
you
can
see
now,
the
application
itself
will
will
timeout
in
less
than
five
seconds.
Okay,
you
can
see
that
happening
now.
So
one
two
three:
it
happened
in
three
seconds,
not
five
seconds.
B: Okay, I still see — yes, all right, let's go ahead; I will take questions at the end. So we have seen applying timeouts in the service mesh. Now let's look at some rate limiting. Rate limits are very useful if you want to, say, mitigate a DDoS attack, or avoid overloading your systems with too many requests. That is very important if you want to keep all the microservices healthy. And maybe some services are premium services.

B: You want to limit how many calls you make to that service. All this can be done using rate limiting, and applying rate limits on microservices is pretty simple. In this case, we want to apply a rate limit on the ratings microservice. Let's say this is our premium service — ratings is maybe some external, paid service — and you want to limit how many requests you make to it per second.
B
So
the
way
you
do
it
is
you
say
if
the
destination
is
ratings
and
if
the
request
is
coming
from
reviews
version
to
service
only
okay,
you
can
control
what
is
the
source.
Also,
this
is
optional,
but
if
you
want
to
control
only
this
flow,
like
only
the
if
the
request
comes
from
version
2,
then
I
will
do
a
rate
limiting
for
version
1
and
version
3
I
don't
want
to
do
any
rate
limit.
You
can
do
that
as
well.
Okay,
it's
very
in
detail,
so
very
easy
to
understand
as
well.
B
So
the
validity
duration
is
one
second
and
then
maximum
I
want
to
allow
only
one
request
per
second
okay.
So
what
this
means
is,
if
you
make
more
than
one
call
per
second
to
this
rating
service
from
the
reviews
micro-services,
then
it
is
gonna,
absolutely
stop
it.
Okay,
so
let's
take
a
look,
how
we
are
doing
it,
so
what
we
are
doing
now
is
basically
this
is
a
quota
handler.
We
can
ignore
that.
That's
not
required
here.
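In Istio's early releases, rate limits were expressed as Mixer configuration: a `memquota` handler holding the limits, plus a quota instance and a rule binding them (omitted here). A sketch of the handler carrying the one-request-per-second override described above — treat the resource kind, dimension names, and defaults as illustrative of that era's API:

```yaml
apiVersion: config.istio.io/v1alpha2
kind: memquota
metadata:
  name: handler
spec:
  quotas:
  - name: requestcount.quota.default
    maxAmount: 5000          # generous default for all other traffic
    validDuration: 1s
    overrides:
    - dimensions:
        destination: ratings # limit only calls into ratings...
        source: reviews      # ...that originate from reviews
      maxAmount: 1           # at most one request
      validDuration: 1s      # per second
```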
B
So,
let's
login
to
our
version,
2
of
our
reviews,
match
service
and
then
make
a
call
to
rating
snack
service.
Okay,
we
will
try
to
keep
me
calling
the
rating
services
from
this
manager
services
so
I
have
a
log.
I
will
log
into
now
the
version
2
of
use
micro
services
and
then
make
a
call
very
fast.
Ok,
now
you
can
see
every
I'm
making
a
call
every
five
seconds
like
every
half
second
I'm
making
a
call,
and
then
each
of
my
subsequent
call
is
getting
a
message
called
too
many
requests.
B
That
is
with
error
call.
Yet
that
means
I
have
rate
limited
this
request
to
only
one
request
for
a
second.
If
you
make
more
than
that,
then
you
are
rate
alone.
Okay,
so
you
can
see
that
it's
pretty
simple,
using
a
set
very
simple,
Vimal
file,
I'm
able
to
rate
limit
this
micro-service
extra.
Okay,
all
right,
so
this
is
all
I
have
on
the
on
the
implementation
said
a
lot
more
I'm
gonna
try
to
keep
the
docs
up
to
dated
on
this
example.
A: We always have time, but how about if we just pause for a minute? Kareem has been doing a wonderful job answering questions in the chat, and maybe he wants to unmute himself and add anything here. But Jonathan has one question around performance testing: wondering if you've done any performance testing with Istio installed, and what the overhead of using it is.
B
So
skewed
selfies,
like
in
very
alpha
stage,
I
mean
it's
not
not
yet
ready
with
this,
so
these
metrics
needs
to
yet
come
out.
I
haven't
done
any
performance
testing,
but
this
envoy
is
a
production
ready,
foxy,
which
is
basically
used
in
I.
Think
a
very
large
orphanage
is
like
live
and
then
bunch
of
other
organizations,
so
envoy
itself
is
very
much
production
ready.
The
East
EO
control
plane
along
with
the
Envoy,
is
something
we
need
to
test.
Yet.
Okay,
it's
not
yet
like
the
metrics
are
not
out
yet.
A: As I mentioned in the chat, Kareem has written a very nice blog post as well on evaluating Istio; that's on the OpenShift blog as of yesterday, I believe. So I think that's another reference point for you. And you had also said that you were doing some work on the documentation for Istio around OpenShift, if I remember correctly — is that hanging around somewhere?
B: If you have any issues running Istio on OpenShift, like 3.7, you can contact me and I will get you more details. What I've observed is that it needs some privileged pods, and some things are disabled as of now. We have raised issues on that, and there is work going on, but assume that these pieces are not yet production grade here. Things are still getting sorted out.