From YouTube: k8sOM#17: Advanced Kubernetes: Lesson Learned From Building a Managed Service - Hüseyin BABAL

Description

Abstract:
In this session, I will cover how to create a multi-tenant environment on Kubernetes to build a managed service. There will be real-life examples of containerization, monitoring, tracing, and microservices.
I will provide golden rules for building a managed service on top of Kubernetes, with real-life examples from the experience I gained during Hazelcast Cloud development. Environment isolation, microservice architecture, monitoring, logging, and tracing will be the central topics of this session.
Yeah, I will just do that. Okay, so let's start. Sorry it took a little bit longer for everyone, but welcome, and thanks for joining. I hope the recording works now; it looks like it does, because I could see it at the top. OK, I will hand over to you; just let me know, and you can also switch the slides yourself, right?
Currently I am working at Hazelcast, and I have collected lots of experience from previous companies about microservices and the DevOps transition. So you will see two sections within this session: one for Kubernetes and another for the microservices architecture.
At the beginning of this session you will see the Kubernetes-related topics: isolation, multi-tenancy, monitoring, moving to the public cloud, and other Kubernetes-specific stuff. Let's start with Kubernetes first, because we are using Kubernetes.
Kubernetes is the open-source platform for managing containerized workloads and services, right? And in order to use Kubernetes, there are lots of options. When you have a look at the cloud providers, you can use the managed version of Kubernetes in Google Cloud, AWS, or Azure. But still, if you are in your own data center, you can use lots of technologies. For example, you can use Kubespray in order to deploy Kubernetes in your data center.
Basically, Kubespray uses Ansible in the background: you just define your host information and then let Kubespray deploy the Kubernetes cluster for you. That's very simple. Or there are native tools like kubeadm, and you can use that in order to install Kubernetes and join your worker nodes. After this point, when you use Kubernetes, you will mostly focus on business logic, as I said, and all the infra-level things will be handled by Kubernetes for you.
Multi-tenancy is a very important topic in Kubernetes, because when you install it, most probably you will need different kinds of environments according to your needs. Let's say those will be the dev, staging, and prod environments. Let's have a look at this isolation in two categories. The first one: you can set up a different Kubernetes cluster for each environment. For example, you can deploy cluster one for dev, cluster two for staging, and cluster three for the prod environment.
The second way is installing Kubernetes as one giant cluster, and you separate your environments by using namespaces. But here the tricky part is that, when you use one cluster with different namespaces, you need to do strict isolation by using different kinds of technologies. For example, you can use network policies to prevent access between the namespaces, so that dev cannot access staging, and staging cannot access the prod environment. The layout you see here is just an example.
You can put Prometheus, Grafana, and so on inside the monitoring namespace, and here, if you want to change something, for example the config of Prometheus, you can just switch to the monitoring namespace and then do whatever you want. And let's say that you just wanted to do something with one of your REST services: you can switch to the microservice namespace and do a couple of operations on the product service. Now, instead, let's say that you put all of this inside the default namespace.
You can get into a situation where you change something in Prometheus instead of the product service, because they all start with "pr" and you can confuse them. So, to be on the safe side, you can limit your context by using namespaces again. For example, if you have cron jobs and you are using Kubernetes CronJobs, you can use a separate cron-job namespace in order to put all the cron-job-related stuff inside that namespace.
So we talked about the different clusters and namespaces, but switching between these namespaces and contexts can be a bit tricky with the kubectl tool alone. So instead you can use the kubectx tool developed by Ahmet. Basically, there are two binaries inside this project: kubectx is for switching your context, and kubens is for switching the namespace. As you can see in the example, you can switch your context with a simple command, and then you can see the pods within that context, within that cluster.
So we have the Kubernetes cluster, right? You can have only one with isolated namespaces, or you can have different clusters for different purposes. But let's say that you are inside one environment. For example, we are providing Hazelcast as a service to customers; that means we are serving the same group of instances to multiple customers.
Of course, we have not only one Kubernetes cluster; we have different Kubernetes clusters in different regions. But the main point here is that even inside one Kubernetes cluster, you need to share the same resources between your customers. A really good example of this is using network policies. When you have a look here, you will see we defined this namespace; as you can see, the namespace is c123. Let's say that this is the namespace of customer 123.
Only the requests coming from the monitoring namespace can be routed into this namespace. Very cool, because we have a Hazelcast cluster within this namespace, and we have a metric-collection job inside the monitoring namespace, and that job can make requests to this namespace in order to get some metrics from different kinds of sources, like the Hazelcast Management Center or something like that. But other than that, no other namespaces, and no other pods' requests, can come into a customer namespace.
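The talk doesn't show the manifest itself, so here is a minimal sketch of such a rule; the namespace name c123 and the monitoring label are illustrative:

```yaml
# Deny all ingress into the customer namespace except traffic
# coming from pods in the "monitoring" namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-monitoring-only
  namespace: c123
spec:
  podSelector: {}              # applies to every pod in c123
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: monitoring # assumes the namespace is labeled name=monitoring
```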
If we are inside an AWS environment, maybe you already know the AWS metadata service, right? If you are inside EC2, you can call this IP in order to get metadata about your EC2 instances. We do not allow our customers to access this metadata service. You can do that by using this notation; egress is for the outgoing connection. Let's say that you are inside your namespace: you cannot make any kind of request to that endpoint.
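A hedged sketch of that egress restriction; 169.254.169.254 is the well-known EC2 metadata address, while the namespace name is again illustrative:

```yaml
# Allow all outgoing traffic except calls to the EC2 metadata service.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: block-ec2-metadata
  namespace: c123
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - 169.254.169.254/32   # EC2 instance metadata endpoint
```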
Within this context, there are two terms: the control plane and the data plane. Maybe you have heard these terms in different places, but basically in the control plane we have microservices, cron jobs, and monitoring tools, and this is somehow open to the public. For example, when you open the console application, your request will come here, which is public, and also your UI application makes some requests to the backend microservices; again, this part is open to the public. But the data plane has restricted access from the outside.
You have the Kubernetes cluster, and you need a very good monitoring architecture in order to keep a good focus on the business logic. When you search for monitoring on Kubernetes, most of the time you will see this diagram. For the monitoring part we are using Prometheus. Prometheus is capable of collecting metrics from lots of sources; for example, at the node level, you all know this one:
cAdvisor. Prometheus is capable of collecting metrics from cAdvisor. In order to install Prometheus in your Kubernetes cluster, most of the time you would need to do this manually, but you can use an operator, called the Prometheus Operator, inside your Kubernetes cluster. This operator knows how to install Prometheus and the related components, so you don't need to do anything about the installation itself.
Just add the repo to your system and install the Prometheus Operator, and that's all: it will install the Prometheus components and a couple of custom resources for you. Helm, here, is the package manager of Kubernetes; you can use it to install all kinds of packages. So we have Prometheus, and it is collecting the metrics from the default sources.
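As a hedged example of those custom resources, a ServiceMonitor tells the operator-managed Prometheus which services to scrape (the names, labels, and port name below are my assumptions):

```yaml
# Scrape every service labeled app=product-service on its metrics port.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: product-service
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: product-service
  namespaceSelector:
    matchNames:
      - microservice
  endpoints:
    - port: metrics        # assumes the Service exposes a port named "metrics"
      interval: 30s
```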
Now, what about the visualization? Normally you would need to prepare your Grafana dashboards yourself, but if you use the Prometheus Operator, it will install a couple of predefined dashboards for you that track your Kubernetes cluster components. Capacity planning is one of them: you can see the system load, memory usage, and disk-space usage. Here, as an example, we said that Prometheus is the metric-collection component, but it also has an Alertmanager. Since Prometheus is collecting metrics, you can create any kind of alert by using those metrics.
For example, you can create an alert config in order to notify yourself about components that are unreachable for more than five minutes. That is not an acceptable situation, and this is just an example: you will be notified if there is a component unreachable for more than five minutes. In the second example, you will see that if you have a request latency of more than one second, you will be notified. So let's say that you have a microservice pool, you have 100 microservices, and an average latency.
If the average is more than one second, you will be notified. But here you see only the alert config; notifying is a different part. As a notification destination you can set different targets: it can be email, it can be Slack, it can be OpsGenie in order to create incidents, etc. You can have a look at the example configurations.
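As a sketch of what such rules can look like in Prometheus's rule-file format (the metric names and thresholds are illustrative, not from the talk):

```yaml
groups:
  - name: example-alerts
    rules:
      # Fire when a scrape target has been unreachable for 5 minutes.
      - alert: ComponentUnreachable
        expr: up == 0
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "{{ $labels.instance }} unreachable for more than 5 minutes"
      # Fire when average request latency stays above 1 second.
      - alert: HighRequestLatency
        expr: rate(http_request_duration_seconds_sum[5m]) / rate(http_request_duration_seconds_count[5m]) > 1
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Average request latency is above 1s"
```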
Naturally, you may have different Kubernetes clusters, right? In our case, we have Kubernetes clusters in multiple regions. So what can you do in this situation? Do you want to install Prometheus on every cluster and manage them separately? Or, since I am asking this, of course there is another solution, which is Prometheus federation. In Prometheus federation, basically, you connect the different Prometheus instances to a central one. In the example you can see, I have three Kubernetes clusters.
Of course there is a Prometheus instance inside each of them, and the federation scraper is inside the central one, and this central Prometheus collects the metrics from those three sources. When you open the central Prometheus, you will see the metrics coming from the different Kubernetes clusters. This is very cool in our case, because we have only one dashboard: we have one central Prometheus, and we are able to see lots of metrics coming from the different Kubernetes clusters.
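A minimal sketch of the federation scrape job on the central Prometheus (the match rule and target addresses are illustrative):

```yaml
scrape_configs:
  - job_name: federate
    honor_labels: true
    metrics_path: /federate          # the federation endpoint every Prometheus exposes
    params:
      'match[]':
        - '{job!=""}'                # pull all jobs' series; narrow this in practice
    static_configs:
      - targets:                     # the per-region Prometheus instances (illustrative)
          - prometheus-eu.example.com:9090
          - prometheus-us.example.com:9090
          - prometheus-ap.example.com:9090
```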
Just ignore that; OK. So, until this point you have your microservices, but what about exposing these services to the outside? That is the important part. In this case, I will talk about the AWS environment. You have installed a Kubernetes cluster inside AWS; you can use the managed service, or you can use kops in order to install the Kubernetes cluster. But what about exposing these services to the outside? There is a term which is called nginx ingress: basically, you install the nginx ingress controller into your Kubernetes cluster.
Of course, you need to have a DNS entry, right? In the AWS case, we have Route 53; inside Route 53 you have a domain record with a CNAME bound to an already-created load balancer. So you have domain entries and you have only one load balancer, but what about the traffic? Let's say example.com/products or example.com/categories, for example: how does the traffic go to the desired destination service inside the Kubernetes cluster?
In order to do this, we need to add some ingress rules. In the example, this is the ingress entry for the product service. When you make a request to example.com with the /products context path, you will see the request will be propagated to the service which has the name product-service, and it will use the 8080 port. This is very simple. So under one domain, or multiple domains, you can define lots of context paths in order to propagate requests to the necessary services. This is a very simple deployment of nginx ingress.
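A hedged sketch of such an ingress rule; the host, path, and service name follow the example as described, but the exact manifest is my reconstruction:

```yaml
# Route example.com/products to the product-service on port 8080.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: product-service
spec:
  ingressClassName: nginx
  rules:
    - host: example.com
      http:
        paths:
          - path: /products
            pathType: Prefix
            backend:
              service:
                name: product-service
                port:
                  number: 8080
```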
Basically, when you have a look at the nginx ingress pod, you will see there is an nginx instance inside the container. Whenever you add an ingress rule, it repopulates the nginx config; basically, there is a reverse proxy inside the configuration, and that's all. As an overall picture, you will see that, via Route 53, the request comes to the load balancer, and by using the nginx ingress rules it will go to the product service, user service, etc.
As you already know, whenever you try to deploy something in a monolithic application, you need to deploy to production lots of times, right? For example, you change the product page title and you need to wait for several minutes; I have seen 45-minute deployments for only one instance. So when you compare the monolith to microservices, maybe microservices will be the best candidate: after switching to the microservice architecture, you divide your monolith into multiple services, and all of your problems are gone, right?
Actually, no. In the first case, and this is very specific to my deployment strategy, you divide into multiple services and you provide lots of memory to each of them, and you see you have a big bill at the end of the month. So you need to have some best practices in your microservice architecture in order to fit in with your Kubernetes cluster.
The microservice-architecture term alone cannot solve our architectural problems; you need to apply some best practices to have a good architecture. Now, the first item I need to mention is the Glory of REST. Maybe you have heard this term: there is Leonard Richardson's maturity model, and this is a very important one if you are building REST services. Basically, there are four levels here. Level 0 is something like SOAP services, so you have one endpoint.
You have a payload in XML; in SOAP you can say there is an envelope, and within this envelope you provide your action, right? One endpoint, one body, and you provide your action inside that body. In level 1 you have resources, so you have multiple endpoints, but this time you use the same HTTP method to do everything. For example, you always use the GET method to create
something, update something, or delete something. This is not a good level of REST architecture. But in level 2 you have the HTTP verbs. What does this mean? In order to create something you are using the POST method; in order to replace, you are using PUT; the PATCH method is for the partial update; and DELETE is for deleting something, right? So this is a very good level, but there is an even better one: the hypermedia controls in level 3. There you are the best. In this one, in the REST API responses, you will see some embedded resources.
Let's say that you get some detail from the user endpoint, /users/123: you get the details of user 123, and within that response you may add a couple of extra pieces of needed information. For example, you can put links for the comments, category articles, or other stuff. So you have a response and you have some links; or it can be a list response, and you can put the next-page link inside that response.
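A hedged sketch of such a level-3 response body, written as YAML for readability (a JSON body would carry the same fields; all names are illustrative):

```yaml
# GET /users/123 -- a response carrying embedded hypermedia links.
id: 123
name: "Jane Doe"
links:
  - rel: "comments"
    href: "/users/123/comments"
  - rel: "articles"
    href: "/users/123/articles"
  - rel: "next"
    href: "/users?page=2"
```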
When you do this, these links can be used by the web application, by your mobile application, or by another desktop application. So, since you provide the useful links along with your user information, you don't need to do any kind of URL-generation stuff inside your consumer application.
You have microservices, so you can think of these microservices as being like humans: in order to interact with each other, they need to have a good common language. In order to do this, you can use some kind of mechanism. You could make the HTTP requests from scratch for every operation, but according to me, this is not the best practice. Alternatively, you can generate some client code for every service, and there are other options. Let me provide you a couple of examples of this. You can search for the Feign client.
In Feign, basically, you define an interface; this is just a Java example. Within this interface, you provide your Feign client value: this "users" is basically the service name within your Kubernetes cluster. Inside the Kubernetes cluster, you don't use the IP or the domain notation; you use the service name in order to access other services, because there is an internal DNS server inside Kubernetes, which is one of the advantages of Kubernetes clusters.
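As a reminder of why the bare name works: a Service manifest like this hedged sketch registers the name "users" in the cluster DNS, so any pod in the same namespace can simply call http://users/... (labels and ports are illustrative):

```yaml
# Registers the in-cluster DNS name "users"; other pods reach it as
# http://users (same namespace) or users.microservice.svc.cluster.local.
apiVersion: v1
kind: Service
metadata:
  name: users
  namespace: microservice
spec:
  selector:
    app: users            # assumes the user pods carry the label app=users
  ports:
    - port: 80
      targetPort: 8080
```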
And inside this interface you see some method signatures; when you call this getUser method, it will actually call the users service, it will use the /users context path, and, as a request parameter, it will add the user ID as a path variable, and of course it will be a GET request. You can use this notation, and there are different kinds of implementations of Feign. But as a second option, you can use the Swagger documentation path. Basically, you enable Swagger documentation in your project. Let's say that you have a Java project: you just add a simple configuration and enable Swagger on top.
When you do this, you will have two URLs. One of them is for the documentation: when you open the HTML page, you will see a very good API documentation page, and you can make requests online within the documentation. You can search for SwaggerHub and you will see a good demo of this. And the second one, which is the important one, is the API spec. So this second one is important.
I am saying: just use my spec URL, generate the code base, for Spring in this case, and just extract it into this folder. This is very simple: you already have a documented API, and you just generate the client code base by providing three parameters. So what is the best practice, or the automation flow, for this?
By using the snapshot version, every consumer can use the new features directly, without doing any kind of manual implementation. And for the prod environment, of course, whenever you deploy, for example, the stable product service, you can trigger another Jenkins job to generate your client code for different kinds of languages at the same time. So this is a very, very time-saving operation compared to a manual implementation.
That was the basic microservice best practice, but what about using these microservices within the Kubernetes environment? This is a simple Go project. Basically, my business logic resides in the app folder, and I prefer a kubernetes folder that contains the deployment and service YAML files; you also see a deploy.sh, a Dockerfile, and a Jenkinsfile.
So, as you can see, everything is stated as configuration inside the codebase; there is no manual operation. I am not creating a Jenkins job manually inside Jenkins every time; I'm using the Jenkinsfile notation. Let's dive into these components a bit. What does the deployment YAML do here? Basically, in this configuration you see we have a project Docker image, right? This is the event service, and this is the Docker image of this project, and I am saying that this deployment will have three replicas.
That means when you deploy this, you will see three pods under this deployment's ReplicaSet, and the other stuff. We also provided some environment variables; one of them is the host here. When you deploy this one, it is actually a black box: you cannot access it from the outside. You can, of course, do tricks, for example a port-forward, or you can do a kubectl exec and see if it is working in there. But basically this is a black box. In order to expose it to the outside, you can use the Service notation here.
When we defined this Service, you need to be careful about the labeling architecture. So here, when you deploy this one, you are saying: OK, in order to access this deployment unit, this Service maps to that specific deployment. Actually, the main point here is the selector. This selector is saying: whenever someone calls me, for example via ingress (you remember the nginx ingress rule), I will find a component that has the event label, and I will propagate the request to it.
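A hedged reconstruction of the pair being described; the image tag, labels, and the HOST variable are illustrative, since the talk's exact manifests were on the slides:

```yaml
# Deployment: three replicas of the event service image.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: event-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: event
  template:
    metadata:
      labels:
        app: event
    spec:
      containers:
        - name: event-service
          image: example/event-service:1.0   # illustrative image tag
          ports:
            - containerPort: 8080
          env:
            - name: HOST                     # illustrative env variable
              value: "0.0.0.0"
---
# Service: routes to any pod labeled app=event.
apiVersion: v1
kind: Service
metadata:
  name: event-service
spec:
  selector:
    app: event
  ports:
    - port: 80
      targetPort: 8080
```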
That labeling strategy is the main point of this. So this is how you expose a service to the outside. But we said that you can have one cluster or multiple clusters; how are you accessing them? Of course, we are using kubectl with a kubeconfig, but configuring that is another topic, so I'm not deep-diving into it right now.
Basically, you clone your project, you go inside the project, and you run kubectl apply -f with the folder. When you do this, every YAML file will be automatically pushed to the Kubernetes API by using the kubectl command, and all the necessary resources will be created on the Kubernetes cluster. But we said that we provide some environment variables, right? What about the configuration data? You may have a DB password, some other kinds of secrets, et cetera. Basically, you can use Secrets, as in this command.
I am saying: in the microservice namespace, just create a generic secret; the name of the secret will be product-service, and inside it I am putting a key, DB_PASSWORD, which is equal to a placeholder for the DB password. Most probably this value comes from some credentials source; for example, if you are using a Jenkinsfile, it comes from the Jenkins credentials binding plugin. And as an example of this, the first environment variable is the static one.
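The shape of this, as a hedged sketch: the secret would be created with something like `kubectl create secret generic product-service --from-literal=DB_PASSWORD=... -n microservice`, and the container spec can then mix a static variable with one pulled from that secret (names are illustrative):

```yaml
# Environment section of the container spec: one static value,
# one value read from the product-service Secret created above.
env:
  - name: APP_MODE              # static, illustrative
    value: "production"
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: product-service   # the Secret name
        key: DB_PASSWORD        # the key inside the Secret
```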
Basically, whenever you push something to the Git repository, it will be built, tested, and deployed to the cloud provider; these are very simple steps. When you look at this Jenkinsfile, we check out the code base, we build it, we test it, and we deploy by using the deploy.sh. And basically, inside Jenkins, I do not put complex business logic inside the Jenkinsfile; if there is something complex, I extract it into a deploy.sh file and just call it with different parameters.
So when you have a look at this, this is the product part: as you can see, there is a notifySlack "started", and for the failure case we have a notifySlack "failed". Let's say that you deploy something by using Jenkins: you will be notified for every step. This is very good if you want to have a proactive mindset, because otherwise, when you deploy something, it may fail and you will know nothing about the status of the job. So if you integrate this with Slack, it will be better.
When you have a look at the deploy.sh file: basically we build the Docker image, we tag it, we push it to the Docker registry, and then we run the kubectl apply command, because the YAML files depend on the new Docker image version. But what about the actual deployment strategies? By default, Kubernetes uses rolling update. That means, for example, we said that there are three replicas, and when you do a new deployment,
you will see the replicas will be deployed one by one, because otherwise all of them would be taken down at the same time, and that means you would have downtime, right? We don't want to get into this situation, so rolling update makes sure of this.
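For reference, a hedged sketch of how those defaults can be tuned inside the Deployment spec (the numbers are illustrative):

```yaml
# Inside the Deployment spec: replace pods gradually, never dropping
# below the desired replica count.
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # keep all three pods serving during the rollout
      maxSurge: 1         # add at most one extra pod at a time
```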
But what about the canary deployment? In canary deployment, basically, we try out our new features in the production environment. Let's say that you put a new feature in your console, in the production environment.
Basically, you have a current application which has three replicas, and, be careful about this one, we have a Docker image version which is 1.0, right, version one. Then you deploy your new feature; let's say that this is the canary. It has only one replica, but it has a different version, which is 1.1, so it contains a different feature. But what is the difference between those two deployments? As you can see, the names are different, OK,
but when you have a look at these labels, all of them have the same app label. That means, whenever a request comes to the service, the service will route to both of these deployments. So for every four requests, three of them will go to the first one, and one of them will go to the second one, which is the new one; 25% and 75% will be distributed between these deployments.
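A hedged sketch of the trick: two Deployments with different names and image tags whose pods share one app label that the Service selects on (all names and tags are illustrative; the extra track label keeps the two Deployments' own selectors from overlapping):

```yaml
# Stable: 3 replicas of version 1.0.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prod
spec:
  replicas: 3
  selector:
    matchLabels: { app: console, track: stable }
  template:
    metadata:
      labels: { app: console, track: stable }
    spec:
      containers:
        - name: console
          image: example/console:1.0
---
# Canary: 1 replica of version 1.1; same app label, different track,
# so the Service below still sends it ~25% of the traffic.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: canary
spec:
  replicas: 1
  selector:
    matchLabels: { app: console, track: canary }
  template:
    metadata:
      labels: { app: console, track: canary }
    spec:
      containers:
        - name: console
          image: example/console:1.1
---
# Service selects only on the shared app label, matching both deployments.
apiVersion: v1
kind: Service
metadata:
  name: console
spec:
  selector: { app: console }
  ports:
    - port: 80
      targetPort: 8080
```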
But what is the next step? Let's say that you installed those two deployments, and you are continuously checking your log management tools, alarms, metric sources, etc., and you get nothing as an exception within a specific time interval, right? Then you can say: OK, let me increase the replicas of the new feature. It can be 50/50 or 25/75, and then, finally, you reach this point: prod is my application, I have three replicas, and, as you can see, the Docker image version is finalized.
The blue-green deployment is also a very good, controlled one, but it is a bit expensive when you compare it to the canary release, because you need to have the exact same environment twice at the same time. In order to do this one, you have the current production environment, and you deploy the new feature into another environment which is basically the same, but requests are currently coming to the current one. Let's say that you tested the new one internally, and there is nothing wrong.
Maybe you can expose this new environment to specific customers, and then you decide: OK, this is very stable; you can switch your traffic from the current one to the new one. So basically we are calling this blue-green, and after this switch operation, blue will be green and green will be blue, and that's all. Blue-green deployment contains different strategies, and you can have a look at the details maybe later. So, logging: let's say that you have a Spring Boot application, a Java application.
You are basically writing every log to the console output: nothing to a file or any other place, just to the console output. There are two types of logging, basically: node-level and cluster-level. In node-level logging, basically, you generate logs and they are kept on the node, and maybe you can view them with a log agent, but this is node-specific. Then you have a look at cluster-level logging.
Basically, you have a DaemonSet; for example, with this one you have Fluentd. This Fluentd is collecting data from the container output logs, and it will push this data to Logstash. Most probably you have a Logstash agent inside your Kubernetes environment, and this Logstash collects these logs and sends them to the final destination. It can be Graylog, it can be Loki, or other alternatives, maybe the ELK stack, whatever you want.
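A minimal sketch of the collector half of that setup (the image tag and paths are illustrative, and a real Fluentd deployment also needs a ConfigMap with its parsing and output rules):

```yaml
# Run one Fluentd pod per node, tailing the container log files.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: logging
spec:
  selector:
    matchLabels: { app: fluentd }
  template:
    metadata:
      labels: { app: fluentd }
    spec:
      containers:
        - name: fluentd
          image: fluent/fluentd:v1.16   # illustrative tag
          volumeMounts:
            - name: varlog
              mountPath: /var/log
              readOnly: true
      volumes:
        - name: varlog
          hostPath:
            path: /var/log   # container logs live under /var/log/containers
```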
So, in order to look at your logs, you can use your logging dashboard. When you have a look at this dashboard, you will see, for example, that you can filter the logs by the container name; it is very simple. For example: shipping service, user service, payment service; you can see all the logs in one place. Basically, whatever you use, it can be anything you need; right after you create your account, you will have a DaemonSet configuration ready, so you just grab it and deploy it to your environment.
As an open-source tool for tracing, you can use Zipkin, for example. In the Java case, if you are using a Spring Boot application, you can use Spring Cloud Sleuth in order to generate trace IDs and do the instrumentation, and you can also define your Zipkin URL in order to send these traces to the Zipkin server. This is one alternative, and you can also use paid tools like New Relic, Dynatrace, etc., but all of them follow the same strategy.
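A hedged sketch of the Spring Boot side of that, as an application.yml; spring.zipkin.base-url and the Sleuth sampler probability are the standard property names, while the URL and service name are illustrative:

```yaml
# application.yml: send Sleuth-generated traces to a Zipkin collector.
spring:
  application:
    name: product-service      # shows up as the service name in traces
  zipkin:
    base-url: http://zipkin.monitoring:9411   # illustrative in-cluster address
  sleuth:
    sampler:
      probability: 1.0         # trace every request; lower this in production
```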
In my case, I will mention Instana. Instana is an APM and tracing tool for your system. Whenever you install it into your Kubernetes cluster, it basically generates an architecture-level graph diagram for you. When you install it in your environment, you can see there is some region-specific infra and other items, and the important part is: if you have a microservice pool, you will see the request mesh between the microservices. Since you are seeing this one, let me deep-dive into the shop service.
For example, you can see the error-rate diagram inside this specific dashboard. Within this one, let's say that you had an incident within a specific time window: you can go here and see, OK, I got a huge latency, but what is it? When you deep-dive and drill down into the actual request trace, you will say: OK, I have a problem inside the Elasticsearch call. Maybe I am trying to run some query applied on a non-analyzed text field, right?
Maybe you need to regenerate your analyzer and then make another call. So you can easily detect these kinds of problems within your system. By the way, in order to handle this, you don't need to do any kind of specific configuration, because when you install Instana, it automatically collects the metrics from the sources, and it knows what the source is: is it Java, or is it Elasticsearch? It knows everything.
So if you have a console UI or similar, you can put the JavaScript code in also, and you can have a bundled monitoring tool for deep insight into this specific one. Here you can define your alarms again, and in case of any infra or application problem, it can automatically create an incident and notify you. It's very simple. I think we are done here. If you have any kind of questions, you know, I can happily answer. Yes, maybe.