From YouTube: Istio is a long wild river: how to navigate it safely
Description
#IstioCon2021
Presented at IstioCon 2021 by Raphael Fraysse.
At Mercari, we have a few hundred services running in Kubernetes. We spent the last year and a half integrating Istio into our microservices infrastructure at scale, with much trial and error and many lessons learned. This presentation will explain what makes Istio a long wild river and how we managed to navigate it. It will focus on several aspects:
Stabilizing Istio
Adopting Istio
Running Istio
By sharing our learnings, we hope to make Istio a long quiet river for the community.
Hi everyone, thank you very much. I'm really excited to present this session today. It's quite late in my time zone right now, but that makes me even more motivated to go through this together.
Okay, so Maria already introduced me quickly, so I will go directly to today's agenda. Today we will have a quick introduction on Istio at Mercari and two main sections: the first section is about stabilizing Istio and the second about adopting Istio. So, quickly, about Istio at Mercari.
First, a quick PR moment: Mercari is a Japanese C2C marketplace where people can sell used items. We are also in the US, and I think we had a commercial during the Super Bowl, so if you saw it, maybe you know what Mercari is. Quickly, in numbers: we have more than 200 microservices, one main cluster with around 12,000 pods, and more than 700 nodes.
First, we will quickly explain the specifications of the Istio sidecar proxy to understand how and why we had to stabilize it on Kubernetes. After that, we will explore an important question we wish we had asked ourselves a year and a half ago. Then we will explain why a full mesh is utopian and the importance of knowing only what you need. And finally, we will briefly talk about some guardrails for Istio.
So, let's start with the Istio sidecar proxy specifications. We have a pod running in Kubernetes. It has two containers: an application container and a sidecar container running the Istio proxy, Envoy. All incoming traffic must flow through the sidecar first when entering the pod, and all outgoing traffic must flow through the sidecar before leaving the pod.
There are usually two cases where problems happen: the first one is during pod creation, and the second one on pod deletion. This is quite troublesome because these events happen very often in Kubernetes, so we need to prevent issues by making sure that Envoy is started before any other container in the pod, and that Envoy is stopped after every other container in the pod.
So now that we have explained the Istio specifications and the sidecar availability issue, let's look at how Kubernetes handles pods with sidecars.
In this regard, Kubernetes lacks good control APIs to customize the containers' lifecycle in a pod. In fact, there is no official way to instruct a pod to start the sidecar container first, or to stop the sidecar container only after the app container is stopped.
A
However,
we
can
wrap
up
lifecycle
using
the
container
lifecycle
hooks
to
achieve
our
goal,
so
the
workaround
consists
of
using
lifecycle
hooks
called
bust,
stat
and
pre-stop
first
to
ensure
that
envoy
is
started
before
any
container
in
the
pod.
We
need
to
use
a
post-start
lifecycle
hook
in
the
easter
proxy
container
manifest,
so
the
lifecycle
is
basically
basically
delaying
the
next
containers
from
running
until
the
command
exists
exits,
so
it
waits
until
the
proxy
is
ready.
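The postStart part can be sketched roughly like this (a minimal sketch; the exact wait command depends on your Istio version, and newer releases expose a `holdApplicationUntilProxyStarts` option that replaces this workaround):

```yaml
# Sketch of an istio-proxy container with a postStart hook.
# The kubelet starts containers in manifest order and will not start
# the next container until this hook command exits, so the app
# container only starts once Envoy reports ready.
containers:
- name: istio-proxy
  lifecycle:
    postStart:
      exec:
        command:
        - pilot-agent
        - wait   # blocks until the Envoy proxy is live
```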
Secondly, we need to ensure that Envoy is stopped after every other container in the pod, and two things are required for this. The first one is to use a preStop lifecycle hook in the istio-proxy container manifest. This command may look a bit unfamiliar, but it's quite simple: it will simply wait for the application connections to be drained before stopping the container.
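The drain-wait command described above looks roughly like this (an assumed sketch; it relies on `netstat` being available in the proxy image):

```yaml
containers:
- name: istio-proxy
  lifecycle:
    preStop:
      exec:
        command:
        - /bin/sh
        - -c
        # Loop until Envoy no longer holds any open TCP sockets,
        # then allow the container to stop.
        - while [ $(netstat -plunt 2>/dev/null | grep tcp | grep envoy | wc -l) -ne 0 ]; do sleep 1; done
```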
The second thing required for the stop phase is to use a preStop lifecycle hook in the application container manifest. This preStop hook will first make the application sleep to let the downstream gRPC connections terminate, then drain the Envoy listeners, and finally sleep to give enough time for draining the remaining connections. The last command is about handling container restart cases, because we had some issues where an application container restart would trigger the hook and leave the sidecar in endless draining mode.
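Put together, the application-side hook might look like this (a sketch with made-up sleep durations, and the restart-handling guard omitted; 15000 is Envoy's default admin port, and `/drain_listeners?inboundonly` asks Envoy to gracefully drain only its inbound listeners):

```yaml
containers:
- name: app
  lifecycle:
    preStop:
      exec:
        command:
        - /bin/sh
        - -c
        - |
          # Let downstream gRPC clients notice the termination first.
          sleep 5
          # Ask the local Envoy to drain its inbound listeners.
          curl -s -X POST "http://localhost:15000/drain_listeners?inboundonly"
          # Give the remaining in-flight connections time to finish.
          sleep 10
```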
The last thing to do here is to make sure that you adjust your pod's terminationGracePeriodSeconds to be more than the sum of all the sleeps in the preStop hooks, because once the grace period expires, a SIGKILL is sent to the pod, dropping all the connections that were not drained yet and leading to 5xx errors. So make sure that you have enough time to finish the drain.
Also, please be careful, because these are workarounds, not solutions, so test them before using them: you need to understand what you're adding to your cluster. It's the same as copy-pasting Linux commands: ideally, you don't paste a command that you don't understand. And these workarounds should be deprecated once the Kubernetes community supports sidecar containers officially.
Next, about the Horizontal Pod Autoscaler: unfortunately, it's not very smart at scaling out pods with multiple containers. Fortunately, this was fixed in Kubernetes 1.20 by allowing a container resource to be specified as an HPA target, but we are not on that version yet. So in the meantime, we need to include the Istio sidecar in the HPA calculation.
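For reference, the Kubernetes 1.20 fix mentioned here is the `ContainerResource` metric type (alpha in 1.20, behind the `HPAContainerMetrics` feature gate), which lets the HPA look only at the application container:

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: app                # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: ContainerResource
    containerResource:
      name: cpu
      container: app       # ignore the istio-proxy container's usage
      target:
        type: Utilization
        averageUtilization: 70
```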
So now we have a sidecar added to the pod, with 100 millicores of CPU, and since the HPA takes the average over all the containers' CPU request values, the HPA will now scale out at 770 millicores of CPU. So the threshold to scale out changed, which is quite problematic for existing users who don't know about this.
So, to limit the availability risk with the HPA, we have two options. The first is to make the istio-proxy CPU request very low compared to the application's CPU request. Ideally, it should be between some x percent and y percent of the application CPU to minimize the variance, but we need to keep in mind that too low a value will make the proxy throttle, decreasing performance.
A
The
upper
bound
should
also
be
in
an
acceptable
chain
range
to
minimize
the
impact
on
the
initial
hpa,
and
it
will
only
work
also
if
the
amount
of
traffic
can
be
handled
by
the
proxy
resources,
then
the
second
option
is
to
adjust
the
hpa
threshold
to
match
the
original
cpu
absolute
target,
in
this
case
700
miracles.
So,
for
this
specific
case,
we
would
need
to
reduce
the
target
percentage
to
63.6
percent
to
have
the
same
scaling
behavior
as
before.
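As a worked example (assuming an application container requesting 1000m CPU, which is what makes these numbers add up): 70% of 1000m gives the original 700m absolute target; after adding the 100m sidecar, the pod requests 1100m in total, and 700m / 1100m is about 63.6%, so the adjusted HPA fragment becomes:

```yaml
# HPA fragment with the target adjusted for the sidecar's 100m request.
# averageUtilization must be an integer, so 63.6% is rounded down.
spec:
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 63   # ~ 700m / (1000m + 100m)
```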
We currently apply the second option automatically. Both options have their drawbacks, and the main one is that you need to involve users in the calculation or do it yourself. In both cases, it's a huge time consumer, and it's pretty hard to estimate the ideal Istio sidecar container CPU capacity and resources to use; we talk about this in the second part of the presentation.
You have so many things to check when you add proxies everywhere in your infrastructure, and this increases with the number of features you're using. The second biggest time consumer is spreading the adoption: evangelizing, sharing knowledge, convincing the business, convincing your users. The human interaction takes a lot of time and it's very hard. And the last thing is supporting the new features.
Learning, mastering, abstracting: all those things take time, especially the abstraction. Considering that, we think that to succeed in adopting Istio, you first need, and this is very important, dedicated resources for it; the more, the better. We had the opposite case, where we had few people, and it took time despite our best efforts. Then you also need good in-house knowledge of networking: people who know about Linux, Kubernetes and cloud networking.
And finally, you need mechanisms to improve the reliability of Istio, such as rules, processes, abstractions, etc. Also, please choose your fights and start small with a few simple features, such as injecting the sidecar, doing out-of-the-box HTTP/2 load balancing, or traffic shifting for canary releases. The important point is to build confidence in the system and understanding of Istio. Then you can onboard some users, get feedback, improve, rinse and repeat. So far this sounds like a dream; now let's leave the dream.
The reality is that the control plane is burning down when pushing your thousands of service updates to the hundreds of proxies running; that your proxies are OOM-killed every few minutes because they cannot handle the change frequency; that your proxies are heavily CPU-throttled and consuming CPU even without traffic, which is incredible; that your resource usage is tremendous because of that; and, finally, that your Envoy configuration files are so terribly big that you would not even want to look into them.
It is even written in the official documentation, although it's formulated differently: reference values are only disclosed for when namespace isolation is enabled. So what about namespace isolation? This is actually done with the Sidecar CRD, which is a bit of a misleading name from time to time.
It basically allows you to control the exposure of the mesh configuration to specific proxies, based on namespaces or labels. Here we have a simple Sidecar example. I will not go deep into how the Sidecar resource works; we only really need to know that we have an allowlist that shares the listeners, clusters and endpoints with the selected proxies so they can leverage the Istio features.
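The kind of Sidecar resource discussed here might look like this (a hypothetical example; the `hosts` entries use the `namespace/dnsName` form, and `istio-system/*` keeps the control-plane services visible):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Sidecar
metadata:
  name: default
  namespace: service-a      # applies to the proxies in this namespace
spec:
  egress:
  - hosts:
    - "istio-system/*"      # always allow the Istio control plane
    - "service-b/*"         # declared dependencies of service-a
    - "service-c/*"
```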
Next are some graphs we took from our Datadog production Istio monitoring, and you can see that it's clearly night and day. The top graph is without the Sidecar resource: you can see many spikes happening periodically. With the Sidecar resource implemented everywhere, it almost never spikes, and when it does, it's very, very controlled. I think this was taken when we only had 10 percent of our pods and services with Istio in the cluster, so I can't even imagine the difference now.
So it looks great, but there is a drawback to this approach: you need your services to know their dependencies, document them and update them. If that wasn't the case before, it may not be great for your users; they may feel that it asks too much from them. And the thing is, when a dependency is not in the allowlist of the Sidecar resource, the service mesh features will not be available for that traffic.
That causes a lot of different issues because of the passthrough cluster; there is more information in the link if you're interested. We found some approaches to mitigate that. First of all, in any case, do not expose the Sidecar CRD to users. Instead, try to use a service definition to generate the Sidecar resources; the users don't really need to know the underlying implementation.
You can use a format that you can reuse for other purposes, such as, for example, an egress security policy. There are other options you can use: protocol-specific traffic sniffing, like gRPC call discovery, to find the dependencies, or maybe some eBPF magic to get the service calls. And by the way, if anyone does that, we are really interested in hearing more about it.
So don't hesitate to reach out to me. We are using the first approach because it's protocol-agnostic and it works before having live traffic, whereas the second and third options require you to have some traffic flowing first. The last part of stabilizing Istio is about guardrails. This is an excerpt from my presentation last year. Basically, the service mesh is common to all users, and any change to it spreads across the whole mesh.
The first guardrail is to leverage linters, such as conftest, to catch issues at the CI level, keeping a short feedback loop for the user experience. The second is to leverage admission webhooks, such as OPA Gatekeeper, to protect your resources and check what cannot be checked at the linter level, especially the cluster inventory. It's important because, ultimately, the source of truth is what is in your cluster at any time, so if you have your rules matched against it, that's the most relevant place.
If you want more details, please check my presentation from last year about preparing the guardrails for Istio at scale. Now that we are done with stabilizing Istio, let's summarize the takeaways. We first saw that Kubernetes doesn't handle the sidecar container lifecycle well, and that we need to use the postStart and preStop container hooks to gracefully handle the pod lifecycle.
So now that we have stabilized Istio, let's move on to the adoption part, which is equally important in my opinion. In this section, we will go through several adoption challenges that blocked us or severely slowed us down.
The first one is about our move from client-side HTTP/2 load balancing to Envoy. At Mercari, we use gRPC quite heavily in our services, but Kubernetes is pretty bad at load balancing it, so we solved it with client-side load balancing coupled with headless services. The interesting thing is that, to us, headless services are what ClusterIP services are to most people; we call a ClusterIP service a "non-headless service", which is very interesting and confusing for many people sometimes. However, our kube-dns was not happy at all with all the DNS requests issued by that client library.
So we were considering some options, and then Istio came along with its awesome out-of-the-box HTTP/2 load balancing capabilities. We tried it as-is with our existing gRPC services, but the result was that weird 5xx errors were happening on upstream service rollouts. No matter how well our services handled graceful termination, Istio would make the headless services behave worse. As a conclusion, we stopped using headless services with Istio and gradually migrated to ClusterIP services.
The next adoption challenge we faced was the label selector update for the app and version labels in Kubernetes deployments. First, I want to ask a question to everyone in the audience, even though I cannot see you: is there anyone in the audience who was prescient enough to use the app and version labels before adopting Istio? If you're like us, chances are huge that you have to modify your deployments to add these labels. But why do we want these labels anyway?
First we had the headless services, now the labels, and we were wondering what was coming next. Istio is really not only about adding sidecars; it's very important to realize that there are a lot of implications and preparation required. But fair enough, let's do it. We have four steps to go.
The first is to create a new deployment with a new name, because the deployment's label selector is immutable (and so is its name), with the app and version labels. Then we make sure that the existing service is serving both the old and the new deployments. Then we can create the HPA for the new deployment to make sure it is scaled properly, and finally delete the old deployment. It looks simple, but if you have to repeat this for 300 or 400 services, I wish you good luck. That said, we cannot really rely on luck in our field, can we? We need a more sustainable approach. Our current approach is to leverage the CD (continuous delivery) tooling, so we're using Spinnaker, for example, to automate that migration.
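Step one, the new deployment, might look like this (a hypothetical `service-a` example; the new name is needed because `metadata.name` and `spec.selector` are immutable, and the existing Service keeps selecting only `app: service-a` so it serves both deployments during the migration):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: service-a-v1        # new name, created alongside the old one
spec:
  replicas: 3
  selector:
    matchLabels:
      app: service-a
      version: v1
  template:
    metadata:
      labels:
        app: service-a      # labels Istio uses for routing/telemetry
        version: v1
    spec:
      containers:
      - name: app
        image: example/service-a:1.0.0   # hypothetical image
```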
We were wondering what was coming next after headless services and labels, and we got two more surprises from Istio; good or bad, that's your judgment. In fact, Istio retries all HTTP requests twice by default, so it's quite fun when you have non-idempotent APIs, and you can confess, because we are all in the same boat: you must have some APIs like that somewhere.
And it's not the only surprise: we have an even better one, which is that we cannot disable it or change it, at least not at the default level. That's terrible, because you're stuck with adding a retry policy to every single Kubernetes service that is served by Istio. So even if you started the adoption, when you want to reach other company services that don't have Istio yet, you have to do this to ensure that you call them safely when they have non-idempotent endpoints.
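In practice that means attaching something like the following VirtualService to each such service (a sketch; `attempts: 0` turns the retries off for that route, and `service-a` is a hypothetical host):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: service-a
spec:
  hosts:
  - service-a
  http:
  - route:
    - destination:
        host: service-a
    retries:
      attempts: 0   # override Istio's default of 2 automatic retries
```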
So it's a kind of loose coupling between non-Istio and Istio services. I opened an Istio issue about this last year; it received really huge support from the community and hopefully explained the problem quite well. Thankfully, the community is working on a solution, and I want to use this occasion to emphasize that contributing is important: giving feedback, helping, discussing is helping everyone to build a better Istio for everyone. And while I'm being a bit ironic at some points in these slides, I am extremely grateful to the community and maintainers for helping and improving Istio.
So thank you so much for all the hard work. But in our case, we didn't really have the time to wait for the fix. So what did we do? We forked Istio.
Forking is the kind of last resort you should consider as the very last of all your possible options, when you cannot wait until an upstream resolution happens. But in our case, it wasn't that big a deal, because it's just a one-liner change in the code: we removed all the retriable statuses that we thought unsafe for non-idempotent requests and just kept connect-failure. And it's pretty simple to release your own build of Istio.
A
You
only
need
to
build
the
easter
pilot
path
for
our
change,
for
build
the
image
project,
tag
and
use
it
in
the
east
operator
manifest
be
careful.
We
don't
ask
people
to
focus
still
it's
like
if
you
really
need
it
as
a
last
resort.
That's
what
we
did
and
the
next
adoption
challenge
is
about
issue
proxy
performance
and
capacity,
because
putting
sidecar
everywhere
has
a
huge
cost
latency,
but
mostly
compute
resources,
some
reference
values
from
the
community.
I
will
not
go
through
there,
but
we
need
to
ask
ourselves
what
do
we
want
when
implementing
istio?
Then, how do you define a common answer to the previous questions? It's nearly impossible, because it changes so much from one workload to another, and you need to involve a lot of people to understand and think about this.
A
So
considering
that
we
need
to
think
of
use
cases
that
will
give
us
an
answer
and
the
cost
part
is
in
partly
answered
by
the
compute
cost
of
istio,
and
there
is
an
effect
in
istio
that
when
you
enable
store
in
all
parts
in
a
cluster
for
n
parts,
you
have
n
cycles
and
the
case
one
we
we
choose
here,
for
example,
is
one
size
fits
all.
So
you
you
take
your
biggest
workload.
You
put
the
default
size
to
fit
that
capacity,
and
then
it's
easy
to
set.
A
You
have
one
thing:
you
know
it
will
fit
everything,
but
the
cost
is
terrible,
because
you
have
so
much
waste
on
everything
that
it
may
be
totally
crazy
that
your
boss,
your
colleagues,
may
think
you're
crazy
and
the
second
case
it's
just
based
on
specific
workloads.
So
the
resource
cost
is
low
because
you
really
fit
the
perfect
value
of
resources
for
each
workload.
But
the
cost
is
tremendous
in
load,
testing
and
adjusting
each
values.
So for us, one-size-fits-all was really too costly, and maybe for you too. Then how can we adjust the sidecar size? We have the VPA, which does not work for sidecars, and the HPA, which is not applicable, so the only way we found was to load test ourselves. What we really want is a dynamic, smart autoscaler for sidecars, so if anyone has that, is working on that, or needs help on that, we would be really happy to help.
When load testing a service's Istio sidecar, we need to ask a few questions: how many RPS would we have without Istio, and also how many hops per request, so basically how many requests we generate when we get one request from the client? If we call a lot of services, we have a lot of different requests, and that gives us three use cases. In the first one, you have two requests at any time in the service A pod, and in the rightmost diagram you have five requests at any time. So, depending on the topology, the needs for that service totally differ. One quick example: with two requests, 10k RPS at the client library level is about 20k RPS handled by the sidecars, but when you have five requests, it's about 50k for the same incoming RPS. And then, if you add pods, it's totally different again, because you will have a different RPS per pod, so the sizing will be totally different.
So that's pretty hard to narrow down. Also, the Envoy concurrency setting is very important for performance. It is 2 by default in Istio, and I don't remember if it's the default in Envoy too, but for minimal performance impact you should have at least one worker per vCPU, and maybe more than that, because of how the Linux kernel manages CPU.
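The concurrency setting can be overridden per workload through the `proxy.istio.io/config` pod annotation (a sketch; 4 here is an arbitrary value for a 4-vCPU-class workload):

```yaml
# Deployment pod-template fragment raising Envoy's worker thread count.
template:
  metadata:
    annotations:
      proxy.istio.io/config: |
        concurrency: 4   # Istio's default is 2; aim for >= 1 per vCPU
```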
The last part of this section is about abstracting Istio, and let's ask some questions. Should you expose a whole new layer of YAML to people who are already overfed with it? The answer is no; if you say yes, you're kind of sadistic, I think. Should you require your users to understand every single parameter in a VirtualService? The answer is also no, and the main reason is that you're paid to improve your users' productivity, not decrease it.
So, in the same way as we build libraries and interfaces to improve productivity, we need to build the proper abstractions to maximize the added value of Istio for our users. For example, we automated the Istio onboarding, making the Istio features fully automated and managed, and it improved by a lot the user experience for developing services and the maintainability of Istio for operators. So here is, quickly, how we abstract Istio.
A
We
are
using
terraform
to
handle
the
cycad
policy
and
github
cicd
pipeline
to
to
apply
them,
and
we
are
also
exploring
qlang
to
template.
A
simple
dsl
for
managing
various
features,
such
as
full
eastern
boarding
through
managed
canary
release
with
maker,
the
one
with
baseline
and
more
coming
in
the
future.
Okay, now the takeaways of the last part, adopting Istio. We saw that headless services are erratic with Istio, so better to use ClusterIP services instead and to plan the migration wisely. Using automation pipelines to label the deployments for traffic shifting helps a lot to reduce the cost of the migration.
Istio has a risky default retry policy for non-idempotent APIs; a fork solves it temporarily, but hopefully the community proposal will be merged within this year. And having sidecars everywhere is a huge cost, so make sure to mitigate it by proceeding with testing. Finally, abstracting the Istio features is really the only way to spread the adoption and maximize the added value of Istio.
Thank you so much for joining today. If you're interested in helping us go to the next step in adopting Istio and sharing your findings with the community, don't hesitate to reach me via Twitter or LinkedIn; the link to our team's job description is in the slides, so don't hesitate. Thank you so much for today; it's been a pleasure, despite the time, to have you today. Thank you.