From YouTube: Istio July Meetup / Istio as a HyperMesh: Distributed Responsibility, Centralised Control
Description
Md Zannatul Ferdous / Software Engineer at AirAsia
Abdul Munim Dibosh / Senior Engineering Manager at AirAsia
This talk describes how we created a HyperMesh using Istio that connects these kinds of distributed infrastructure nodes from a single point, with responsibility distributed across teams and a central point of control held by the SRE team.
A: So, we are pretty excited. What we are going to share with you today is something that we are currently building at AirAsia. What we want to focus on today is mostly what we are trying to build, why we are trying to build it, and how Istio comes into the picture.

A: I think the first thing we really want to focus on is this: when you have a big company where so many different teams are working together on different products, you have many lines of business, and over time you have different products growing; to support and serve them, you end up with many different gateway solutions.

A: These keep spawning over time. At that point, the first major problem you start getting is: how do I consolidate the practice? How can I start introducing standard procedures? And when you have legacy teams and legacy systems, it becomes quite a difficult thing. That's where we're going to start today, to be very honest.

A: So today we are here with Ferdous. As AJ was saying, he's a software engineer, and we're working in the same team. We represent the AirAsia SRE team. I'm a senior engineering manager; apart from SRE I handle a few other segments of AirAsia, but SRE is one of the core segments that I take care of.

A: Okay. So that's pretty much who we are. Let's start with a bit about what AirAsia does and the kind of scale we handle today.
A: For those of you who know about AirAsia: it started its business as an airline, a low-cost airline for Southeast Asia, with operations in Malaysia, Singapore, Indonesia, Thailand, you name it, across the whole Southeast Asian region. It has also expanded into certain regions through other partnerships, and it's a huge airline network.

A: We started thinking about what else AirAsia could do with the current infrastructure that we have, and by infrastructure I mean the maintenance and operational infrastructure, and also the people and the expertise we have. That's when we started pivoting into a super app, and we can proudly say that these days, in the Southeast Asia region, we are one.

A: We started quite recently, to be very honest, but we now provide a lot of different segments. One segment is travel, under which you have airline, hotels, SNAP and many other things. Then we have an e-commerce segment, where we have fresh, food, health and beauty: different types of product segments.

A: Very soon we're going to bring an e-hailing business into Malaysia. And some of you might have seen from the press release very recently that we actually acquired the business from Gojek in Thailand, so we are going to expand our food business there as well. I'm just trying to give you an idea that we are expanding in different verticals and different lines of business.

A: So it's quite a challenging environment. Just to give you an idea, here are a few of the numbers we wanted to share. There are two sets of numbers you can see on the screen: the numbers on the right-hand side are the numbers we had before the pandemic.
A
It
was
almost
3
million
users
per
day
and
almost
almost
two
million
search
for
every
day
and
during
any
sell
event
we
would
have
like
around.
You
know,
20
to
30
k
booking
done
within
an
hour
or
some
in
some
crazy
events
we
had,
like
you
know,
50k
bookings
done
orders
done
within
an
hour
of
time.
That
was
a
pre-dynamic
situation,
but
it's
quite
great
to
see
that,
even
even
during
the
pandemic
situation,
we
we
kind
of
had
almost
a
consistent.
A
A
I
mean
they
are
either
searching
for
flights,
hotels
and
lots
of
other
things
like,
as
I
said,
there
are
e-commerce
segments
and
other
things,
and
then
just
to
give
you
an
idea
of
like
how
much
what
kind
of
infrastructure
that
we
are
handling
today
as
a
as
a
as
a
sra
team
right,
we
almost
have
around
400
gcp
projects
in
in
which
our
gcp
projects
are
consisting
of
different
types
of,
of
course,
like
the
infrastructures
are
different,
we
used
to
have
just
a
year
and
a
half
back,
I
would
say
the
the
beginning
of
2020.
A
A
That
was
the
norm
at
that
point
of
time,
and
and
that's
exactly
the
time
I
remember
I
just
joined-
maybe
like
three
four
months
back,
so
I
had
the
opportunity
to
see
it
from
very
closely
that
how
we
started
adopting
gke
everywhere
kubernetes
and
that's
where
a
lot
of
I
would
say
a
lot
of
our
like
modernized
journey
began
like
in
terms
of
infrastructure,
how
we
wanted
to
architect
our
entire.
A
You
know
ecosystem
and
since
then
it's
it's
just
one
and
a
half
year,
but
you
you
can
see
how
aggressive
we
have
been
right.
Now
we
are,
we
are
standing
in
a
in
a
moment
like
where
we
have
almost
92
percent
of
our
projects,
which
are
already
migrated
to
gke.
A: This is to give you an idea of how long we have been using Istio in our entire ecosystem, even though it's not adopted that widely yet, because we have so many legacy gateway solutions, and over time people adopted a lot of others as well, like the NGINX ingress controller, Kong, and many more, and those are still there.

A: But if you see how Istio has penetrated that ecosystem, it's pretty huge: as I said, almost 12 to 15 percent of it. Okay, so those are some numbers to give you guys an idea of what we are dealing with today, and a bit of context about why.

A: It's a lot, because the SRE team is not that big; it's a very small number of people today. But we take a certain pride in saying that everyone working in this area is quite competent, and all of them have solid experience with the cloud-native frameworks, tools and stacks.
A
So
that's
that's
why
we
we
were
able
to
pull
this
off,
and
now
we
are
moving
to
a
different
journey
which
we
want
to
discuss
in
the
in
this
talk,
so
we
have
multiple
chapters.
I
think
the
first
one
as
you
can
see,
we
we
have
decided
to
go
with
some.
You
know
a
bit
of
like
funny
or
like
a
funky
titles,
so
we
have
six
chapters
that
we
want
to
cover
today.
The
first
one
is
at
your
wits
end
and
it's
it's
basically.
A
What
we
want
to
cover
is
the
problem
analysis
from
our
site,
like
what
problem
that
we
actually
started
when
we,
what
the
problem
space,
that
we
want
to
actually
bring
in
front
of
you,
so
that
you
can
understand
what
we
are
trying
to
talk
about
and
then
and
then
what
was
our
solution
for
that
right.
So
I
I
I
want
to
like
give
a
floor
to.
I
have
been
talking
a
lot,
so
I
just
want
to
bring
for
those
and
and
hear
those.
A
Why
don't
you
go
ahead
and
tell
us
a
bit
about
like
what
problem
that
we
had
with
the
standardization.
B: Thanks, Munim. You guys can hear me properly, right?

B: Okay. So, as Munim mentioned, throughout the years, as we tried to adopt Kubernetes, different teams came to adopt it at different points in time, and the definition of "standard" was different at each of those points in time.

B: Due to this rapid adoption of Kubernetes, each team tried to adopt the best practices that were available at that moment. So now, after one and a half years, what we have is almost the same or similar sets of tools that all solve almost identical problems.

B: Since we have a variety of different tools that solve the same problem, we kind of have to figure out a common group of features and how to make them work across all those stacks. And because of that variation, standardizing a common practice, an optimal practice, or a secure practice becomes very, very difficult, because not all the tools necessarily support all the features.
B: Let's say, for example, we are trying to introduce a certain feature, but that feature is not available in the open-source version of the tool, or is not supported at all. So if you want to guide the different teams ("when you are using this tool for a certain gateway, a certain service mesh, a certain load balancer or ingress"), it becomes really hard to standardize, because there are so many different things in adoption across the different dev teams.

B: The second problem that we see is that at AirAsia the DevOps team is kind of a centralized team: individual SREs are assigned to different lines of business and to different dev teams.

B: So when there is a troubleshooting issue, you kind of have to jump into a certain ecosystem and, on the fly, learn exactly which tool they are using, how to upgrade it, and how to fix it. And that knowledge is not necessarily transferable.

B: Let's say you are working with a certain stack, a certain tool. Over a period of time you learn how to upgrade it and sort out the recurring problems with it; then, if you switch to another troubleshooting issue some days later, you will see a completely different picture.
B: So the knowledge is not necessarily transferable. And if I talk about control: since there are so many entry points into your whole ecosystem where traffic flows in, when you want to impose some control, some standardization, it becomes really hard to enforce those rules, because there are so many different places.

B: Traffic is penetrating your system in so many places that the attack surface becomes exponentially bigger, and any control measure you want to take requires that much more effort. And when different teams are adopting different variations and versions of tools... I remember that for a certain API gateway we actually had three different versions of the same tool, adopted at three different points in time.

B: Upgrading that, or managing its cost, was really troublesome, because it was almost like developing expertise in three different tools: the versions of the same tool from different points in time were significantly different. And fine-tuning the resources, and knowing how you actually want to use a certain tool...

B: ...those things come with experience. Unless you put in a certain amount of time, you don't get to the granular level of understanding of how to optimize the cost of a certain tool. Can we go to the next slide?
B: Since the responsibility and the control were scattered across a number of different teams, it was a recipe for disaster. Every now and then a certain system breaks, and the DevOps or SRE team has to jump into the effort and learn on the fly how to debug the system, and what the intricate details are...

B: ...that are needed to debug it, and then take a decision based on that. And any time the system is down, it's directly tied to business hours, and you lose business. This is an inherent complexity of a large legacy system, when you try to organize it.
A
I
want
to
add
a
few
things
actually,
on
top
of
what
you
said
and
run,
I
I
mean
just
to
just
to
give
a
a
few
more
perspectives
of
what
imran
just
explained.
I'm
I'm
sure,
like
you,
guys,
have
already
grasped
the
idea
of
like
what
we
are
trying
to
tell
here,
but
I
just
want
to
give
a
few
more
perspectives
of
it.
Like
you
see
the
the
problem
that
imran
explained
that
standardization,
when
we
are
talking
about
this
standardization,
it's
it's
huge
because
of
two
things.
A
One
is
number
of
ways
of
doing
same
thing
as
as
imran
said
already
like.
If
I
have
to
bring
in
a
gateway
so
most
in
most
of
the
teams,
they
just
need
a
gateway.
Maybe
they
just
need
a
like.
Let's
say
we
haven't,
we
have
multiple
api
gateways
right
and
each
of
the
api
gateway
is
hosting
different,
different
subsets
of
apis
from
different
different
teams.
A
Then
it
becomes
really
problematic
for
you
to
come
up
with
something
and
for
some
standardization
it's
not
easy.
That
penetration
is
very
hard
right.
There
is
a
lot
of
frictions
why
I
have
to
do
that.
Why
I
have
to
do
this
right.
That
is
right
there
and
if
you
have
to
bring
in
that
standardization
across
you
have
to
come
up
with
a
very
good
strategy
right.
You
have
to
have
a
proper
template
templates
in
place,
or
maybe
a
certain
you
know
like
structure
for
people
to
follow
lots
of
documentations
on
board
them.
A: On top of what Ferdous said, I think the major problem with control that we were looking at (we will show you this in a later part of the talk) is that airasia.com actually points to certain layers, and behind that there is a reverse proxy, and that reverse proxy points to the different application and product upstreams. What happens is, whenever a certain portion goes down...

A: ...everyone has their own segregated tools for observing the failure and taking action against it, but it's not centralized control. If a particular upstream is down, that team knows, but that information has to propagate to everyone else: "hey, this particular upstream X is down." That centralized observability was missing. I mean, we had all the data coming in from everywhere...

A: ...but we were not able to put it together and stitch everything so that we got that visibility, and as an SRE team we really needed that; it was missing. And to add to the cost part of it: since every team has a different stack running (somebody is running this, somebody is running that), everything incurs two different types of cost. One is infrastructure cost, and the other is maintenance cost: development cost...

A: ...deployment cost, operational cost, you could say. That was there, and it kept piling up over the period of time. If I had just one single kind of technology for this particular problem, I wouldn't have had that. I would have known...
A
What
exactly
is
a
resource
consumption
that
I
have
to
go
for
and
I
would
have
planned
it
accordingly
and
that
way
I
would
have
never
ended
up
with
over
provisioning
my
resources
for
those
purpose
right,
but
today
that's
what
is
the
case.
We
have
over
provision
a
lot
of
our
those
get
ways,
and
you
know
reverse
proxies
and
systems,
and
and
it's
very
hard
to
control,
come
on
coming
up
with
a
standardized
resource
allocation
plan
for
each
of
them,
because
each
of
the
tools
are
different.
The
resource
consumption
trend
is
different.
A
Their
resource
consumption
process
is
different,
it's
very
hard,
and
that,
of
course
adds
up
to
your
cost
right,
and
I
think
this
this
part
imran,
has
beautifully
covered
and
already
told
us
about
so
yeah.
I
we
just
wanted
to
give
you
a
very
high
overview
of
like
the
problem
we
were
having
as
an
sra
team
and
the
problem
that
we
were
looking
at
is:
is
it's
not
immediately
visible
to
other
people,
other
part
of
the
organization
because
they're
not
handling
it,
but
when
we,
but
we
were
receiving
scattered
complaint
that
hey?
A
You
know
what
we
are
using
this
and
today
that
this
problem
happens,
we
have
to
always
either
go
to
you
or
we
have
to
fix
it,
blah
blah
blah.
They
are
using
this.
Why
cannot
we
use
that?
So
all
of
these
questions
were
piling
up.
All
of
these
complaints
were
coming
up
right.
So
all
all
of
this
together
eventually
made
us
think
about
something
else,
something
that
we
really
wanted
to
come
up
with.
A
So
that's
what
is
about
what
we
are
going
to
discuss
next,
the
solution
thinking
before
before
going
there
imran
do
you
do
you
want
to
add
anything
on
top
of
like
what
I
just
mentioned,.
A
Great
yeah
imran,
but
I
I
I
think,
like
the
friends
in
the
call,
I
think
we
can
give
them
an
idea
about
like
how
we
have
adopted
while
we
are
at
it
before
going
to
the
solution.
Why
why
we
went
for
a
steal?
Maybe
we
can
we
can?
We
can
give
them
a
brief
idea
about
like
before,
even
proposing
what
solution
we're
going
to
do.
You
can
basically,
I
hope,
already
know
that
we
we
came
up
with
something
that
that
depends
on
steel.
That's
on
top
of
us
there
right.
A
B: Yeah, that would be an interesting discussion. Before actually coming up with the proposed solution, we had been using Istio in different projects for about a year, and in several of those projects we had been using it as a service mesh, and a lot of effort went into buy-in: getting the different development teams used to the fact that these things are necessary.

B: It was actually successful. For example, in some of the projects we used Istio for JWT authentication; in some of the projects we used it as a very basic ingress; and in some of the projects one of the core features that we actually used was canary deployment, at the different points in time when we used Istio.
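(For readers who haven't used these features: the JWT case mentioned here is typically expressed with Istio's RequestAuthentication and AuthorizationPolicy resources. A minimal sketch, with a hypothetical workload and issuer; none of these names come from the talk.)

```yaml
# Validate JWTs on a workload, then require a valid token on every request.
apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: jwt-auth
  namespace: bookings            # hypothetical namespace
spec:
  selector:
    matchLabels:
      app: bookings-api          # hypothetical workload
  jwtRules:
  - issuer: "https://accounts.example.com"                        # assumed issuer
    jwksUri: "https://accounts.example.com/.well-known/jwks.json" # assumed JWKS endpoint
---
# Reject requests that carry no (or an invalid) request principal.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: require-jwt
  namespace: bookings
spec:
  selector:
    matchLabels:
      app: bookings-api
  action: ALLOW
  rules:
  - from:
    - source:
        requestPrincipals: ["*"]
```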
B: Throughout, the user experience, as a developer and also as an SRE individual, was quite good. The abstractions you use to adopt Istio in your different projects are actually quite similar to how you use Kubernetes in your projects.

B: The abstractions are quite similar. And one of the cool features that we chose Istio for is that it gives you a way to decentralize your configuration. In a lot of the open-source tools that we had used so far, different ingresses and different API gateways, the common pattern is that all the configuration is centralized in one place; and since the configuration is in one single place, the control was in one single place as well. Even though the application details are actually with the different dev teams: they know how their applications should be accessed, they know how their CORS policy...
B: ...actually works, they know which headers should be allowed and which headers should be exposed. All those necessary details sit with the developers themselves, but when they needed to change anything in those configuration details, they had to come to the SRE team and ask for a change request: they would ask for a certain change, the SRE team would make those changes, and then the dev team had to test those things out. You can actually see...

B: ...there is a lot of back-and-forth communication between two different teams: one having the control but lacking the context, the other having the context but lacking the control to make its own changes. That is one of the benefits Istio gives.

B: We will show you a pattern at a later point in this presentation: if you give a certain development team their own virtual services, they can define their own paths, their CORS policy, their header requirements and everything, and they are responsible for changing that. And when a development team needs access to certain other applications that are managed by a completely different team, they can communicate among themselves.

B: The SRE team doesn't have to intervene in the middle or control the communication in the middle. So this is way less communication, and the development teams can actually focus on the development bit.
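(To make that pattern concrete, here is a minimal sketch of the kind of team-owned VirtualService being described: path matching, a CORS policy and header tweaks owned by one dev team. The hostnames, the gateway name and the upstream are placeholders, not the actual AirAsia configuration.)

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: food-routes              # hypothetical: one file owned by the "food" team
  namespace: hypermesh
spec:
  hosts:
  - "www.example.com"            # placeholder for the shared domain
  gateways:
  - hypermesh-gateway            # assumed name of the shared ingress Gateway
  http:
  - match:
    - uri:
        prefix: /food            # this team's slice of the shared domain
    route:
    - destination:
        host: food-upstream.example.com   # placeholder upstream host
    corsPolicy:                  # the team owns its own CORS rules...
      allowOrigins:
      - exact: https://www.example.com
      allowMethods: ["GET", "POST"]
      allowHeaders: ["authorization", "content-type"]
    headers:                     # ...and its own header requirements
      response:
        add:
          x-served-by: hypermesh
```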
B: This was one of the primary selling points that led us to the idea: since the context is with the different development teams, if we can distribute the responsibility to those teams, then what would eventually happen is that their development speed would gradually increase, because there is less overhead. That was one of the primary motivations that gave birth to this idea.
A
Yeah,
I
think
I
think
one
important
part
actually
add
to
on
top
of
what
you
just
said
is
would
be
to
to
put
it
together
in
in
like
this
is
that
we
in
different
different
parts
of
the
team,
different
api
gateways,
different
solutions
that
they
were
using.
They
had
different
different
features.
They
needed
right,
I
think
when
we
started
mapping
them
together.
A
That,
actually
is
a
the
relevant
selling
point
of
why
we
built
hyper
mesh,
but
why
we,
if,
if
we
want
to
answer
why
we
chose
a
steel
in
the
first
place,
is
actually
that
it's
actually
a
bundle
of
whatever
we
we
were
looking
for.
We
did
a
very,
I
would
say,
rigorous
feature
mapping
like
what
what
we
really
want
to
go
where
we
really
want
to
go.
What
are
the
things
that
we
want
to
use
from
these
and
that
I
have
a
number
of
teams,
a
number
of
teams
out
of
a
number
of
teams.
A
This
team
needs
that
this
team
is
y
z.
What
not?
How
can
I
really
map
back
to
them
to
a
single
two?
How
that
will
reduce
the
overhead,
the
technical
overhead
that
we
will
have
over
the
period
of
time?
We
don't
have
to
know
any
number
of
things.
I
just
can
know
one
single
particular
technology,
one
single
particular
tech,
and
that
will
work
for
everyone
right.
That
was
one
of
the
one
of
the
thing
that
we
really
found
istio
as
a
as
a
as
an
ideal
tool.
A
To
be
very
honest,
the
second
one
is
like
what
imran
mentioned.
I
would
like
to
give
a
bit
more
emphasize
on
that.
Since
people
are
already
I
we
have
been
using
one
kubernetes
for
one
and
a
half
year,
people
are
already
used
to
the
kubernetes
manifest,
so
it
it's
actually
much
easier
for
us
to
onboard
them
with
a
similar
configuration
language.
Similar
configuration
pattern
obstructions
it's
much
easier
whenever
we're
going
to
them.
It's
much.
Oh,
I
understand.
Oh
there's
a
virtual
service,
there's
a
this
resource.
A
I
understand
that
there's
a
resource,
it's
much
easier
to
communicate
with
them
in
that
language
in
that
way,
in
that
thought
process
and
to
really
convince
these
people
to
onboard
that's
also
another
selling
point
because
of
the
easiness
because
of
the
you
know
already
having
a
quite
good
penetration,
which
is
like
your
your
mental
model
is
already
quite
used
to
right
yeah.
So
these
are
a
few
core
reasons
why
why
we
chose
the
steer
for
whatever
we
are
going
to
show
you
in
in
next
few
slides
cool,
okay.
A
So
today,
let's
start
with
before
even
going
into
where
we
want
to
go.
A
So,
as
you
can
see,
our
asia.com
is
actually
the
base
airasia.com
food,
slash
fresh
is
the
base
of
all
the
products
that
airasia
has
today
as
a
super
app,
so
that's
basically
pointing
to
a
cdn
that
dns
is
pointing
to
city
and
then
a
wife,
and
then
that
wealth
is
pointing
to
a
reverse
proxy
and
then
that
reverse
box
is
pointing
to
all
these
application
of
upstreams
right.
All
the
you
know
the
product
that
we
just
mentioned
same
way.
We
have
api
gateways,
for
example.
A
Let's
say
we
have
get
way,
one
g1.api
ratio.com
right,
it's
actually
pointing
to
a
wav
again
and
that
ref
is
pointing
toward
gateway.
What
is
the
technology
of
that
gateway?
Is
not
it's
not
the
purpose
of
this
discussion?
Let's
say
it's
something:
some
some
gateway,
that
gateway
is
pointing
to
a
certain
set
of
bots
different,
different
apis
from
different
different
teams.
A
Some
other
api
gateways
there
as
well
g2.apiration.com
for
some
reason.
For
some
reason
we
have
to
maybe
create
another
api
gateway,
and
that
is
containing
another
subset
of
api
upstream,
even
though
I'm
showing
you
just
two
api
gateways
here,
there
are
actually
so
many,
and
even
though
the
base
setup
for
airasia.com
is
with
one
reverse
proxy,
but
multiply
that
with
number
of
different
environments,
you
have
multiple
different
environments.
You
have
different
different
same
setup
like
same
reverse,
proxy
setup,
but
different
different
configurations
are
there.
A
So
it's
a
nightmare
to
be
very
honest
right,
this
setup
is
a
nightmare.
So
our
approach
here
is
first,
how
can
I
really
get
rid
of
you
see?
There
are
if
I
start
with
an
application
path,
it's
three
layers
right,
cd
and
wav
reverse
proxy.
So
I
start
with
an
api
gateway.
It's
like
craft
and
then
get
me.
A
What
is
the
exact
point
here
that
we
can
really
replace
with
something
else
and
that
something
else
can
start
having
pointing
to
applications
as
well
as
api
paths
as
well,
and
the
major
pointer
here
is
that
for
your
consumers?
Should
nothing
should
change
if
anyone
is
using
g1.api
ratio.com
for
them?
It
should
remain
that
way.
It
should
not
change.
Nothing
should
be
changed
there
if
they're
using
g2.api
ratio.
That
is
absolutely
fine.
It
should
be
the
same
way
but
you're
gonna
change.
A
Something
in
this
architecture,
replace
something
and
put
something
in
so
that
everything
remains
same,
but
you
get
a
little
bit
benefit
on
top
of
wherever
you
are
and
you
move
towards
a
direction,
a
vision
that
you
can
eventually
evolve
to
something
more
right.
That's
where
we
are
coming
up
with
hyper
mesh.
Now
this
hyper
mesh
is
just
another
layer,
but
it's
built
using
istio.
It
has
so
many
other
things
we
we
will
cover
a
bit
of
it,
maybe
in
a
few
of
our
next
slides.
A
But,
yes,
we
have
hyper
mesh
layer
now
that
hyper
mesh
layer
is
built
using
istio
and
what
we
are
trying
to
really
replace.
If
you
just
compare
that,
instead
of
n
number
of
different
setups
for
applications
and
api
gateways,
why
don't
we
combine
them
together?
In
one
layer,
everything
comes
together
in
one
layer
from
your
waff
you're,
still
pointing
to
that
hypermesh
ip
for
each
and
every
domain.
A
You
are
pointing
to
that
hyper
mesh
ip,
but
that
hypermesh
has
that
much
segregation
of
application
of
streams
versus
api,
app
streams,
and
those
of
you
who
have
already
worked
with
this
to
I'm
sure
you
know
it's
it's
very
easy
to
do
that
right,
it's
very
easy!
So,
since
it's
very
easy,
we
are
just
leveraging
that
easiness
of
doing
things
and
we
are
bringing
in
value
now
what
value
you
are
looking
at
today.
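(In Istio terms, "one layer for every domain" maps naturally onto a single Gateway resource that terminates TLS for all the hosts. A minimal sketch with placeholder hostnames and certificate secret, not the actual AirAsia config.)

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: hypermesh-gateway        # assumed name, reused in the VirtualService sketch above
  namespace: hypermesh
spec:
  selector:
    istio: ingressgateway        # the default Istio ingress gateway deployment
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: hypermesh-tls   # assumed TLS secret covering the hosts below
    hosts:
    - "www.example.com"          # application traffic
    - "g1.api.example.com"       # one API-gateway domain
    - "g2.api.example.com"       # another
```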
A
If
we
can
go
in
this
direction,
is
first
of
all,
you
are
getting
rid
of
all
those
different
setups.
You
had
that's
maintenance
overhead.
For
you,
it's
gone
out
of
the
equation.
You
can
throw
it
out
gone
right
and
the
second
one
is
the
cost
of
maintaining
all
those
different
different
infrastructures
and,
like
whatnot,
you
don't
have
to
worry
about
them
at
all.
It's
gone
out
of
the
equation.
A
Now
this
is
what
is
hyper
mesh,
but
actually
hypermesh
is
way
more
than
that,
but
while
we
are
at
it,
we
want
to
show
you
a
few
few
simple
things
in
terms
of
like
when
we
started
designing
hyper
mesh
because,
as
you
can
understand,
when
you
are
on
boarding
the
entire
ecosystem
of
systems
for
a
very
large
organization,
I
mean
it's
not
large,
like
google,
of
course
no,
but
it's
quite
large
right
and
it's
quite
large,
for
I
would
say,
a
14
people,
sred
right
when
you
are
doing
that,
and
at
that
moment
of
time
in
in
this
layer
you
you
are
putting
everything
in
you
have
to
come
up
with
certain
very
detailed
integration
plans.
A
How
will
these
integrations
happen
right?
What
will
be
the
folder
structure?
That's
also
a
very
important
thing
by
the
way,
how
you're
going
to
structure
each
and
everything
under
your
hypermesh
that
central
source
code
of
your
hypermesh
is
very
important,
because
eventually,
what
you're
going
to
do,
as
you
can
understand,
see
airasia.com
might
be
a
responsibility
of
a
certain
team
right,
maybe
home
page
team.
Let's
see,
the
g1.apiration.com
is
maybe
a
certain
team's
dependency
as
well
here
this.
A
A
So
that's
when
you
have
to
plan
in
it
in
a
way
so
that
at
some
point
you
can
onboard
different
ethernets
and
you
can
tell
them
that
you
know
what
this
particular
segment
of
source
code
is
yours
to
manage,
maintain
right
and
that
way
you
give
them
it's
it's
in
central
place,
but
it's
still
segregated
for
team
to
team.
They
can
change
it
from
one
single
place
do
work
and
we
don't
have
to
worry
about
multiple
distributed
different
different
configs
here
and
there
and
we
can
standardize
everything
from
one
place.
A
So
that's
our
value
proposition
in
the
very
beginning
for
hypermesh,
there's
many
more
but
before
before
going
there.
Let's
take
a
look
at
our
folder
structure,
how
we
structured
it
and
then
I
think,
a
bit
of
demo
like
how
it's
working
today,
a
bit
of
local
demo.
I
mean
it's
hosted,
but
we
we
are
doing
the
host
from
our
local
and
then
pointing
it
to
there.
So
imran
is
gonna.
Take
over
again.
B: Let me share my screen. I'm actually going to first show you how we have structured the whole setup, so that it's easier for the different dev teams to be bothered only about their piece of the pie, and to extend their configuration from only that segment. The thing is, the different upstreams might, as of right now, be using completely different tools and technologies.

B: We want to give them enough room and space so that their gradual migration towards Istio in their upstream ecosystem can take its own time; different teams might have their own different timelines for Istio adoption. We want to bring Istio across the board, but we cannot switch every dev team over to Istio overnight.

B: So what we started to explore is: how can we centralize all the configuration and make certain changes while keeping the disruption for the upstream users, the consumers, the different dev teams, as minimal as possible? At a certain point in time, the DNS record update should be just a cut-over of traffic from their own legacy ingress to this HyperMesh, and it should be as seamless as possible.
B
So
if
I
actually
just
show
you
inside
the
service
upstreams
and
in
this
upstream
folder,
they
will
just
have
their
different.
You
know
domains
and
inside
their
domains.
They
will
have
their
destination
rule
service
entry
and
virtual
service.
B
This
destination
service
entry
is
just
to
indicate
that
which,
upstream
this
this
entry
should
point
to,
and
in
this
virtual
service
they
would
actually
define
all
their
configurations
regarding
their
course
policy.
Their
you
know,
traffic
routing
rules
and,
if
they
want
to
you,
know,
do
any
subset
for
canary
deployment
on
and
things
like
that,
but
all
the
configuration
actually
should
reside
in
this
folder
and
a
certain
dev
team
should
only
be
bothered
about
this
one
specific
folder,
all
the
other
structures.
This
plug-in
based,
you
know,
feature
adoption
it.
B
It
will
come
in
in
a
little
slide,
but
just
to
get
a
chunk
of
the
traffic
routing
all
the
traffic
that
is
actually
relevant
to
your
particular
upstream.
You
just
add
a
folder
and
add
these
configuration
files.
You
should
get
the
traffic
to
your
upstream
just
like
that.
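(The layout being described on screen is roughly the following: one folder per upstream domain, each holding the three Istio resources, with environment overlays alongside. The names here are placeholders reconstructed from the description, not copied from the repo.)

```
hypermesh/
├── upstreams/
│   ├── food.example.com/            # one folder per upstream domain
│   │   ├── service-entry.yaml       # which upstream this entry points to
│   │   ├── destination-rule.yaml    # connection/TLS policy for that upstream
│   │   └── virtual-service.yaml     # routing, CORS, headers, canary subsets
│   └── g1.api.example.com/
│       ├── service-entry.yaml
│       ├── destination-rule.yaml
│       └── virtual-service.yaml
└── overlays/
    ├── staging/
    │   └── patch.yaml               # per-environment tweaks
    └── production/
        └── patch.yaml
```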
B: So that's how we have structured the different upstream folders. And in the overlays, if you have different requirements for different environments, you can actually just write a patch file and modify your requirements based on your different environments.
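(The talk doesn't name the tool, but "overlays" plus "patch file" is the Kustomize pattern; assuming that, a per-environment override might look like this. The resource names and hosts are the placeholders from the sketches above.)

```yaml
# overlays/staging/kustomization.yaml -- a sketch, assuming Kustomize-style overlays
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../upstreams/food.example.com
patches:
- target:
    kind: VirtualService
    name: food-routes
  patch: |-
    # point the staging environment at a different upstream host
    - op: replace
      path: /spec/http/0/route/0/destination/host
      value: food-upstream.staging.example.com
```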
B
So
one
of
the
one
of
the
one
of
the
primary
benefits
of
using
you
know
istio
in
this
setup.
Is
that
not
only
it
not
only?
It
actually
supports
you
know
a
front-end
application
and
a
back-end
api
based
application.
It
can
also
actually
support
your
static,
back-end
buckets
as
well.
So
as
of
right
now,
we
actually
have
a
variety
of
different
upstream
applications.
It
might
be
an
app
engine,
it
might
be
a
vm,
it
might
be
another
kubernetes
cluster.
B
The
upstream
cluster
might
even
have
a
service
mesh,
but
it
doesn't
matter.
Actually.
We
are
not
really
bothered
about
what
exactly
lies
in
our
upstream,
but
we
are
actually
bothered
about
giving
a
seamless
experience,
so
people
can
just
connect
to
this
hyper
mesh
and
in
their
own
pace
of
development
in
their
own,
you
know
timeline,
they
can
actually
add
modif.
B
They
can
actually
adopt
istio
in
their
upstream
ecosystem.
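(This heterogeneity is what the ServiceEntry in each upstream folder captures: it registers a host that lives outside the mesh, whether an App Engine service, a VM or a bucket, so the rest of the Istio configuration can treat it like any other destination. A minimal sketch with a placeholder host.)

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: food-upstream
  namespace: hypermesh
spec:
  hosts:
  - food-upstream.example.com    # placeholder: App Engine, VM, bucket, anything DNS-resolvable
  location: MESH_EXTERNAL        # the upstream lives outside the mesh
  resolution: DNS
  ports:
  - number: 443
    name: https
    protocol: TLS
```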
B: Just to give you an understanding: if I just open this, this is a very basic example of a front-end application that is loading through HyperMesh. Give it a second... yeah.

B: This is a very basic example where it's loading a completely front-end application that has all the static contents. And if I show you another example of an application: this one actually contains a back-end, API-based setup. So it doesn't matter what sort of back end you have, whether it's a group of API endpoints or a front-end application; we have tried this out with Cloud Functions and storage buckets, and it actually works across the board.

B: So that's one of the key benefits of using Istio: it doesn't matter what type of upstreams you have, you use the same set of abstractions to reason about different flavors of infrastructure components, so the knowledge is very easily transferable across different components. That's it. Let me share the slides again.
A
Yeah
yeah,
sorry,
I'm
I'm
gonna
take
over
anyway
thanks
thanks.
Everyone
thanks
for
the
demo,
I
mean
what
iran
showed.
I
hope
every
everybody
it's
very
basic.
I
I
mean
for
for
obvious
reasons.
We
we
cannot
show
certain
other
parts
of
it,
but
we
just
tried
to
give
you
an
overview
of
like
where
what
we
are
trying
to
build
in
in,
like
you
know
like,
for
example,
we
just
want
to
create
it
in
a
way
that
it's
it's
extensible
from
there.
A
A
You
have
understood
that
it's
just
replacing
a
bunch
of
api
gateways
and
application
reverse
proxies
that
we
had
so
I
mean
it's
giving
a
benefit
of
consolidation
and,
like
you
know
like
centralized,
I
would
say
excess
and
everything,
but
what
are
other
things
that
we
discovered
along
the
way
when
we
started
building
this
tool
right?
Let's,
let's
let
us
walk
you
through
to
those
things
very
easily,
so
one
one
of
the
major
problem
that
we
had
I'll
try
to.
I
just
had
a
quick
question
for
artesia.
A
Okay,
sorry
because
I
I
think
like
we,
we
covered
so
many
things
in
one.
So
it's
the
content
is
a
bit
too
much.
Maybe
so
yeah.
C
I
mean
there's
some
people
who've
already
dropped
off,
but
it's
okay
go
on
we're,
recording
this
and
and
actually
the
the
people
who
have
had
to
left
to
leave.
They
said
they'll
check
the
recording
later
so
yeah
carry
on
and
we'll
record
it
and
yeah.
If
somebody
can't
stay
for
the
whole
presentation,
they
can
watch
the
recording.
D
Okay
and
we're
still
gonna
be
doing
the
q
a
session.
So
it's
fine,
okay,.
A
Cool
okay
I'll
be
quickly
covering
this.
The
problem
that
I
wanted
to
discuss
here
is
like,
whenever
you
have
all
when
we
said
that
all
of
these
all
application
up
streams
and
api
streams,
we
have,
how
are
they
actually
connected
from
the
different
api
gateways
and
the
you
know
the
reverse
proxies,
because
those
all
of
these
api
up
streams
or
application
upstreams,
they
have
different
different
subdomains
right
from
our
base
domain.
A
They
they
have
different
different
subdomains
and,
of
course,
if
I
have
to
make
sure
that
the
communication
from
reverse
proxy
or
different
gateway
layers
to
the
upstreams
has
to
be
secure,
we
had
to
provision.
You
know
we
had
to
get
certificates
for
each
of
these
subdomains
and
since
all
of
these
different
subdomains
are,
as
I
said,
of
course,
90
of
them
are
in
kubernetes,
but
a
lot
of
them
are
still
in
jee,
like
as
as
imran
mentioned
in
bucket
in
cloud
function.
A
A
With
this
setup,
we
we
figured
out
that
we
have
a
way
for
automating
this
entire
thing,
using
a
self-signed
start
process
where
we
can
still
have
encryption
in
between
the
high
permission
to
the
upstream
as
well
as
we
can
go
for
a
very
strong
self-signed
search.
We
can
have
the
ca
install
on
our
hyper
mesh,
and
that
way,
we
we
can
have
a
very
secure
communication
and
the
great
part
of
it
is
like
we
can
automate
how
the
certificates
will
be
placed
in
each
and
every
upstream
right.
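(On the Istio side, that corresponds to client-side TLS origination in the upstream's DestinationRule, pointing at the self-signed CA bundle. A sketch, with the host and the CA mount path assumed.)

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: food-upstream-tls
  namespace: hypermesh
spec:
  host: food-upstream.example.com          # placeholder upstream host
  trafficPolicy:
    tls:
      mode: SIMPLE                         # originate TLS towards the upstream
      caCertificates: /etc/certs/upstream-ca.pem   # assumed mount path of the self-signed CA bundle
      sni: food-upstream.example.com
```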
A
That's
one
of
the
benefit
that
we
we
saw
immediately
when
we
we
started
thinking
about
it
and
the
other
one
was.
This
is
an
interesting
problem
because,
let's
say
we
have
a
subdomain
which
is
for
a
particular
product,
let's
say
x,
dot
airasia.com,
that's
being
pointed
from
the
reverse
proxy
in
path
x.
Right
and
when
that
happen,
you
you
expect
in
whenever
you
are
talking
from
seo
perspective.
A
Whenever
I'm
searching
for
product
x
from
airasia,
I
should
be
seeing
erasure.com
x,
but
since
your
sub
domain,
which
is
x,
dot
aurelia.com,
is
also
public,
then
of
course
that
is
crawled
by
google.
So
whenever
you
search
actually
x,
dot
airasia.com
starts
popping
up
at
the
top,
not
thereasia.com
x
right.
It's
a
huge
problem.
If
you
see,
and
and
that's
where,
like
our
seo
team,
were
like
really
pissed
off
like
why
this
is
happening,
why
we
cannot
fix
it
with
this
solution
as
well.
We
saw
another
another
improvement
here.
A
Is
that
for
all
these
upstreams
that
we
have
with
with
proper
firewall
rules
in
place,
what
we
can
do
is
like
we
can
do
the
net
ip
white
listing.
We
have
multiple
net
ids
from
the
cluster
where
hyper
meshes
like
running.
So
if
we
can
wireless
those
net
ips,
so
the
only
part
that
is
allowed
is
from
hyper
mesh
to
those
sub
domains.
You
can
never
go
to
those
sub
domains
and
just
go
there
and
check
it
out,
so
the
crawling
is
turned
off
and
also
those
subdomains
gets.
A: Another thing: as you can see, we have a WAF layer and a CDN layer, and today we are using a paid WAF solution from a vendor. Then we started thinking about that: why don't we build one on top of some open-source technology? Why don't we build an in-house WAF that can protect us? Why don't we just give it a try?

A: Why don't we bring it into our HyperMesh in an extensible manner, like a plug-in: you add it, and then it starts working for everyone. The benefit would be that, right now, we pay for each and every domain that is protected by that WAF. So consolidate the N domains that we have today being protected by the WAF...

A: ...put it into the HyperMesh as our built-in plug-in, and you start getting the benefit of the WAF that we have built in-house. Our infosec team has done rigorous testing, and so far we have seen it can protect against almost 98 percent of the vulnerabilities that we see today.
A
So
it's
quite
a
progress
and
that's
something
we
are
exploring
that
we
we
want
to
bring
it
in,
to
be
very
honest,
that
waff
is
a
middleware
in
front
of
the
I
mean
inside
the
hyper
mesh,
so
just
to
give
an
idea
of
like
if,
if
you
guys
are
curious
like
how
we
are
doing
it,
we
are
actually
utilizing
envoy
filters
and
there
is
a
particular
of
course,
another
different
web
service,
all
together,
which
is
coming
together
getting
piped
in
right.
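(The talk doesn't show the filter itself. One plausible shape for "a separate WAF service piped in via an Envoy filter" is Envoy's external-authorization filter injected at the ingress gateway, so every request is checked against the WAF before being routed. All the names below are assumptions, not AirAsia's actual configuration.)

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: hypermesh-waf            # hypothetical
  namespace: istio-system
spec:
  workloadSelector:
    labels:
      istio: ingressgateway      # apply at the mesh's front door
  configPatches:
  - applyTo: HTTP_FILTER
    match:
      context: GATEWAY
      listener:
        filterChain:
          filter:
            name: envoy.filters.network.http_connection_manager
            subFilter:
              name: envoy.filters.http.router
    patch:
      operation: INSERT_BEFORE   # run the WAF check before routing
      value:
        name: envoy.filters.http.ext_authz
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.http.ext_authz.v3.ExtAuthz
          transport_api_version: V3
          grpc_service:
            envoy_grpc:
              # assumed cluster name for an in-mesh WAF service listening on port 9000
              cluster_name: outbound|9000||waf.hypermesh.svc.cluster.local
            timeout: 0.25s
```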
A
And
we,
if
we
are
in
the,
I
would
say,
internal
alpha,
because
it's
an
internal
product
where
in
in
which
state
we
are
it's
an
internal
alpha
and
we
are
planning
to
go
a
pilot
with
subset
of
sites
in
by
mid
of
august.
And
if
we
can
do
that,
you
can
see
that
we
have
reduced
another
layer
and
what
is
the
benefit
of
this
another
layer?
A
Reduction
of
this
another
layer
is
that
previously
the
communication
from
cdn
and
wav
had
to
be
secure
as
well
right
that
had
to
be
encrypted,
so
we
had
to
place
our
certificates
in
rough
layers
as
well.
Now,
when
it
comes
like
this
again
for
cdn
or
for
dns,
or
whichever
the
forefront
is
actually
the
hyper
mesh
ip,
so
hyper
mesh
domain
right
all
the
domains
that
we
have,
that
we're
gonna
on
board
with
hyper
mesh.
So
we
can
just
put
all
those
certificates
there.
A
One
layer,
one
layer
of
maintenance
and
everything
is
sorted.
We
have
one
less
layer
and
just
another
third-party
dependency
is
only
on
the
cdn
we,
which
we
don't
want
to
get
rid
of.
To
be
very
honest
right.
It
remains
there.
We
cannot
build
it
on
in-house,
but
this
is
something
that
we
want
to.
We
we
are
exploring.
We
have
seen
quite
a
good
success
here,
so
I
think
we
we
are
going
to
give
it
a
try
for
like
a
subset
of
our
systems.
A
Very
soon,
another
thing
that
we
are
exploring
is
extending
how
far
we
can
extend
this
hyper
mesh
right,
one
of
the
one
of
the
key
requests
that
we
have
had
quite
a
lot
of
time
from
our
like.
You
know,
all
of
our
developers
and
all
of
our
different
teams
that
they
have
different
different
api
gateways
or
even
in
the
back
end,
the
the
actual
upstream
itself.
A
They
have
rate
limiting
implemented,
and
it's
it's
segregated,
different,
different,
logical
portions,
someone's
great
limiting
is
depending
on
certain
you
know,
memory
store,
redis
and
whatnot,
failing
lots
of
components
to
take
care
of
right.
So
that's
where
we
we
thought
about
since
the
way
we
we
built
this
wav
component
and
we
plugged
it
in.
Why
don't
we
build?
A
We
use
a
rate
limiting
engine
as
well,
and
we
plug
it
in
in
the
system
using
the
same
way,
approach
same
envoy,
filter
and
everything
and
again
this
was
a
poc
we
started
and
we
we
had
a
quite
a
great
success.
We
we
saw
that
we
can
actually
define
rate
limiting
now
by
different
upstream
different
part
based
setup.
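(Whatever engine sits behind it, the wiring would again be an EnvoyFilter. As an illustration of the approach, here is Envoy's built-in local rate limiter injected at the gateway; a sketch, not the PoC's actual engine or numbers.)

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: hypermesh-ratelimit      # hypothetical
  namespace: istio-system
spec:
  workloadSelector:
    labels:
      istio: ingressgateway
  configPatches:
  - applyTo: HTTP_FILTER
    match:
      context: GATEWAY
      listener:
        filterChain:
          filter:
            name: envoy.filters.network.http_connection_manager
    patch:
      operation: INSERT_BEFORE
      value:
        name: envoy.filters.http.local_ratelimit
        typed_config:
          "@type": type.googleapis.com/udpa.type.v1.TypedStruct
          type_url: type.googleapis.com/envoy.extensions.filters.http.local_ratelimit.v3.LocalRateLimit
          value:
            stat_prefix: http_local_rate_limiter
            token_bucket:            # illustrative numbers: 100 requests per second
              max_tokens: 100
              tokens_per_fill: 100
              fill_interval: 1s
            filter_enabled:
              runtime_key: local_rate_limit_enabled
              default_value: { numerator: 100, denominator: HUNDRED }
            filter_enforced:
              runtime_key: local_rate_limit_enforced
              default_value: { numerator: 100, denominator: HUNDRED }
```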
A
So
we
eventually
want
to
extend
the
support
to
our
all
our
dev
teams
and
I'm
sure
they're
gonna
love
it
they're
super
excited
already
about
it,
and
this
is
something
that
we
are
also
thinking
about
to
bring
in
right.
A
So
yeah
I
mean
this
is
this
is
so
far
like
where
hyper
mesh
is
going
where
we
are
at
is
we
showed
you
that
we
just
replaced
one
layer
of
the
entire
journey?
Now
we
are
actually
consuming
more
and
more
layers.
I
think
the
the
obvious
question
should
be.
We
are
introducing
a
single
point
of
failure,
just
to
assure
you
that,
since
we
are
building
it
as
the
front
layer
of
our
all
the
systems,
we
have
already
taken
that
into
consideration.
A
We
have
thought
about
like
how
we
will
make
it
fault,
tolerant,
having
more
high
availability,
having
a
diff
different
failover
setup,
ready
and
having
a
dear
setup
ready
as
well.
So
those
are
in
the
pipeline
of
like
you
know,
plan,
but
we
are
still
very
young.
We
are
just
rolling
it
out
in
pilots,
so
yeah
we
are
planning
to
do
that.
A
The
obvious
part
of
the
entire
system,
of
course,
at
the
beginning
we
said
we
want
to
bring
in
more
control,
so
we
want
to
see
the
drawn
and
we
want
to.
We
want
to
bring
in
more
monitoring
and
alerts.
A
So
here's
the
fun
part,
if
you,
if
you
look
at
hypermesh
when
you
are
looking
at
the
steel
steel
itself,
is
whenever
is
being
used
as
a
high
service
mesh.
It's
actually
a
service
measure
of
internal
services
and
what
we
built
here
is
basically
an
external
service
mesh
right.
All
the
different
nodes
that
you
have
here
are
actually
external
services,
so
in
our
kelly
dashboard
we
can
see
which
which
services
are
down,
whichever
external
service
is
going
down
or
something
traffic
is
like
this
percentage.
A
A
How
how
do
I
really
look
at
that?
And
how
do
I
really
get
that
data
to
also
know
why
that
node
is
failing?
Interestingly,
we
have
another
project
on
going.
Maybe
we
can
get
another
chance
and
some
other
talk
where
we
can
explain
that
as
part
of
which
project
we
are
able
to
type
in
that
data
of
different
different
external
loads,
we
have
whether
it's
an
internal
service
mesh,
whether
it's
using
a
steel
or
not
using
a
different
gateway,
different
ingress.
A
We
don't
care,
we
can
pipe
in
that
data
and
that
data
is
coming
in
in
a
central
place.
Already
we
have
that
big
data
lake
set
up
it's
coming
in.
What
we
plan
to
go
is
that
we
we
tap
on
that
data,
because
that
data
is
coming
in.
We
tap
on
that
data
using
a
graph
on
a
dashboard,
and
we
start
setting
up
those
alerting
for
all
those
different
data
bits
and
points
that
we
have.
A
We
are
very
hopeful
about
it
because
we
we
have
already
built
the
data
lake.
We
have
seen
the
data
is
coming
in.
It's
just
a
matter
of
time
like
we,
we
build
those
custom
dashboards
and
we
start
like
setting
up
those
alerting.
So
our
hope
is
eventually
that
gives
us
the
entire
picture
of
the
entire
ecosystem
that
we
own
as
an
organization
yeah.
So
that's
that's
the
monitoring
and
alerting
part
of
it
how
we
are
planning
it.
This
is
just
the
phase.
One!
That's
going
on
the
phase.
A
Two
is
the
harder
part,
as
you
can
understand,
in
the
phase
one,
we
are
doing
everything
we
are
setting
it
up
and
everything
the
phase
two
is
actually
the
onboarding
part
of
the
different
dev
teams
so
that
they
can
come
in
they
on
board.
A: Apart from that, what we are also exploring is pushing people further. All of these upstream nodes that we called external nodes are also largely in Kubernetes, and a lot of people today are using GKE Ingress or the NGINX controller there.

A: So eventually we want to push these people to use Istio even in those clusters, so that we have only one choice of stack everywhere. And eventually we are also exploring ideas like trying a primary-remote setup, so that if we have Istio in the different external nodes, we can see how we can really plug this data together to get the actual connected observability from a central place, which is HyperMesh's Istio.

A: Again, this is in the idea phase; we're just ideating and seeing what we can do there. There are, of course, certain limitations in terms of how these things are implemented in Istio and all the different other tools that we are exploring, but we are getting there, and we are really, really hopeful about it.