From YouTube: Kuma Service Mesh And Backstage.io at American Airlines
Description
Karl Haworth from American Airlines explains how his team used Backstage.io to automate the developer experience and the Kuma service mesh to connect their apps between regions.
Try Kuma: https://kuma.io/
Watch the full video: https://konghq.com/videos/kuma-and-backstage
For more Kong demos and use cases, watch our Kong Summit 2021 videos: https://konghq.com/kong-summit/2021-videos
#ServiceMesh #Developer #KumaMesh
Last year we started the Developer Experience product at American Airlines as we transitioned into the latter half of 2020 and into 2021. We wanted to tackle Kubernetes app deployments. Kubernetes is hard, and cloud is hard. We aimed to compartmentalize and make it easy for users to always do the right things, no matter how difficult those tasks were, in a matter of minutes. Through our Kubernetes journey, we created reproducible patterns for application teams to use to make things even easier. Being a large company, we have a lot of opinions.
We built our developer experience platform, Runway, on Spotify's Backstage platform. Backstage brings a solid base for an enterprise framework that we could build upon, and you can see their logo on the right next to the Kuma logo. Developer Experience at American Airlines loves open source technology within our technology and transformation space. We look to the open source marketplace to help shape our engineering standards.
The ability to run multiple meshes with explicit traffic rules for sensitive data, plus the Kuma gateway, will allow us to ensure API security on our applications, and Kuma offers cross-region and cross-provider support as well. Developer Experience at American Airlines and Kuma seem to share similar goals: we want to make things easy with minimal upfront configuration.
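As a concrete illustration of the kind of explicit traffic rule being described, here is a minimal TrafficPermission policy in Kuma's 1.x Kubernetes format. The service names are hypothetical placeholders, not anything from this talk:

```yaml
# Sketch: allow only the frontend to call the payments service.
# Service tag values are illustrative, not American Airlines' actual services.
apiVersion: kuma.io/v1alpha1
kind: TrafficPermission
mesh: default
metadata:
  name: allow-frontend-to-payments
spec:
  sources:
    - match:
        kuma.io/service: frontend_default_svc_80
  destinations:
    - match:
        kuma.io/service: payments_default_svc_80
```

With policies like this scoped per mesh, traffic to sensitive services is denied unless explicitly permitted.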
This is a high-level overview of what our infrastructure looks like; it's simplified, of course. We start with our security and content delivery providers and our global traffic managers. We have a Kubernetes provider with multiple clusters in multiple regions, and we have Kuma installed on all of our clusters with a Kuma global control plane. We deploy Kuma using GitHub Actions. GitHub Actions gives us an easy way to deploy Kuma on both global clusters and our remote zones. We also use GitHub issues and pull request comments to document our clusters' lifecycles.
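A workflow for this could be sketched roughly as follows. This is a hypothetical workflow, not American Airlines' actual automation; the zone name and secret are placeholders, and it assumes `kumactl` and `kubectl` are available on the runner with cluster credentials already configured:

```yaml
# Illustrative sketch of deploying Kuma via GitHub Actions (Kuma 1.x multizone).
name: deploy-kuma
on:
  workflow_dispatch:
jobs:
  global-control-plane:
    runs-on: ubuntu-latest
    steps:
      - name: Install the global control plane
        run: kumactl install control-plane --mode=global | kubectl apply -f -
  remote-zone:
    needs: global-control-plane
    runs-on: ubuntu-latest
    steps:
      - name: Join a remote zone to the global control plane
        run: |
          kumactl install control-plane \
            --mode=zone \
            --zone=us-east \
            --kds-global-address grpcs://${{ secrets.KUMA_GLOBAL_ADDRESS }}:5685 \
            | kubectl apply -f -
```

The same pattern repeats per cluster, which is why driving it from CI keeps global and zone installs consistent.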
We're looking to make Developer Experience at American Airlines a delightful experience. To add on to cluster creation and bootstrapping, we have also simplified our global traffic management strategy. In the past, internet-facing applications had a lot of hurdles to jump through; we've simplified the process while keeping proper security in place. Anytime a new ingress is found on our clusters, our GTM provider is notified about the changes as apps change clusters.
You can see our GTM sync service application being reached out to by our custom operator for GTM CRUD, and that GTM sync service can exist on any cluster; it does not have to exist side by side with our operator. We've used Runway as a base to encourage InnerSource at American Airlines. The platform has made it easy, even for non-developers at American Airlines, to get started, to contribute, and to build their team's service requests into the platform. Our API management team built management capabilities for APIs right within Runway.
Our file management team has built in ways to view information on our file transfer jobs, and they're continuing to push forward with job creation. Our compute-as-a-service group has built in ways to create and view information on virtual machines. Those are just some of the highlights of our InnerSource contributions.
A large portion of what we've tried to achieve is around our create-app flow. This is how applications join our clusters and our mesh in a friendly way. We wanted to make it easy for application developers at American Airlines to launch their apps into our Kubernetes environment and not worry about the underlying infrastructure: click and deploy. We have options for new app creation, as well as deploying existing apps into our mesh, and we have big plans for extending these blueprints.
We have a lot of abstraction layers that help us out in Developer Experience. Runway, which I just showed, helps our app teams onboard easily to our clusters and into our mesh, and it gives us a user interface to provide our abstractions. The Runway Kubernetes operator takes in a small subset of YAML and expands it to create Kubernetes resources.
We shot for "describe your app in 10 lines of YAML or less," and we made it pretty close.
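A resource in that spirit might look like the following. The kind matches the custom WebApp kind shown later in the demo, but the apiVersion, field names, and values here are illustrative guesses; the real Runway operator schema is not public:

```yaml
# Hypothetical sketch of a ~10-line app description the operator could expand.
apiVersion: runway.example.com/v1
kind: WebApp
metadata:
  name: my-app
spec:
  image: registry.example.com/my-app:1.0.0
  port: 8080
  replicas: 2
  ingress: true
```

The operator then owns the expansion of a spec like this into full Kubernetes resources, so the platform team can change what gets generated without app teams touching their files.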
This operator is also awesome because we can change how app teams deploy apps without teams having to be fully aware. When we needed to add Kuma locality-aware routing labels, we updated the operator, and all apps deployed in our ecosystem were gracefully updated.
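For context, locality-aware routing in Kuma 1.x is enabled on the Mesh object itself; the sketch below shows that mesh-level switch, which works together with the zone tags the operator applies to workloads. This is a generic Kuma example, not the team's actual mesh definition:

```yaml
# Sketch: enable locality-aware load balancing so traffic prefers
# service instances in the same Kuma zone before crossing regions.
apiVersion: kuma.io/v1alpha1
kind: Mesh
metadata:
  name: default
spec:
  routing:
    localityAwareLoadBalancing: true
```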
Once this video kicks off, we'll see logging into the Runway platform. Along with the Runway homepage, we have an option to create an app right in the sidebar, and once we authenticate with GitHub (it'll take just a moment), we'll see certified blueprints show up with a gold star. Those are blueprints that have been approved by our group.
We also have community blueprints, with the toggle on top of the last screen. Once a team fills in their application short name, description, and some other items, we'll go ahead and select the cluster. This screen is pretty important because when a team selects their short name, we're automatically applying tags, labels, and annotations in the background based on that selection. We'll go ahead, trigger our first release, and create an AKS public ingress; we're selecting our cluster and namespace.
Looking at GitHub, we can go ahead and clone the code locally, make those updates, and push the updates back up to GitHub. We'll also check that our initial Docker image build is being created. This is going to be pushed up to our private registry using one of those internalized GitHub Actions. Jumping into the Runway catalog right now, we can see basic information. This page was loaded before the app fully had time to deploy and before the image got produced.
So when we refreshed the page, we saw a little bit more information, and we see a URL. Clicking that URL, you'll see the basic app that says "change me." That was a pretty quick demo, but let's go over everything that was created during it. While we did do a little bit of cooking-show magic to speed things up, a full deployment took only six minutes overall. That included Docker image build time, security scanning, Kuma resources being created, and the global traffic management property being created out in our traffic manager. Argo CD deployed the app, which is what you can see here; this is an Argo CD image. If we jump into the diagram, we'll see that we have a custom-defined kind of WebApp. That WebApp has been expanded into normal Kubernetes resources: services, external services, a deployment, a horizontal pod autoscaler, and an ingress.
If we had secrets or config maps defined, you'd see those as well. On the previous screen, we all saw the Kubernetes resources that got created from the Runway platform using the Runway operator. All of those resources were produced from the YAML above, just 10 lines of YAML. That's very minimal compared to what it normally takes, and on the next screen we'll see what it's been reduced from.
I pulled a random app that got deployed through Runway recently. You'll see information on the screen about that app. They added some environment variables, some secrets, and some autoscaling settings to their lines of YAML, so they went a little bit bigger than the 10: about 45 lines of YAML with all their environment variables and their other custom settings. However, the app is the exact same thing we just built in the demo; it's a Python FastAPI app.
The team spent a little bit more time working on it and iterating through it, but it also took just six minutes to deploy, to build, and to have their updated image running. We went from 45 lines of YAML in the example I found to over 1,400 lines of YAML. We're really trying to ensure our teams don't have to be presented with tons of infrastructure complexity, but we'll allow that complexity when necessary.
One team in particular we've partnered with has a pretty cool new app coming out to enhance a customer's onboarding experience. The team, who typically deploy many microservices, has seen a reduction in new app deployment time from days to hours. Outside teams that typically have less infrastructure ready can see a reduction in time from months to hours. Yes, I did say months. The team has also seen an increase in app portability and less of a reliance on our cloud partners.
That's a huge plus when we're trying to support the world's largest airline and utilize multiple cloud providers for our applications. Typically, teams in our organization need to be Kubernetes experts to launch apps to clusters, but not this team. Due to the amazing efforts by our contributing members of Developer Experience, the team mastered containerization to continue with the new infrastructure, and that's it; they really don't need to know much about Kubernetes.
We also gave teams a detailed view into the service mesh using the Kuma tracing integration. Our customers can trace traffic from the cluster ingresses through the mesh to the app. This gives teams a much deeper insight into what's going on throughout the entire lifespan of a transaction, without any blind spots. That's another batteries-included item from Kuma, and Kuma made it easy. Throughout all of this, we've been able to create abstractions to gain benefits without users being overwhelmed with yet another technology layer.
Our blueprints provide a great starting point for teams, with batteries included. The operator provides the ability to hook apps into clusters and into the service mesh without teams ever realizing it, as well as reducing the number of YAML lines required to launch an app into Kubernetes. Our GitHub Actions automation provides the ability for our team to automate Kuma policies and infrastructure creation.
The global traffic management service we built makes it easy to update ingress pointers to access our apps within our mesh. The Kubernetes infrastructure makes it easy for apps to be portable between our cloud providers, and Kuma makes it super easy to control traffic permissions, routing, and gateways. Kuma is easy to set up and easy to jump into with a service mesh. It's easy to span multiple regions and service providers without manual steps. Kuma makes it easy to communicate with services wherever they reside, and with Kuma we didn't need to seek outside components.
We simply used Kuma. As we look towards the future, we'd like to surface more information about the Kuma service mesh, Kuma policies, and global traffic management to our customers right through Runway, so that teams can easily find the information they're looking for. We'd also like to find ways to contribute back to open source with some of these efforts, maybe through a Backstage plug-in. We're looking at multiple meshes for different purposes.