From YouTube: Kuma Community Call - February 8, 2022
Description
Kuma hosts official monthly community calls where users and contributors can discuss any topic and demonstrate use cases. Interested? You can register for the next Community Call: https://bit.ly/3A46EdD
This month we welcome a special guest Karl Haworth from American Airlines to share how Kuma played an integral role in launching their new developer experience platform.
A
All right, sounds good, thank you. So I wrote this blog post about developer experience at American Airlines and how we've used the Kuma service mesh. I'm just going to pull up one other screen. A lot of our strategy has been around abstracting out the hard parts of Kubernetes, compartmentalizing the complexities, and making it really easy to do the right thing through reproducible patterns for our applications.
A
We started really early on with the framework and, unfortunately, developed lots of technical debt that we're now getting rid of. But we really like seeing the direction that the Backstage framework is going, and we're keeping up with the latest and greatest now that some of the dust has settled. As early adopters, though, we had to figure out our own ways to do a whole lot of things, so we needed a whole lot of imagination right from the start.
A
We wanted to make it really easy to, as I mentioned, deploy applications and make things easy from the start. So coming into our create-application flow, we have a lot of different blueprints for our application teams to use. If I jump on screen, we've got application templates that deploy specifically to Kubernetes, based on the tag that I've selected.
A
All of our services that are launched into the Kubernetes platform that we have for our shared clusters get added to our service mesh. We try really hard, again, to abstract that detail away from our customers. We don't want our application teams to have to worry about infrastructure, connectivity, networking, those sorts of things. We want them to be able to just focus on their application code and whatever they might have to connect to. So through Runway, we can create new components.
A
Once teams do that, they go ahead and click next and generate their application code out on GitHub. We have used the technology inside Backstage, again as another abstraction layer, to do continuous deployment through the Argo CD application and platform. Argo CD will watch the repository files and then go ahead and update the Kubernetes deployment based on those files.
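For reference, this kind of repository watch is typically configured with an Argo CD Application resource along these lines. The repo URL, path, and names below are hypothetical placeholders, not the actual Runway configuration:

```yaml
# Sketch of an Argo CD Application that watches a repo path and keeps the
# cluster in sync with it; all names and URLs here are illustrative.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: test-application
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/test-application  # hypothetical repo
    targetRevision: HEAD
    path: kubernetes/dev          # directory of manifests to watch
  destination:
    server: https://kubernetes.default.svc
    namespace: test-application
  syncPolicy:
    automated: {}                 # auto-apply changes when the repo updates
```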
A
If I continue on a little bit, I'll show one of the files from the repository that was created. Here's a test application I created yesterday, with the kubernetes directory, an environment folder, and what we're calling our web app YAML, another abstraction layer mentioned in the blog post. With the web app YAML, we can simplify what is required to launch a Kubernetes application. We don't need deployment YAMLs or service YAMLs or ingresses or anything like that. In order to make Kuma work, we need to introduce the external service name.
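As a rough illustration only: a simplified "web app YAML" custom resource along these lines could stand in for the usual deployment, service, and ingress manifests. Every field name here is a hypothetical sketch, not the actual Runway schema:

```yaml
# Hypothetical sketch of a Runway-style "web app" custom resource; the
# API group and all field names are illustrative assumptions.
apiVersion: runway.example.com/v1
kind: WebApp
metadata:
  name: test-application
spec:
  image: ghcr.io/example/test-application:latest  # container image to run
  port: 8080                                      # port the app listens on
  replicas: 2                                     # desired pod count
  externalServiceName: test-application           # name registered with Kuma
```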
A
So again, our custom operator, with what we call our web app YAML, takes care of a lot of the complexities behind the scenes for Kubernetes itself. Kuma's not complex when it comes to setting up the external service, but it's one more step that we didn't want teams to have to worry about.
A
We've also set up, through our web app YAML and our Kubernetes operator, the Runway operator, standard traffic permission policies, and this one has an API permission policy applied. So through the Kuma service mesh and the traffic permissions, along with tags or labels on the Kubernetes resource, we can restrict which applications and services can communicate with this test application.
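For context, a Kuma TrafficPermission policy on Kubernetes takes roughly the following shape; the service names here are hypothetical examples, not the actual policy shown on screen:

```yaml
# Kuma (1.x) TrafficPermission: only the listed sources may call the
# listed destinations. Service tag values below are illustrative.
apiVersion: kuma.io/v1alpha1
kind: TrafficPermission
mesh: default
metadata:
  name: allow-api-gateway-to-test-app
spec:
  sources:
    - match:
        kuma.io/service: api-gateway_default_svc_8080      # hypothetical caller
  destinations:
    - match:
        kuma.io/service: test-application_default_svc_8080 # hypothetical callee
```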
A
If I continue on to the next page: this web app YAML that we're seeing here is a live application out on my Argo CD instance right now. You can see that the web app YAML created an external service. It looks like I haven't refreshed recently, but there's a normal service, an external service, a deployment, a horizontal pod autoscaler, an ingress, and then all my pods associated with that.
A
So for the service mesh, we can go from the ingress to the external service name and then continue hopping through the service mesh as needed, abstracting those layers out. When I look at the external service, we can see the .mesh URL, which works with the Kuma service mesh, and that's all automatically generated.
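For reference, a Kuma ExternalService is defined roughly as below (the address and names are hypothetical); in-mesh clients can then reach it through the generated `.mesh` address, e.g. `http://test-application.mesh`:

```yaml
# Kuma ExternalService: registers a destination outside the mesh under a
# kuma.io/service tag, addressable in-mesh via <service>.mesh.
apiVersion: kuma.io/v1alpha1
kind: ExternalService
mesh: default
metadata:
  name: test-application-external
spec:
  tags:
    kuma.io/service: test-application   # becomes test-application.mesh
    kuma.io/protocol: http
  networking:
    address: test-application.example.com:443  # hypothetical upstream address
```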
A
So we can see the URL that's available for our developers to start using pretty quickly, as well as the mesh URL and information on how and what to use that URL for. The whole process, from create to creating the Kubernetes resources, getting it registered into the service mesh, and building the Docker image, at least for this Python FastAPI application that we're using as a sample, takes about six minutes to get this URL fully responding out on the internet, protected by our security providers and CDN providers as well.
A
As I continue on, this is the application that's live out there. Again, it's an example, but it has our continuous deployment mechanisms built in; it has unit tests and other functionality built into the GitHub repo, and different workflows to test code and make sure that we're doing the best we can for our applications. For the whole service mesh installation, we're doing everything through GitHub Actions, with our clusters and with the Kuma installation for our global and our remote zones.
A
Because of all of the GitHub Actions work that we've done, along with Terraform and Helm charts and everything else, we can set up an entirely new cluster environment with a service mesh running, all through our GitHub Actions, in probably about 30 to 40 minutes, depending on the cloud provider and their Kubernetes startup time.
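As a minimal sketch of what such an automated install might look like, assuming the public Kuma Helm chart; this is illustrative, not the actual American Airlines workflow:

```yaml
# Hypothetical GitHub Actions job: install the Kuma control plane with
# Helm in global mode. Chart repo and values follow the public Kuma
# Helm chart; workflow and release names are illustrative.
name: install-kuma
on: workflow_dispatch
jobs:
  install:
    runs-on: ubuntu-latest
    steps:
      - uses: azure/setup-helm@v3
      - name: Install Kuma (global control plane)
        run: |
          helm repo add kuma https://kumahq.github.io/charts
          helm repo update
          helm install kuma kuma/kuma \
            --namespace kuma-system --create-namespace \
            --set controlPlane.mode=global
```

A second job with `--set controlPlane.mode=zone` would cover the remote zones.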
A
We've got, I think, a pretty good system to automate things and make them really easy for the end user, so they don't have to think about the complexities that come with Kubernetes and then have to learn service mesh technology on top of it as well.
A
We plan in the future to continue enhancing this catalog screen with more information about all the components that are involved in delivering an application out to the internet, showing a full route, including the service mesh information and the policies as well, and then being able to abstract some of that policy application through Runway, our developer platform here.
A
So those are the plans that we have going forward: continuing to make the service mesh really easy for all of our customers to use. We evaluated a lot of different service meshes, pretty much everything under the CNCF landscape, and Kuma just made it really easy to install things, get going, and do things that could otherwise be harder, where you might have to add additional components with other service mesh technologies.
A
Kuma just offered it right out of the box, batteries included, so we were able to get going, configure it, use some of the more advanced, well, maybe intermediate capabilities, then move into the more advanced capabilities, and do things really easily, quickly, and iteratively as well.
A
So I think that pretty much sums up what was in my blog post. We cover a lot of the abstraction layers and offer some more details and images through the blog post, but as a whole, with the Kuma service mesh and the Runway platform, the customer experience is pretty straightforward and easy.
B
Hey, thank you, Karl. This is Marco, fantastic presentation. I have a few questions for you, and I'll start with the first one: what would you like to see in the project?
A
That's difficult. I think, I guess: documentation updates and more examples and things like that. We had to work through some of that ourselves. Other than that, continuing to promote it through the CNCF landscape and getting more contributors and more people involved, but as far as specific functionality and the feature set go...
A
I know that we utilize a different API management platform than you guys might use, and I know that they have an Envoy plug-in, so maybe more information around how we can incorporate those sorts of plug-ins into the Envoy instance that Kuma's running could be helpful. But overall, the traffic policies and everything that's available have been super helpful, and digging into the Slack channels has been really helpful as well for getting help and support there. So thank you; not a whole lot of enhancements that I can mention.
B
Okay, great. Well, we also have a couple of questions in the chat, some from Charlie. We have: how do you map your traffic permission short names to actual traffic permissions in Kuma, and do they end up being tags in your services?
A
So that is how we're doing it. When I showed this standard traffic permission here, we're applying labels on the deployment itself. In this case we use the Apigee API management platform and the Apigee microgateways, so when we go ahead and apply this traffic permission, we use labels on the deployment to reroute traffic in specific ways and make sure it's going through that Apigee microgateway.
B
A
So we do have the default retries in there. We do want to investigate the circuit breakers; we haven't had time to look into those yet. And we are using the health checks as well.
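For reference, a Kuma HealthCheck policy takes roughly this shape; the service names and thresholds below are hypothetical examples:

```yaml
# Kuma (1.x) HealthCheck: actively probe destination dataplanes and stop
# routing to unhealthy ones. All values here are illustrative.
apiVersion: kuma.io/v1alpha1
kind: HealthCheck
mesh: default
metadata:
  name: test-app-health
spec:
  sources:
    - match:
        kuma.io/service: '*'                               # any caller
  destinations:
    - match:
        kuma.io/service: test-application_default_svc_8080 # hypothetical name
  conf:
    interval: 10s          # probe every 10 seconds
    timeout: 2s
    unhealthyThreshold: 3  # failures before marking unhealthy
    healthyThreshold: 1    # successes before marking healthy again
```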
B
Okay. If anybody else has any questions, feel free to unmute or drop them in the chat.
B
Yeah, maybe just... I will let you finish and then I will ask my question. Okay.
A
And just for order of magnitude on data planes: this platform, I would say, is still kind of getting up and going. It moved from the experimental phase to more of the adopt phase, and we're working with our customers to make sure that we can service all the requests and everything. So some applications that are out there are just kind of test or demo applications, seeing what the environment is capable of, and then there are others that we have out there.
A
We actually have, like at our gates, agents that are using them on their iPhones and all their mobile devices and things like that. So it varies as far as what kinds of applications we house on this platform.
A
Right now it's a single mesh. We're investigating adding additional meshes for different capabilities and different reasons. Right now we're heading down the PCI path and looking to see what our strategy is going to be around PCI applications, whether or not they're going to use the same mesh or different meshes, and then just continuing on through that path. But I think we've only skimmed the surface of what Kuma can provide and do for us, and we're expanding as customer requests come in for different capabilities.
B
Okay, and what is PCI? What was the abbreviation?
A
PCI, and I'm drawing a blank on what it stands for, but it's about handling credit card data and customer information and things like that. Being an airline, of course, we run lots of different transactions with secure data, and there are different regulations that we're required to follow in handling that data.
B
Okay. And when it comes to policies right now, do you only use traffic permissions, or do you also use, or maybe expose, any other policies to users?
A
Exposing policies to users: the only one is the traffic permissions right now. We haven't had any other use cases for exposing any of the other policies to users within American Airlines. We've had a lot of discussion about service meshes in general and what the capabilities are, and it seems to be a hot topic, but it doesn't seem like a lot of our users have more advanced needs for different policies and things like that. So as those discussions come up, we'll continue expanding.
A
As for what we're using for Kuma specifically, or just in general, I guess, for the hardware: we're just using AKS clusters in Azure, so we're using pretty beefy nodes. I really can't give you the specs right now, though. We have different node pools based on system workloads, user workloads, different types of workloads, so we have different sizing techniques for that.
A
And we autoscale as well, so we've done lots of load testing for the different workload cases to minimize the autoscaling. But again, it's...
A
But with the service mesh, and with the different abstraction layers that we've chosen as well, specifically our Rancher platform that we're using to abstract out the user management, our goal is to be able to expand to other cloud providers much more easily, then use the Kuma service mesh to help us out with that, and then use the locality-aware load balancing. That's our whole plan.
A
When we moved into Azure, there was a whole lot that you had to set up, specifically from a user permission perspective. But by using the Runway platform, we've started moving some of those user permissions, at least for Kubernetes and application deployments, over to the Rancher platform as our management layer, so that when we want to move to another cloud provider, we can simply add those clusters. We can then use our Argo CD platform to onboard the applications.
A
We don't really have to deal with user access management in the different cloud providers, and we can just move much more quickly. Kuma helps with that as well, because we can find our applications wherever they might be.
A
We do have other items that are mentioned in our blog over on Kong, such as our Runway Kubernetes operator, which helps us with our GTM provider: it will relay any new ingresses that come up on our clusters out to our GTM provider, and it doesn't run on every single cluster; it just has an instance or two within our service mesh.
A
So we also save resources there by doing minimal deployments in different localities and then having the service mesh help us with that discoverability. It also makes sure, because not all of our services are available through the internet, such as our GTM sync services, that we're able to route that traffic easily through the mesh. And then the American Airlines private network is a huge benefit as well.
B
A
We're at 1.3.1 right now. I know with 1.4.0 there's the new authentication method, or the authentication changes, so we are going to continue to stay current and upgrade. We just haven't been upgraded to 1.4.0 yet, and I thought 1.4.1 was out, so hopefully we're not too far behind.
A
Oh sorry, go ahead and finish your... I was just going to say, we do use Argo CD when possible to keep applications in sync. We haven't looked at using that for Kuma quite yet, but that's probably going to be something that we explore. I don't know how that's going to interact with upgrades and the fact that our Argo CD is polling everyone's YAML, but it should be something interesting to dig into. Sorry, John.
B
No worries. It's not specifically a mesh question, but I mean, this is a fantastic use case and a great presentation. I'm really interested in general in abstraction layers on top of Kubernetes. So I'm curious: when did you decide to make the leap to writing your own operator and creating your own CRDs? What were the factors that went into that decision? Did you already have the technical expertise to do that, or was it really driven by the user experience you wanted to provide?
A
It was really around the user experience we wanted to provide. It was around the fact that we had some new management come in, some key players in the DevOps space, and one of the challenges was being able to describe your application in 10 lines of YAML or less, and I don't think we quite got there.
A
If you take out the environment variables and the standard traffic permission policy, and maybe a few other things, we could have gotten quite close to that. But the whole goal was to make it really easy for teams to describe their apps and get them deployed. That is where the custom operator came into play, and that's where we already had the Runway platform.
A
Last time I checked, at least, there's a tag that you're supposed to apply to the deployment that says what region you're in, so that you can do locality-aware load balancing. With the operator, we read the node labels to find out what region we're in, and then we apply that to the deployment, so Kuma knows where all of our applications are. We were able to do that without teams having to go fill in that information.
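A minimal sketch of the idea, assuming the operator stamps a region label onto the pod template (on Kubernetes, Kuma turns pod labels into dataplane tags, which locality-aware load balancing can then use); names and values are illustrative:

```yaml
# Sketch: an operator could read a node label such as
# topology.kubernetes.io/region and copy it onto the pod template below,
# so teams never fill it in themselves. All names are hypothetical.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-application
spec:
  replicas: 2
  selector:
    matchLabels:
      app: test-application
  template:
    metadata:
      labels:
        app: test-application
        kuma.io/region: us-east   # filled in by the operator from node labels
    spec:
      containers:
        - name: app
          image: ghcr.io/example/test-application:latest  # hypothetical image
```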
A
We were able to do that behind the scenes, without them really knowing about it or having to worry about it, using our operator and using Argo CD. We wanted to take away some of the hassles of having tokens and credentials in GitHub repos and things like that to deploy through Actions. And when I say secrets in repos, of course I mean in the GitHub repo secrets, not in code. So from the start, with all the abstraction layers, it was all about just trying to make a really easy customer experience.
A
With Argo CD, at least, teams are able to supply straight Kubernetes YAML if they wanted to. I don't know if there are any teams that have done that yet; our operator deployment has kept up with customer needs and requests pretty well. But overall, they could do that.
A
We do have security mechanisms in place at various levels, though, to stop teams from doing the kinds of things that they shouldn't. We don't run Calico on any of our nodes, and one of the safety points to ensure that we're able to stop traffic if needed is to ensure all of our pods have a Kuma sidecar. So we don't actually allow any of our applications in our cluster to run without the sidecars.
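For reference, Kuma's automatic sidecar injection in this era is enabled per namespace with an annotation like the one below (the namespace name is illustrative); a separate policy engine can then reject any pod that somehow lacks the sidecar:

```yaml
# Kuma (1.x) injects its sidecar into every pod created in a namespace
# carrying this annotation. Namespace name is a hypothetical example.
apiVersion: v1
kind: Namespace
metadata:
  name: team-apps
  annotations:
    kuma.io/sidecar-injection: enabled
```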
A
We have policies implemented with a policy engine to ensure that. So there are different kinds of restrictions we implement to make sure certain components are used and that we can do things like traffic permission policies to stop traffic. But otherwise, we're open to user experimentation and open to trying to implement the different requests that we get.
B
...those quickly, I hope. That's awesome, cool. Thank you, yeah, it does. Cool, great. We are at the end of the allotted time. Do you have time for two more questions, Karl, or should we answer those online?
B
Something Charlie mentioned: in the summit talk, you said you were about to experiment with...
B
A
So we haven't had a use case yet. A lot of our work is all driven by use cases, and we're moving away from VMs and creating VMs, and moving more towards Kubernetes. Our IT vision is app portability, and from our standpoint, Kubernetes is the technology that is preferred and offers that. So no, we haven't tried it, and we're reducing our reliance on VMs and on creating new VMs as well.
A
I can't see it, but I can read the question, which is the gateway question: what kinds of policies do you deploy at the edge? So we are...
A
We're not doing much there, because not everything coming into the service mesh requires Apigee; it's actually very little at this time. We're still seeing what our strategy is and what the capabilities are for app teams, so we're not doing a whole lot at that layer. Everything comes through our security provider, which is Akamai, and Akamai has a lot of different policies that they implement. We also implement network security rules at the Azure level to ensure that all of our traffic comes through Akamai. So we already have that layer with them.
A
Cool. Well, thanks everyone for the questions. I appreciate it a lot, and thanks for checking out what we're doing at American Airlines.