From YouTube: Comcast’s Self-Service API Gateway Development Journey
Description
Comcast has taken a journey to develop an API gateway initially using open source software and in-house enhancements, and later transitioning to open core commercial software to improve the overall service delivery experience for developers. In this session, Comcast will discuss enhancements that the team made to adapt the community edition of Kong, and subsequently the enterprise edition, to support a self-service, multi-tenant, yet still managed, production API gateway solution, as well as open source contributions made to the Kong community along the way.
A: Next, this meant that the way we did our work changed as well. We shifted from internal data centers hosting bare-metal systems to public cloud offerings, so that we could take advantage of best-in-class services and features that weren't necessarily available internally.
We also had a penchant for monolithic artifacts, and there were various associated issues regarding ongoing maintenance and technical debt reduction that we had to weigh against feature development and deployment complexity in these monolithic deployments. So we thought it would be better to pursue a process of decomposition, peeling the onion, so to speak: gradually reducing the size and scope of the monolithic artifacts and replacing them with microservices.
There were some implications of this change. The shift in focus toward improved customer experience led to strong efforts to shift security left, building it in rather than bolting it on. This means that services being developed have to be deployed within security frameworks that are as close as possible to the ones they will ultimately run under in production.
With the proliferation of APIs associated with our microservice approach, a very large burden was being added to the client teams: tracking the appropriate hosts for calling these APIs across multiple environments, in addition to knowing how to use the APIs. Another byproduct of our shift toward improved customer experience was the way it became a constituent part of our working culture: not only do we want to improve the end user's experience, we want to delight our development partners as well. Next slide.
So, in response to these implications, we looked to leverage shared infrastructure where reasonable and practical. We also looked to add an API gateway for API management and to provide abstractions that simplify connectivity requirements. And finally, we wanted to empower developers through self-service administration.
B: All right, thanks a lot, John. My name is Tyler Rivera, and I'm going to talk a bit about how we actually implemented Kong, how we migrated over to the Enterprise Edition of Kong, some of the challenges we encountered there, and specifically a lot of the interaction that we had with the open source community around Kong.
So, to start and give folks a little bit of context, I wanted to talk a little bit about our initial toolkit. These are some open source tools that we find in our toolbox and use frequently. They are a fairly ops-focused set of tools; our team is a fairly ops-focused team, so the initial toolkit that we used reflects that. Where I think most of this starts is with Ansible: we use Ansible to define roles and playbooks.
We use Packer to leverage those roles that we've defined in Ansible and bake them into an image that we then propagate out to our various environments. We test all of those images with goss during the image building process, and then we manage all of our infrastructure in public cloud accounts using Terraform.
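As a rough illustration of that bake-and-test flow, here is a minimal Packer sketch with an Ansible provisioner and a goss validation step; the AMI ID, playbook path, and goss file are hypothetical placeholders rather than the actual configuration described in the talk.

```hcl
# packer/kong-ami.pkr.hcl - hypothetical sketch of the image bake
packer {
  required_plugins {
    amazon = {
      source  = "github.com/hashicorp/amazon"
      version = ">= 1.0.0"
    }
    ansible = {
      source  = "github.com/hashicorp/ansible"
      version = ">= 1.0.0"
    }
  }
}

source "amazon-ebs" "kong" {
  region        = "us-east-1"
  source_ami    = "ami-0123456789abcdef0" # placeholder base image
  instance_type = "t3.medium"
  ssh_username  = "ec2-user"
  ami_name      = "kong-gateway-example"  # in practice, suffix with a timestamp or build number
}

build {
  sources = ["source.amazon-ebs.kong"]

  # Apply the Ansible role(s) that install and configure the gateway.
  provisioner "ansible" {
    playbook_file = "playbooks/kong-gateway.yml"
  }

  # Copy the goss spec onto the instance and validate the image before publishing it.
  provisioner "file" {
    source      = "goss/goss.yaml"
    destination = "/tmp/goss.yaml"
  }

  provisioner "shell" {
    inline = [
      "curl -fsSL https://goss.rocks/install | sudo sh",
      "goss --gossfile /tmp/goss.yaml validate",
    ]
  }
}
```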
We set up our Kong instances to be backed by RDS instances in each region, so there's no replication happening across regions; they are just two separate databases with identical configuration in each. The way that we looked to configure these instances to begin with was to use the Kong Terraform provider: we defined all of our services, routes, and plugin information within Terraform and then propagated that out to both our east and west regions concurrently.
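A minimal sketch of that Terraform-managed pattern, assuming the community kevholditch/kong provider and its resource names; the Admin API address, service, and route values are illustrative placeholders, not the real configuration.

```hcl
terraform {
  required_providers {
    kong = {
      source = "kevholditch/kong" # community Kong provider (assumed source address)
    }
  }
}

# Point the provider at the Kong Admin API for one region.
provider "kong" {
  kong_admin_uri = "https://kong-admin.us-east-1.example.internal:8001"
}

# A service, its route, and a rate-limiting plugin, all declared in Terraform.
resource "kong_service" "orders" {
  name     = "orders"
  protocol = "http"
  host     = "orders.internal.example.com"
  port     = 8080
}

resource "kong_route" "orders" {
  name       = "orders"
  protocols  = ["https"]
  paths      = ["/orders"]
  service_id = kong_service.orders.id
}

resource "kong_plugin" "orders_rate_limit" {
  name        = "rate-limiting"
  service_id  = kong_service.orders.id
  config_json = jsonencode({ minute = 60, policy = "local" })
}
```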
So this worked fairly well, but it was fairly slow going. The way that we were managing services coming on board and using the gateway was that a ticket would come in, and somebody on our team, the API gateway team, would craft that request into a Terraform configuration and then propagate it out through our CI/CD pipelines. This could be a fairly slow process, and we realized it wasn't a great developer experience for users.
Once we moved over to Enterprise Edition, we introduced Kong Manager. For those who are not familiar with Kong Manager, it is essentially a Kong instance that provides a GUI, a web interface, for users to configure services, plugins, routes, upstreams, etc. in workspaces within Kong. The goal in implementing Kong Manager was essentially to provide teams with the ability to self-service their APIs.
This model meant that not only are we now managing the configuration of services, routes, plugins, etc., we also have to manage Kong Manager, or admin, entities, the enterprise entities within Kong. So we now have to manage the admin accounts that are being used in Kong Manager, and the workspaces, roles, and permissions associated with those. This brought up a whole new set of questions that our team had to answer, questions like: how do we manage Kong workspace creation?
It made sense in our case to let teams sort of self-organize the way that their teams are structured. All of this doesn't necessarily tie back to a really great enterprise model like an AD group or something like that; it works a lot better if teams are able to dynamically create the team structure they want. The best tool for them to do that, and something they're very familiar with, is GitHub, so we basically mapped workspace creation to team creation in GitHub.
The other question was: how do we manage Kong admins? We wanted to make sure that teams had full control over who has access to their workspaces in the control plane, so we had to find a way to manage that, and to do it in a fairly automated way.
We also wanted to use that same sort of model for access control: really pushing to the teams the ability to craft the structure of their teams, and how access to their workspaces works, all themselves, via some automated process. So we started evaluating solutions for how we might do this.
So the first thing that we looked at was to extend this provider to support EE endpoints like workspaces, admins, users, roles, and permissions. For those who aren't familiar with how a Terraform provider works, the diagram on the right shows the structure: you have a Terraform provider, written in Go, that defines a set of resources that you're going to manipulate, and those resources in turn leverage some sort of API binding.
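For a code-level feel for that structure, here is a small, hypothetical sketch of what one such resource can look like with HashiCorp's terraform-plugin-sdk v2, delegating to go-kong as the API binding; it is illustrative only and not the actual provider source.

```go
package provider

import (
	"context"

	"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
	"github.com/kong/go-kong/kong"
)

// resourceKongWorkspace declares the schema for a hypothetical "kong_workspace"
// resource and wires up its CRUD callbacks.
func resourceKongWorkspace() *schema.Resource {
	return &schema.Resource{
		CreateContext: resourceKongWorkspaceCreate,
		ReadContext:   resourceKongWorkspaceRead,
		DeleteContext: resourceKongWorkspaceDelete,
		Schema: map[string]*schema.Schema{
			"name": {Type: schema.TypeString, Required: true, ForceNew: true},
		},
	}
}

// Each callback delegates to the Go API binding (go-kong here), which talks
// to the Kong Admin API.
func resourceKongWorkspaceCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
	client := meta.(*kong.Client)
	ws, err := client.Workspaces.Create(ctx, &kong.Workspace{Name: kong.String(d.Get("name").(string))})
	if err != nil {
		return diag.FromErr(err)
	}
	d.SetId(*ws.ID)
	return nil
}

func resourceKongWorkspaceRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
	client := meta.(*kong.Client)
	ws, err := client.Workspaces.Get(ctx, kong.String(d.Id()))
	if err != nil {
		return diag.FromErr(err)
	}
	d.Set("name", *ws.Name)
	return nil
}

func resourceKongWorkspaceDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
	client := meta.(*kong.Client)
	if err := client.Workspaces.Delete(ctx, kong.String(d.Id())); err != nil {
		return diag.FromErr(err)
	}
	return nil
}
```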
In doing that, we discovered a set of challenges. We found that the provider itself, and the Go client it used, gokong, were not actively maintained.
We also realized that if we wanted to allow our dev teams to leverage this sort of pattern to manage their own services, they might not necessarily have a great deal of Terraform experience; we have a huge range of teams within Comcast, and a huge range of skill sets within them.
So we weren't sure if ops-focused tooling like Terraform would be a great fit, and ultimately we realized that this setup doesn't necessarily fit our roadmap for where we want to go with delivering an API gateway to our teams.
So we took this problem out into the open and started engaging with the community. We had an open discussion about the existing Terraform provider to see what level of community support might be there.
We engaged with a Kong engineer to get his feedback on what he felt might be possible as far as support, and we were actually able to add that engineer as a maintainer on the provider project. We also made the decision to migrate the existing provider over to a different Go API binding called go-kong, which was supported by that engineer; since we initially engaged the community, that project, go-kong, has actually moved into the Kong org proper on GitHub.
So we decided to migrate all of this over to go-kong. That project is also leveraged by decK, which some folks might be familiar with, and it's much more actively maintained.
We realized that shifting some of this extension work that we wanted to do to support Enterprise over to go-kong would also open up the possibility of providing that support in decK, so we felt it was a good opportunity not only to provide a solution for our team, but also to provide a great foundation for additional features in tooling like decK.
We then have a set of automation that we've built to manage enterprise entities using our go-kong extensions; that is essentially managing the admins, roles, and permissions that are organized via GitHub, in our enterprise GitHub instance.
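A minimal sketch of what that kind of GitHub-driven sync can look like, assuming go-github for the GitHub side and go-kong's workspace support; the org name, Admin API address, and token handling are hypothetical, and this is not the actual Comcast automation.

```go
package main

import (
	"context"
	"log"
	"os"

	"github.com/google/go-github/v50/github"
	"github.com/kong/go-kong/kong"
	"golang.org/x/oauth2"
)

func main() {
	ctx := context.Background()

	// GitHub client (a real setup would point at the enterprise GitHub instance).
	ts := oauth2.StaticTokenSource(&oauth2.Token{AccessToken: os.Getenv("GITHUB_TOKEN")})
	gh := github.NewClient(oauth2.NewClient(ctx, ts))

	// Kong Admin API client via go-kong.
	kongClient, err := kong.NewClient(kong.String("https://kong-admin.example.internal:8001"), nil)
	if err != nil {
		log.Fatal(err)
	}

	// One workspace per GitHub team: team creation drives workspace creation.
	// (First page only; a real tool would paginate.)
	teams, _, err := gh.Teams.ListTeams(ctx, "example-org", &github.ListOptions{PerPage: 100})
	if err != nil {
		log.Fatal(err)
	}
	for _, team := range teams {
		slug := team.GetSlug()
		if _, err := kongClient.Workspaces.Get(ctx, kong.String(slug)); err == nil {
			continue // workspace already exists
		}
		if _, err := kongClient.Workspaces.Create(ctx, &kong.Workspace{Name: kong.String(slug)}); err != nil {
			log.Printf("creating workspace %s: %v", slug, err)
		}
	}
}
```

Admin and role assignment can follow the same pattern, keyed off team membership.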
We built a tool to provision global plugins to workspaces. One interesting thing that we found in moving to Enterprise Edition, and moving to workspaces, is that there is no longer the idea of a global plugin: the top level for a plugin in Enterprise Edition is actually the workspace, so we had to find a way to manage those plugins for teams.
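A rough sketch of how such a tool can apply a "global" plugin to every workspace, assuming go-kong and the fact that the EE Admin API scopes requests by a workspace prefix in the URL; the plugin name and Admin API address are placeholders, and this is not the actual tool.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/kong/go-kong/kong"
)

func main() {
	ctx := context.Background()
	adminURL := "https://kong-admin.example.internal:8001" // hypothetical Admin API address

	root, err := kong.NewClient(kong.String(adminURL), nil)
	if err != nil {
		log.Fatal(err)
	}

	// Enumerate every workspace on the cluster.
	workspaces, err := root.Workspaces.ListAll(ctx)
	if err != nil {
		log.Fatal(err)
	}

	for _, ws := range workspaces {
		// The EE Admin API scopes requests by a workspace prefix in the URL,
		// so build a client rooted at that workspace.
		wsClient, err := kong.NewClient(kong.String(fmt.Sprintf("%s/%s", adminURL, *ws.Name)), nil)
		if err != nil {
			log.Fatal(err)
		}

		// A plugin with no service, route, or consumer is "global" to the workspace.
		if _, err := wsClient.Plugins.Create(ctx, &kong.Plugin{Name: kong.String("prometheus")}); err != nil {
			log.Printf("workspace %s: %v", *ws.Name, err)
		}
	}
}
```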
We also built some tools to easily onboard users and teams according to the abstractions that we created at the Comcast level, and to manage role assignment. I just wanted to mention some other open source tools that we use here. decK, as I mentioned previously: we use that on our team to create some loopback and management services to make operation of the platform a lot easier, and we also use it to propagate configuration from Kong Manager, the control plane, to our data plane instances.
In the future, we'd really like to introduce decK to a lot of our development teams so that they can start using declarative configuration to manage their services and plugins; right now, they're mostly making changes through the Kong Manager interface. I also want to shout out Pongo here.
As John mentioned in his part of the presentation, we do have some custom plugins that we write, and that open source tool has been a great resource for us to test the plugins that we're building to improve the experience of our developers.
So, as I mentioned before, we'd like to familiarize teams with decK. We think it's a fantastic tool and could provide some excellent patterns for teams to rapidly deploy their services, routes, and plugins. We'd like to develop patterns around decK that they can use within our CI/CD platforms to quickly spin up services and change them, and we'd also like to centralize a lot of the API specs that we have floating around various teams into a centralized place using the Kong developer portal.
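To make that concrete, here is a small, hypothetical decK state file of the kind a team could keep next to its service code and apply from a CI/CD job; the service, route, and plugin values are illustrative only.

```yaml
# kong.yaml - declarative configuration a team owns and versions with its service
_format_version: "1.1"

services:
  - name: orders
    url: http://orders.internal.example.com:8080
    routes:
      - name: orders-route
        paths:
          - /orders
    plugins:
      - name: rate-limiting
        config:
          minute: 60
          policy: local
```

Running `deck sync -s kong.yaml` against the team's workspace then reconciles the gateway to match the file, which fits naturally into a pipeline step.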
So now that EE support exists in go-kong, we'd like to extend enterprise entity management to decK. We realized that a lot of the API endpoints for these enterprise entities were built under the assumption that they'd probably be used by Kong Manager, so we'd like to work closely with the community to extend that functionality to decK; we feel it might be a bit easier to manage those entities there. We'd also like to continue to push innovation out into the open. The stuff that we're building at Comcast I'm super excited about, and I think there are a lot of opportunities for us to continue to give back to the open source community. And yeah, I'd like to continue to engage with the Kong engineers; I've worked closely with Harry Bagdi over the past couple of months on a lot of this work.
So thank you very much for listening to our presentation; we'll open it up at this point for questions. Thank you.