From YouTube: Rocking the Migration from Apigee Edge Microgateways to Kong Gateways with Zero Downtime!
Description
For our first User Call of 2023, Dominik Schmid and Benjamin Bertow from Mercedes-Benz Tech Innovation GmbH join us to show how they evolved API management into an open platform, integrating Kong Gateway as the preferred gateway solution.
Have you ever thought about how big companies manage their APIs?
Mercedes-Benz teams currently manage around 1500 APIs.
We will talk about the past, present, and vision of the Mercedes-Benz API platform.
And most importantly, you will learn how only three Kong Gateway instances replaced hundreds of Apigee Edge Microgateways with zero downtime.
A: Hello everyone, good morning, good afternoon, good evening from wherever you're joining us. I'm so happy to have all of you here today. Happy New Year! My name is Dalia and I work as a community manager here at Kong. I'm so happy to welcome you to our first user call of 2023. Exciting!
A: Today we have two of our Kong Champions here, Dominik and Benji. They will be presenting on how they migrated from Apigee to Kong with zero downtime, so a super interesting presentation.
A: They will take all of the questions at the end, so whatever you want to ask, please enter it in the Q&A function at the bottom and they will answer at the end. With that, I would like to pass it to Dom and Benji. Go ahead.
B: Thank you, hello, and welcome to our presentation. It's such a pleasure for us to be here and to hold the presentation again that we gave at Kong Summit last year. My colleague Benji and I are here today to tell you how we at Mercedes-Benz Tech Innovation rocked our migration of around 420 Apigee Edge Microgateways to only three Kong instances with zero downtime.
B: Hi, my name is Dominik and I'm a senior technical lead at Mercedes-Benz Tech Innovation. That means I'm leading our DevOps team and, if there's some time left, I'm also developing our platform. Mercedes-Benz Tech Innovation is a subsidiary of the Mercedes-Benz Group. As you know, Mercedes-Benz manufactures luxury cars; however, without Mercedes-Benz Tech Innovation as an integral part providing IT solutions, putting those luxury cars on the streets would hardly be possible.
C: Yeah, so if you drive a Mercedes like this beautiful EQS here, then maybe you're already familiar with this interface, which is our car configurator.
C: Now, this configurator UI actually uses a lot of APIs. There's obviously the vehicle configurator API, which allows you to select colors, rims, and hundreds of options for how you want to configure your car. The result of this is also sent to the vehicle image API, which renders a picture of what your car will look like, not just from the outside but also from the inside and from various angles, so you can already kind of experience your next car there. In addition to that, there are more APIs used on this page, like the dealer locator, your account data API, and so on. All of those APIs are managed by our API management platform, called One API.
C: This is basically the central platform for API management used within the whole company. Currently it's used to document about 1,500 APIs, and there are two main entry points. There's the provider portal, which can be used by anybody wanting to provide an API to others: you document everything about your API, you can manage everything, and you can get analytics data. And then there's the consumer side: if you want to consume APIs, you can use that part to search for APIs, subscribe, and get the documentation.
C: There are multiple gateways connecting to it, and those gateways are distributed all over the company. Basically anybody can host such a gateway and connect it to the central platform, and the gateway will then retrieve data about your API: access controls, traffic quotas, and what metrics it should send to the backend. All of this is enforced by the gateways. How that works is what Dominik will explain now.
B: These are the main benefits. There's a standardized gateway integration API allowing the integration of different gateway types. Mercedes-Benz decided to focus on Kong as their recommended choice, because it's suitable for both small and large instances, providing simple configuration, operation, and monitoring.
B: Our One API plugin integrates a lot of functionality within a single plugin. It connects the Kong gateway to the API management platform and automatically enables authentication and authorization based on our platform's subscription model. Additionally, we implemented some other highly requested features, like upstream basic auth.
B: The plugin has a multi-level cache to improve performance and is therefore also very resilient against connectivity issues. While we want to follow the federated gateway approach, some business partners just don't want to host their own gateways. In the past we implemented a gateway-as-a-service based on Apigee Edge Microgateways.
C
Yeah,
so
this
is
what
the
old
service
looked
like.
We
basically
had
a
kubernetes
cluster
and
it
was
running
hundreds
of
micro
gateways
which
were
configured
via
config
files
that
we
could
check
into
digit
repository
and
then
basically,
a
Jenkins
job
would
run
that
would
then
deploy
and
undeployed
or
Skateway.
It's
basically
spinning
them
up
and
down,
depending
on
the
need
now
for
security
reasons,
not
everyone
in
the
company
had
access
to
that
good
repository.
C: It was a little messy and a little complicated, and with Kong coming up we said: okay, we want to switch to Kong, we want to switch to a new plugin, and we want to simplify the setup. The main idea was to replace all those micro gateways with just one large Kong instance; but keeping the rest of the stack and the process around it was just not desirable, so that's not something we wanted to do. We wanted to go self-service.
C
Also,
a
data
center
move
was
basically
impeding
so
yeah.
We
had
to
plan
for
that
as
well,
and
so
we
said.
Okay,
we
need
a
big
migration
plan.
We
need
to
set
it
up
new
now,
not
everything
about
that
was
easy
and
we'll
get
there
in
a
minute,
but
first
Dominic
will
show
you
what
our
stack
then
basically
looked
like.
B: Let's start with: why are we creating this service at all? This question was, and still is, at the center of creating and improving our product. The purpose of our Gorilla service is to make Kong and our One API plugin easily accessible and consumable for both tech-savvy and non-technical people at Mercedes-Benz.
B
Also,
some
people
just
want
managed
services
like
me,
so
in
a
nutshell,
we
want
to
enable
our
business
partners
to
focus
what's
most
important
to
them.
Next,
we
need
to
keep
in
mind.
How
can
we
achieve
this
there's
a
simple
answer
to
this
question:
simplification,
that
is
by
reducing
advanced
settings
to
the
bare
minimum
and
providing
live
input,
validation
and
Sanity
checks?
B: Our One API plugin uses Redis as a cache for the API management data. All of that, and some more, is running inside a Kubernetes cluster. Currently we are serving a very happy customer base. However, we plan to expand our platform to be even more convenient by integrating more features and making it possible to bring your own domain.
C: What follows are just simplified diagrams, of course, but if you have any further questions, you can just ask me in the Q&A later. So this is basically step one. This shows how the API traffic was routed; think hundreds of those micro gateways here instead of just the two, but obviously we want to keep it simple. Each API that an API provider wanted to expose basically had its own micro gateway running, and each micro gateway had its own Ingress configuration in Kubernetes.
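A per-API Ingress of the kind described might look like this (a minimal sketch; names, paths, and ports are illustrative, not the actual Mercedes-Benz manifests):

```yaml
# One Ingress per micro gateway, routing by path prefix to that
# gateway's Service inside the cluster.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: first-api-microgateway
spec:
  rules:
    - http:
        paths:
          - path: /first-api                   # the API's path prefix
            pathType: Prefix
            backend:
              service:
                name: first-api-microgateway   # this API's micro gateway
                port:
                  number: 8000
```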
C
So
basically,
this
was
done
via
the
path
and
basically
with
a
path.
Prefix
kubernetes
decided
where
to
Route
the
traffic
to
it,
which
Gateway,
basically
rubbing
and
so
in
this
first
step
in
the
compiler
phase,
as
we
called
it,
we
just
installed
Kong
in
parallel
and
com
was
installed
with
an
Ingress.
That,
basically,
is
like
a
catch-all,
so
anything
that
did
not
match
any
of
the
other
Ingress
objects
was
routed
to
come.
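The catch-all can be sketched as an Ingress matching the bare `/` prefix (names illustrative; exact precedence depends on the Ingress controller, but with prefix matching the longest path wins, so the existing per-gateway Ingresses keep their traffic):

```yaml
# Catch-all Ingress: everything not claimed by a more specific path
# falls through to the Kong proxy Service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kong-catch-all
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: kong-proxy
                port:
                  number: 8000
```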
C
Then
what
we
did
is
that
we
set
up
all
routes
on
the
com
site
based
on
the
configurations
that
we
had
in
the
micro
gateways.
C
So
we
basically
had
a
path
filter
that
matched
what
we
had
in
Ingress
before,
and
we
also
added
an
additional
route
with
a
prefix,
just
like
slash
Kong
underscore
and
then
the
basic
have
this
allowed
testing
in
parallel
already
so
with
the
micro
Gateway
is
still
running
and
serving
the
traffic.
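In Kong's declarative config, this dual-route idea can be sketched like so (service name, URL, and exact paths are illustrative, not the real setup):

```yaml
_format_version: "2.1"
services:
  - name: first-api
    url: https://first-api.backend.internal
    routes:
      - name: first-api-live        # matches what the Ingress matched before
        paths:
          - /first-api
        strip_path: true
      - name: first-api-test        # /kong_-prefixed twin for parallel testing
        paths:
          - /kong_/first-api
        strip_path: true            # matched prefix is dropped, so both routes
                                    # reach the same upstream paths
```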
A consumer could also prefix the path with /kong_, and the same traffic would be routed by Kong to the same target, so they could start testing and see: okay, does it work? Does it fulfill my needs? The next step was then to start moving actual live traffic over to Kong, and we did that by removing the Ingresses individually. We kept the micro gateways running, but they didn't receive any traffic anymore; instead, a call to /first-api would now be routed to Kong automatically. This gave us an easy fallback.
C
So
if
anything
doesn't
work,
we
could
just
install
the
Ingress
again
and
the
traffic
would
be
routed
back
to
the
microwave.
So
we
did
that
for
the
other
micro
gateways
too
slowly
increasing
the
traffic
that
was
routed
by
a
pump
and
in
the
end,
basically
everything
was
routed
by
a
compiler
and
yeah.
Everybody
was
happy,
but
of
course
we
ran
into
issues
and
it's
good
basically
to
go
by
our
faces.
So
with
this
first
Pace
we
could
already
run
into
some
issues.
C: Most of the targets that we routed to are using HTTPS, and some of them didn't have proper certificates: some didn't have any validly signed certificates at all, some had a valid certificate but for a different hostname than the one we wanted to contact, and some had simply expired. This was an issue, because most of the micro gateways were actually configured to ignore those issues and just route traffic anyway, and with the Kong default config that was not the case.
C
So,
instead
of
weakening
our
security,
we
actually
made
a
hard
decision
to
tell
our
customers
to
well
fix
your
certificates
and
once
you've
got
them
set
up
correctly.
Traffic
will
work.
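For reference, Kong can enforce upstream certificate checks per service; since Kong 2.2 this is the `tls_verify` service attribute (a sketch with illustrative names):

```yaml
_format_version: "2.1"
services:
  - name: legacy-backend
    url: https://legacy-backend.internal
    tls_verify: true   # reject expired, self-signed, or hostname-mismatched certs
```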
In the meantime, we could route them back via the micro gateways, but basically there was a bit of pain there. Second of all, Kong may behave slightly differently from the micro gateways. I mean, ideally and in theory, we're just proxying traffic.
C: But Kong might handle things differently from the micro gateways, so we ran into issues where a backend was actually sending the same header twice with different values: Kong would basically strip one of them, and the micro gateway did the same, but it stripped the other one. That was kind of an issue for some of the consumers that were relying on that information. Also, ConfigMaps have a size limit.
C: In the first setup we set up Kong without a database: we just had a standalone setup with a static configuration that we basically updated by setting up a new Kong instance every time we had to make changes. Now, in Kubernetes a ConfigMap can get up to a size of one megabyte, which should be fine for quite a few routes and services; but in Kubernetes, if you apply an update to a ConfigMap, it also stores the previously applied configuration in an annotation.
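Concretely, `kubectl apply` writes a JSON copy of the previously applied manifest into the `kubectl.kubernetes.io/last-applied-configuration` annotation, so a large declarative config roughly counts twice against the ~1 MiB object size limit. An abridged sketch (illustrative names):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kong-declarative-config
  annotations:
    # Holds a JSON copy of the whole previous manifest:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"ConfigMap","data":{"kong.yml":"...entire previous config..."}}
data:
  kong.yml: |
    _format_version: "2.1"
    services: []   # hundreds of services and routes in practice
```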
C: So the ConfigMap kind of grew and suddenly we couldn't update Kong anymore. That was kind of bad, and it basically also told us that, yeah, we would definitely run with a DB when moving to the Gorilla service. And lastly, well, some customers just don't test. You can tell them as many times as you want that now is the time to test whether their routes are still working; some just don't, and they will complain later, basically weeks after the migration is completed.
C: So this is kind of an issue that we ran into: not getting responses from some of our consumers. Now, step two: we set up the Gorilla service, and this was actually already in the new data center. So we had the Kong pilot still running in the old data center, using the existing domain names and everything, and we set up the new Kong instance with a DB, in this case Postgres, pointed some new domain names at it, and had that running in parallel.
C: In order for this to work for testing, we imported our configuration using decK: we basically took the static configuration we had and ported it over to the new service.
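decK works against a YAML state file; a minimal sketch of the shape involved (entries illustrative; in general you export from a running gateway with `deck dump` and apply with `deck sync`):

```yaml
# kong.yaml - decK state file
_format_version: "1.1"
services:
  - name: first-api
    url: https://first-api.backend.internal
    routes:
      - name: first-api-route
        paths:
          - /first-api
```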
So now basically everybody could start testing their routes via the new Gorilla service, just by changing the domain names. In theory we had the same Kong, but it's a different data center, and some of our targets did not have firewalls open for our Kong instances to access them. We were aware of that in advance, of course, so we knew that some might run into issues because they had only permitted traffic from that one data center earlier, and we did some batch checks and some scans to basically see who might run into issues.
C: But again, that takes a little while, and there are also customers who are just not testing. Another issue we ran into: with the new domains we set up a new certificate process, so our certificates were auto-generated using Let's Encrypt, and we also had stricter cipher settings, where we said: okay, we want to be as secure as possible. But some of the clients that access the APIs come from old Windows Server boxes, which maybe don't have the most current ciphers installed.
C: So they ran into issues there. Luckily, a lot of them actually did test, and we could catch those issues at this stage already and work with them to get their systems updated. Now, the big day: step three, the big bang. The idea behind that is very simple.
We just pointed the old domain names to the new data center IPs, so we just did a DNS switchover, and all the traffic was running via the new service.
C: Now, of course, we also ran into issues here. There are clients who actually ignore DNS TTLs: we reduced the TTL for the old domain names before moving them over, but some clients just ignore it altogether. Basically, even days after the migration took place, we still had traffic running via the old Kong pilot, because some of the API consumers just didn't care about the changing IP addresses behind the DNS. It was probably a client just resolving it once in the beginning and then pointing everything to that IP. So this took quite a while: basically finding out who was behind that caller, and who we had to contact to tell them they should restart their client.
C: The second thing was that the DNS switch was done via a ticket that we had to send to a different team, so we didn't have an easy fallback. Luckily everything went fine, and we migrated the development gateways first, then integration, and then production.
C: But this was kind of a tough situation to be in, because if we had had an issue, we wouldn't have had an easy fallback there. Again, of course, some clients just don't test. But we did the migration without downtime: I mean, with the DNS switch, traffic could flow via either side, and it worked fine.
C: We even had some customers who didn't read their emails, and they didn't even notice the move. So, weeks after, we got config files that they sent us for the micro gateways, where they wanted to make changes, and we were like: no, no, we migrated some weeks ago; you know, we announced it many, many times and you should have tested. But yeah, it worked so well and so smoothly that they didn't even notice it at all.
C: So that's basically the migration that we did. In summary, we have that wonderful central API management stack, which is designed for self-service and simplicity; using our open gateway infrastructure, we've got all the options, with basically everybody able to host their own gateways; and we've got a great API provider community.
B: So, to recap, our large migration was possible due to careful planning and thorough execution. The main focus of the Gorilla service is to enable our business partners to focus on their core business. It has been designed for self-service and simplicity, and it serves as an example for other teams that might adopt similar solutions in other regions. And there's…