From YouTube: Hybrid API Management With Kong | Checkr
Description
Learn more about Kong: https://bit.ly/2I2DypS
Declarative configuration management allows organizations to scale efficiently, but it is not a silver bullet. At Checkr, we faced additional scenarios, where declarative configuration is not suitable by design. In this talk, Checkr Staff Software Engineer Ivan Rylach will look into these cases and walk through the architecture that addresses them.
I am passionate about designing and delivering complex, enterprise-level distributed systems, with a keen focus on their performance, scalability, security, and quality.

Let's go over our agenda for this session. First, we'll talk about Checkr. Then we will review an imperative approach to API management and its challenges.
Originally, when Kong was introduced into the system, the organization was quite small, and imperative management was enough for one team to handle the API gateway. If you wanted to make a change to it, like a change to a route or to a rate limit, you could just access it and do it. At some point, we had to introduce a user acceptance testing environment, so customers and partners could evaluate the Checkr API ahead of the main launch.
Both of them enable us to store Kong configuration in files, which can be tracked with Git. Kong Ingress Controller additionally allows us to express all configuration using Custom Resource Definitions. Since we had already been using Kubernetes and Helm to handle our workloads, we decided to pick the Kong Kubernetes Ingress Controller to help us with this challenge.
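As a rough sketch of what this looks like in practice, here is a `KongPlugin` Custom Resource attached to an Ingress via the `konghq.com/plugins` annotation. The CRD kind and annotation come from the Kong Ingress Controller documentation; the hostnames, service names, and limits are hypothetical, not from the talk:

```yaml
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: rate-limit-default
plugin: rate-limiting
config:
  minute: 60
  policy: local
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: checkr-api
  annotations:
    # Attach the Kong plugin declared above to this route.
    konghq.com/plugins: rate-limit-default
spec:
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api
                port:
                  number: 80
```

Because both resources are plain Kubernetes objects, they can live in Git and be reviewed and rolled back like any other code.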
Helm on its own is already extremely powerful, and Helm natively supports inheritance of declarations. We use Helm templates to create Kubernetes Ingress rules via CRDs. We then apply Kong plugins to Ingresses once, in the common values file. During deployment, we only need to override target hostnames for the different environments, groups, and regions.
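A minimal sketch of that override flow, with hypothetical file names, hostnames, and plugin names (the talk does not show its actual charts):

```yaml
# values.yaml — shared defaults; Kong plugins are declared once here
ingress:
  host: api.example.com
  plugins:
    - rate-limit-default
    - key-auth
---
# values-uat.yaml — per-environment override; only the hostname changes
ingress:
  host: api.uat.example.com
```

A deployment would then layer the environment file over the defaults, for example with `helm upgrade --install api ./chart -f values.yaml -f values-uat.yaml`, leaving every other declaration inherited unchanged.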
As was mentioned before, we leverage Kong to authenticate an incoming request by validating an API key or an Authorization bearer token. Every single API key must be associated with a Kong consumer, so we can enable rate limiting per application identity. Since new developers and partners can sign up for the platform in a self-serve manner, we cannot manage API keys and consumers using the declarative approach, by design, as we discussed previously.
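To make the imperative side concrete, here is a small sketch of the Kong Admin API calls that registering one self-served application implies: create a consumer, attach a key-auth credential, and scope a rate-limiting plugin to that consumer. The endpoint shapes follow the public Kong Admin API; the helper itself, its names, and the config values are illustrative, not Checkr's code:

```python
def provisioning_calls(app_id: str, api_key: str, per_minute: int):
    """Build the sequence of Kong Admin API calls needed to register
    one self-served application imperatively. Returns (method, path,
    payload) tuples that a caller would POST to the Admin API in order."""
    consumer = f"app-{app_id}"
    return [
        # 1. Create the consumer that represents the application identity.
        ("POST", "/consumers", {"username": consumer}),
        # 2. Attach the API key as a key-auth credential of that consumer.
        ("POST", f"/consumers/{consumer}/key-auth", {"key": api_key}),
        # 3. Scope a rate-limiting plugin to this consumer specifically.
        ("POST", f"/consumers/{consumer}/plugins",
         {"name": "rate-limiting", "config": {"minute": per_minute}}),
    ]

for method, path, payload in provisioning_calls("1234", "s3cr3t", 60):
    print(method, path, payload)
```

Because sign-ups happen at runtime, these calls cannot be captured in a static declarative file ahead of time, which is exactly the limitation the talk describes.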
During the migration to declarative configuration, we also had to upgrade our platform from an antiquated version of Kong to the latest one, without any downtime. Specifically, this upgrade required us to go through multiple major versions, and each switch to a new major version would have required us to restart our Kong data plane, which increases the risk of something going wrong.
To guarantee eventual consistency between multiple Kong deployments, we decided to implement a total order broadcast pattern, which requires a persistent queuing mechanism like Apache Kafka. At Checkr, we have been using Kafka for a while now to orchestrate background check and screening workflows. In the solution that we have designed, the app identity service publishes all app changes to a Kafka topic.
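The ordering guarantee hinges on keying each change event by its app ID, so every event for a given app lands on the same partition. The sketch below illustrates keyed partitioning in the spirit of Kafka's default partitioner; the hashing details and function are illustrative, not Checkr's implementation:

```python
import hashlib

def partition_for(app_id: str, num_partitions: int) -> int:
    """Map a message key to a partition deterministically. Every change
    event carrying the same app ID hashes to the same partition, so all
    changes for one app stay totally ordered relative to each other."""
    digest = hashlib.md5(app_id.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

# All events for one app share a partition; different apps may spread out.
events = [("app-1", "created"), ("app-2", "created"), ("app-1", "key-rotated")]
for app_id, change in events:
    print(app_id, change, "-> partition", partition_for(app_id, 12))
```

Different apps may share or differ in partition, which is fine: ordering is only needed within a single app's change history.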
The consumers controller, which is responsible for management of Kong consumers and their associated plugins, has one deployment per corresponding Kong control plane (or Admin API), and comes with its own Kafka consumer group. Kafka guarantees ordering of messages per partition, and hence makes sure that all changes for a given app are processed in the same order as they occurred.
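A toy model (not Checkr's controller) shows why per-partition ordering yields eventual consistency: every controller replica folds the same ordered event stream into the same final consumer state.

```python
def apply_events(events):
    """Fold an ordered stream of app-change events into the consumer
    state one controller would hold. Because every replica consumes the
    same partition in the same order, all replicas converge on identical
    state -- the essence of the total order broadcast pattern."""
    consumers = {}
    for op, app_id, data in events:
        if op == "upsert":
            consumers[app_id] = data
        elif op == "delete":
            consumers.pop(app_id, None)
    return consumers

log = [
    ("upsert", "app-1", {"key": "k1"}),
    ("upsert", "app-1", {"key": "k2"}),  # key rotation supersedes k1
    ("delete", "app-1", None),
]
print(apply_events(log))  # every replica ends with app-1 removed
```

If two replicas consumed these events in different orders, one could end with the app present and the other with it deleted; the per-partition ordering rules that divergence out.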
As you remember, we had another requirement: cross-regional support. The option to propagate Kong consumer changes through Kafka helps us to have multi-regional deployments of the system as well. We leverage Kafka MirrorMaker to replicate the Kafka topic into another region, where we deploy a local consumers controller. With this setup, all deployments across all regions become topologically identical.