Description

Kong for Kubernetes is an open-source Kubernetes Ingress Controller based on the Kong Gateway project.

Ingress management is an important part of your configuration and operations. When services are exposed outside a cluster, you need to take care of authentication, observability to maintain SLOs, auditing, encryption, and integrations with third-party vendors, among other things.

In this webinar, Harry will take you on a deep dive into how to leverage the Kong Ingress Controller for:

- Encrypted credentials
- Native gRPC routing
- Plugins for combinations of Ingress and KongConsumer
- Admission controller
I'm moderating today's webinar; it's going to be presented by Harry Bagdi, a senior cloud engineer at Kong. Before we get started, though, I have a few housekeeping items to go over. During the webinar you're not able to talk as an attendee, sorry, but there is a Q&A box at the bottom of your screen. Please feel free to drop your questions in there and we'll get through as many as we can at the end. This is an official webinar of the CNCF, and as such it is subject to the CNCF code of conduct.
All of this is available to you right away, so let me get started with the agenda for today. Today's agenda is pretty simple: we have a new open-source gateway, Kong 2.0, that we released earlier this month, and then we will get into how Kong works with Kubernetes and helps you do ingress management. We also have a new release of our ingress controller, which we call Kong for Kubernetes. It is 0.7, so we'll take a deep dive into some features and we'll also go through a demo.
Kong 1.0 came out in 2018, and over the last year we have been working towards Kong 2.0. Kong 2.0 brings in a major feature ask from the community, which is Go plugins. Kong is built on top of NGINX, on top of OpenResty, and it's written completely in Lua. We use LuaJIT for performance reasons and Kong is blazingly fast, but the Lua community is pretty small, so we've opened that up to enable you to write plugins in Go. Go is, as you all know, the cloud-native language; essentially Kubernetes and Docker are written in it.
The second most important feature is hybrid data plane / control plane separation, which we call the hybrid mode. Kong traditionally required a database to run; with Kong 1.1 we removed that requirement altogether, and Kong can function without a database. In the control plane / data plane mode, the control plane nodes are associated with the database and they configure all the data plane nodes, so the data plane nodes do not need to have any connection other than to the control plane.
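As a rough sketch of what that separation looks like in practice (the control plane address, certificate paths, and node name below are illustrative assumptions, not from the talk), a data plane container in hybrid mode might be configured with environment variables like these:

```yaml
# Sketch: data plane container spec for Kong 2.0 hybrid mode.
containers:
  - name: kong-dp
    image: kong:2.0
    env:
      - name: KONG_ROLE
        value: data_plane             # this node only proxies traffic
      - name: KONG_DATABASE
        value: "off"                  # data planes never touch the database
      - name: KONG_CLUSTER_CONTROL_PLANE
        value: kong-cp.example:8005   # placeholder control plane address
      - name: KONG_CLUSTER_CERT
        value: /certs/cluster.crt     # mutual TLS between CP and DP
      - name: KONG_CLUSTER_CERT_KEY
        value: /certs/cluster.key
```

The control plane nodes run the same image with `KONG_ROLE=control_plane` and the database connection settings.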
The control plane could be running anywhere, configuring all your ingress clusters across the world, across multiple Kubernetes clusters as well, if you want that to be the case. We also released a new plugin called the ACME plugin, which is named after the ACME protocol used by the most popular certificate authority there is, Let's Encrypt. This plugin essentially allows you to automatically encrypt your API traffic using TLS certificates, so you have HTTPS by default.
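As a minimal sketch of enabling that plugin (the email address is a placeholder, and the exact set of required fields should be checked against the plugin documentation):

```yaml
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: acme
plugin: acme
config:
  account_email: ops@example.com   # placeholder ACME account email
  tos_accepted: true               # accept the CA's terms of service
```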
So that's another feature, and then we have a lot of features under the hood, things like buffered proxying, which allows for more advanced request and response transformations. That's all in the release details, so do check it out. If you are using Kong 1.x already, Kong 2.0 does not have a lot of breaking changes. There are only two that I'm aware of and they're pretty easy to work around, so the upgrade path is also very simple and easy.
With that, we want to talk about Kubernetes. In essence, this is a Kubernetes community we are talking about. Kong is essentially agnostic to the platform it runs on, but we cater very heavily to Kubernetes and its ecosystem, because it allows you to do so much automation, so Kong integrates tightly with Kubernetes. That said, it works across hybrid infrastructures, where you have multiple kinds of orchestration platforms deployed.
So let's get into what the Ingress spec is, what an ingress controller is, and how Kong fits into it. Ingress is a specification that was initially launched in about 2015 or 2016, and it has been stuck in the v1beta1 phase for about three or four years. Now we are moving into v1 with Kubernetes 1.18, which is scheduled to be released later this quarter, so there is a transition to a v1 spec, which will always be supported by Kubernetes.
So what is the Ingress spec? Ingress is a vendor-neutral way of defining access to your services that run inside Kubernetes. You might have a few hundred services running, but you want a single point of entry through which you can control how the traffic is routed, how the traffic is authenticated, or how you want to log this traffic, monitor it, and collect metrics on it. Ingress is essentially HTTP-based only right now, so you can route traffic based on HTTP host headers (virtual hosts) and paths.
Kong can extend that, and some other vendors also extend that quite a bit, and that's what we'll be looking into. Ingress has wide adoption: the majority of cloud providers bring in their own controllers, and the community has seen a huge number of controllers which conform to the spec, which makes it super easy to switch between different vendors. So if you are running some other controller, you can swap it out and put another controller in your Kubernetes cluster with a relatively easy migration.
So, as you can see in green, what we have here is referred to as the routing policy. What we are saying is: whenever any HTTP request comes into our ingress point, if it has the host header (the virtual host) example.com, follow these two rules. The rules are: if the request path starts with /bills, then send it to the bills service, which is running inside the Kubernetes cluster; and then we have the /orders endpoint, whose requests will be sent to the orders service. So using the same spec, you can essentially tie up your microservices that are running inside Kubernetes and present them as one single API, or maybe you have different groups of microservices that you want to present differently; this spec allows you to define those policies. Now, as you can see here, there is nothing vendor-specific: you are not specifying how to do this.
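The routing policy described above corresponds to an Ingress along these lines (the service names and ports are assumptions to match the example):

```yaml
# Host example.com: /bills -> bills service, /orders -> orders service.
apiVersion: extensions/v1beta1   # the Ingress API group at the time of this talk
kind: Ingress
metadata:
  name: routing-policy
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /bills
            backend:
              serviceName: bills
              servicePort: 80
          - path: /orders
            backend:
              serviceName: orders
              servicePort: 80
```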
Kong for Kubernetes is an ingress controller. Right now we are talking about Kong, but imagine any other reverse proxy or cloud load balancer that you could put in Kong's place. And then you have a controller component. Controllers are the way in Kubernetes to manage configuration and reconcile state: you specify a desired state, and the controller drives the current state of the Kubernetes cluster to match the desired state.
As we can see here, we have the Kubernetes API server on the left and we have a proxy running, in this case Kong, and the controller sits between those two components. Kong, or any other vendor's proxy, does not understand what the Kubernetes API server is saying by default, but you can put a controller in place, and the controller essentially translates the API server's configuration into Kong configuration. So the controller is what configures the proxy; the controller does not itself handle the traffic.
It's just configuring a proxy, which then sends this traffic to the different services; in our case we have bills, orders, and inventory. So let's focus on the controller piece. This is the piece that is Kubernetes-specific. The proxy can be agnostic to Kubernetes: you can have load balancers in non-Kubernetes environments, and you can have proxies running anywhere in the cloud, but the controller is what configures the proxy and makes it specific to Kubernetes.
This also opens up a realm of possibilities where the controller interacts with different CRDs. Imagine using cert-manager to do certificate management, or configuring Prometheus metrics, and things like that. All the intelligence is essentially built into the controller.
The proxy software has the proxy capabilities, so each of those two components does its own thing. So let's focus on Kong for Kubernetes and what Kong's controller can do. We released a new version of the ingress controller, 0.7, which is compatible with Kong 2.0 as the proxy. With 0.7 we have released something called encrypted client credentials.
The credentials are now stored in the Kubernetes data store, which is etcd, and we use Kubernetes secrets to store these credentials. So you get credentials encrypted at rest that are loaded into Kong dynamically by the controller. Kong itself does not need a database or anything; it's simply a deployment with a pod having two containers. The controller gets all these configurations and loads them up into Kong, and Kong verifies client connections against them. These can include anything from key authentication and basic authentication to some form of OAuth or OIDC, any kind of authentication: you can store the credentials there, or you can even use your own identity provider.
Another feature is gRPC routing. Kong introduced native gRPC routing with plugin support. So essentially, if you are using gRPC and want to expose your services as gRPC, instead of, you know, JSON over HTTP or other protocols, you can again use Kong to expose this traffic, and Kong is aware of each and every gRPC request. So you can route your gRPC requests to different services based on different gRPC methods.
So if one server is handling, say, your ingestion of events and the other service is serving read requests, you can split those up at Kong, and Kong can also run plugins on that traffic. So you get Prometheus metrics, you get all the authentication schemes with Kong right out of the box, and you can keep your gRPC service fairly small, with just the business logic in there. Kong speaks gRPC with the client and with your service, so it's gRPC on both the upstream and downstream sides.
Another highly requested feature was mutual TLS. A lot of customers are in very sensitive environments, and compliance is a very important part of their infrastructure. In such a case, people want to encrypt even internal cluster traffic, and Kong allows you to do that using mutual TLS. You can bring in your own certificate authority or use a default one, and Kong loads that certificate up and can authenticate itself to your services. Then, on the service side,
you could have some authorization taking place so that only Kong can talk to the service, and you can prevent service-to-service communication if you need to. Or you can use something like Istio or Kuma, or any other sort of service mesh, to manage that for you; Kong fits pretty nicely with that ecosystem of service meshes as well. Kong focuses on the north-south ingress traffic, and you use your favorite mesh solution for the east-west traffic.
That's the general overview of 0.7. With that, let's get started with a demo. We're going to look at a few things, like gRPC, the rate-limiting plugin, and a little bit of the admission controller as well. So with that, let's get started. As you can see on my screen, I have a Kubernetes cluster. I'm using GKE for this demo, just because I've set that up in my dev environment, but you're free to use any Kubernetes cluster.
All you need is support for a service of type LoadBalancer, and you can get away without that as well; if need be, you can use NodePort or whatever other setup you like. So with that, I'll go ahead and deploy Kong for Kubernetes first. There's a handy bit.ly link here; I'm just going to apply that. It's a single-manifest installation. With this we have created a few resources, so let's take a look at them.
First we have created a namespace for Kong, in which we deploy all the Kong-specific resources, and then we have four custom resources. Custom resources allow you to define your own API on top of the Kubernetes APIs, and we use these custom resources to extend Kong, to extend the Ingress specification. So these are things that are specific to Kong but are not present in the Ingress specification. Now, the Ingress resource is also fairly narrow; there's a general consensus on that, and, as I said earlier, the Kubernetes community is working on a v2 spec in SIG Network. So if you're interested, please come on board in the SIG Network channels, and you can find a whole new set of APIs that are being designed there. Next, we create some RBAC resources. These essentially allow Kong to talk to the Kubernetes API server. As we saw, there is Kong, there is the controller, and there is the API server.
So the controller gets these permissions: it needs to list all the Ingress specifications, watch the pods, and things like that. Then we have a config map for some defaults, which is not strictly required, and then we have the services. We have two services, starting with kong-proxy, which is a service of type LoadBalancer.
As you can see here, we have the service of type LoadBalancer, and because we are running in GKE, GKE automatically assigns a cluster IP and an external IP address. We also have a kong-validation-webhook service; we are going to install the webhook next. If you take note of this IP address, you can actually hit it from your own box as well; this is a public IP address.
I'm just going to set an environment variable so that we can use it later on. So if I send a request to the proxy, we can see that Kong responds, and it responds with a 404, nothing found. This is because we do not have anything configured in our cluster: if we check, we do not have any Ingress resources, so Kong does not know where to send this request. Next, I'm going to also set up something called an admission controller.
Let me show you the script that I'm running. I'm just using the OpenSSL client to generate a self-signed certificate for the kong-validation-webhook service, then I'm creating a Kubernetes secret of type TLS. Once I have that, I am enabling the admission webhook. We have a validating admission webhook, and we are going to validate each KongConsumer and each KongPlugin that is created or updated.
We have updated the deployment to use the self-signed certificate and private key, and then we finally register the validation webhook. This makes it super hard for users to shoot themselves in the foot: if you're making any mistakes while configuring, it's super easy to not indent things correctly or to get something slightly wrong, and this will catch most of those things.
We are deploying a service of type ClusterIP, so it's an internal service, for gRPC, and then we have a single pod of the gRPC service running. This service speaks the gRPC protocol on port 9001. So let's go ahead and see how we can expose this gRPC service to the outside world. Here I am creating an Ingress resource with the name demo.
We have the / path, and we are not specifying a host header, so every request that comes in, we want it to go to the gRPC bin service. Now let's go ahead and create this Ingress resource. One thing to note is that Ingress is HTTP by default; it does not know that a gRPC service is behind it, so we make use of annotations here. By specifying a set of protocols, we are essentially telling Kong to treat any traffic that comes from the client as gRPC.
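Put together, the demo Ingress looks roughly like this (the annotation name follows the controller's konghq.com annotation scheme; older controller releases used a different annotation prefix, so treat the exact key as an assumption):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: demo
  annotations:
    konghq.com/protocols: grpc,grpcs   # tell Kong to treat matched traffic as gRPC
spec:
  rules:
    - http:
        paths:
          - path: /                    # no host header: match every request
            backend:
              serviceName: grpcbin
              servicePort: 9001
```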
As you can see, the method resolved correctly using the RPC method, and then we have the response headers received; as we can see, the content type says gRPC, and the response is basically echoing the same content back. So instead, let's say "hello CNCF community", and we can see "hello CNCF community" here. So we have gRPC requests going back and forth, which is nice; you can just expose gRPC traffic. But then, what does this buy us? Because we could do this just by using a service of type LoadBalancer.
Why use Ingress? What we've got here is how you can extend your ingress, how we can extend it and do more things once you have exposed traffic via Kong. Here we are going to create a custom resource called KongPlugin. Let's go ahead and create that, and that returns an error. As we can see here, the admission webhook failed the request, and it says that foo is an unknown field.
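For illustration, a KongPlugin carrying an invalid field looks something like this (the plugin name and token value are placeholders, not the exact manifest from the demo); the validating webhook rejects it at apply time instead of letting Kong fail later:

```yaml
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: request-logger
plugin: loggly
config:
  foo: bar                          # not a valid loggly field: webhook rejects this
  key: <customer-token-placeholder> # Loggly customer token goes here
```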
What happened is that, intentionally, I had put in a foo config field, which is not a valid configuration for this plugin. Plugins in Kong are essentially a way to extend Kong; you can create any number of custom plugins. There are a lot of plugins that already come bundled with Kong, Loggly being one of those; you can log to Elastic, you can use Fluentd, or whatever your logging infrastructure is. But let's say you were using Loggly. So we are going to delete this erroneous line and create the plugin.
We are going to add another annotation and instruct Kong to execute the Loggly plugin whenever any request matches any of the rules in this Ingress specification. We defined just a single rule, but if any of the rules in this Ingress match, we want Kong to run the Loggly plugin. Alright, we've configured that; now let me see if I can open the Loggly window.
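Attaching a plugin is just an annotation on the Ingress metadata; as a fragment (the plugin name request-logger is an assumption and must match the KongPlugin's metadata.name):

```yaml
metadata:
  annotations:
    # comma-separated list of KongPlugin names to execute for matching requests
    konghq.com/plugins: request-logger
```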
Alright, I have this Loggly window up here and I'm going to search over the last ten minutes. As you can see, there are no events at all; this is just a simple trial account from Loggly. I'm going to go ahead and send the request now, so we'll do a grpcurl with "hello CNCF community".
As you can see, we did not have any increased latency; Kong did not inject any latency, and the upstream took 12 milliseconds to execute the request. We got the response back. Now, if everything is good and the demo gods are kind to me, I should see a request here. Kong batches these, so sometimes it can take a while, but as we can see, we have got a request here, and we can see all the details, and what is logged is configurable: the headers that were sent, the request and response latencies, the response message, and we can see that the upstream URL was the hello service's /SayHello. So you can get all kinds of logging here, and you do not even have to implement logging in your microservices; you get that out of the box with Kong. Alright, so that's proxying gRPC traffic and how to use plugins on gRPC traffic.
Now let's take a look at what else we can do. Let's look at how we can take an API that we have developed, expose it, and have different tiering capabilities. I'm deploying httpbin, which is a pretty popular echo service for HTTP, and I'm going to create two Ingress resources. Here I'm creating an httpbin free tier.
Here, the path starts with /free and all that traffic is sent to the httpbin service, and then I have a paid tier, where all the requests starting with /paid also get sent to the same service. So we have the same service, but we have two endpoints. If I do a GET on /free/status/200, I get back a response.
Excuse me. The first thing I am going to do is use key authentication on the paid tier. The free tier is meant to be open to the world; we don't want any kind of authentication on that for now. For the paid tier we introduce key authentication, so I create a plugin using key-auth. The plugin is key-auth.
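A key-auth KongPlugin can be sketched like this (the resource name is an assumption, since the exact name used in the demo is garbled in the recording):

```yaml
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: httpbin-auth   # illustrative name
plugin: key-auth       # bundled Kong plugin: require an API key on matched routes
```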
So we have key authentication set up. Next, we go ahead and edit the paid-tier Ingress resource and ask Kong to execute the plugin, as we did before with the Loggly plugin; this time we are enabling an authentication plugin. And now, if I send a request to the paid endpoint, Kong returns a 401 Unauthorized, because we did not send an API key. So how do we get an API key? For that, we create something called a secret in Kubernetes.
Here we are specifying that the credential type is key authentication and the key is my-super-secret. Of course, this is not the most secure way of doing it, because it's right there in plain text, but let's go ahead and create the secret. So we created a Kubernetes secret, which is encrypted and stored in the etcd database, and then we have a consumer in Kong. We're creating a consumer, harry, and the credential it has is harry-apikey. This harry-apikey is essentially a reference to the Kubernetes secret.
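The secret-plus-consumer pair from the demo can be sketched like this (the key value is the one spoken in the talk; the resource names are assumptions):

```yaml
# The credential lives in a Kubernetes Secret, encrypted at rest in etcd.
apiVersion: v1
kind: Secret
metadata:
  name: harry-apikey
stringData:
  kongCredType: key-auth   # tells the controller which credential type this is
  key: my-super-secret     # the API key itself
---
# The consumer references the secret by name.
apiVersion: configuration.konghq.com/v1
kind: KongConsumer
metadata:
  name: harry
username: harry
credentials:
  - harry-apikey
```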
Alright, so let's go ahead and create that secret, and now let's use that API key to authenticate against the protected API. As you can see, now we are getting a 200 response. If I use the wrong API key, Kong will return a 401 Unauthorized. This is a key authentication example, but you could do OpenID Connect as well. Alright, so we have differentiated ourselves so that we have a free endpoint and a paid endpoint.
What else could we do? Let's do rate limiting. Rate limiting is something that almost everybody uses; it's like the basic defense mechanism, so that somebody cannot simply DoS you. It's not foolproof, but it's a basic protection. So we have created a rate-limiting plugin resource for the free tier, and here we are saying that anybody who accesses the service from the same IP gets a limited number of requests.
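Such a free-tier rate limit can be sketched as a KongPlugin like this (the name and the exact limit are illustrative; the talk later mentions ten requests for most users):

```yaml
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: httpbin-free-rl   # illustrative name
plugin: rate-limiting
config:
  minute: 10      # allowed requests per minute per client
  limit_by: ip    # count requests per source IP, as described in the talk
  policy: local   # keep counters in memory on each Kong node
```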
We're authenticating the paid endpoint, so any request that comes in first needs to be authenticated, and we also ask Kong to execute another plugin; you can execute any number of plugins that you want. So we have authentication enabled and we have rate limiting enabled. Now, when I make this request, as you can see, I have nine requests remaining.
Excuse me. So we have a different rate limit for the free endpoint and a different rate limit for our paid endpoint. This is awesome: we took a service, deployed it into Kubernetes, and without writing any code, just configuration, we're exploiting the power of Kong by using authentication and rate limiting. We are also logging the gRPC traffic, and the gRPC traffic is still going through, so you're proxying gRPC and HTTP, all the traffic, just using a single service.
Now, as a bonus, let's say we want a gold tier. We have some special customers who are paying us a lot more money, and we want them to have a higher rate limit, so here I'm creating a gold-tier plugin and giving them 100 requests per minute. Most of our customers get ten, but our special user will get a hundred requests. I'm going to create another authentication credential, so here we have an API key called user-one-key.
So let's go ahead and create that Kubernetes secret, and correspondingly we will create a consumer as well. On the consumer resource, this time, we are adding the plugin httpbin-gold. Now we will edit the Ingress specification of the paid tier and ask Kong to run yet another plugin.
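The gold tier described above can be sketched as a consumer-scoped rate-limiting plugin (the limit_by and policy values are assumptions; the plugin name httpbin-gold and the 100-per-minute limit are from the talk):

```yaml
# Higher limit for the gold tier.
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: httpbin-gold
plugin: rate-limiting
config:
  minute: 100
  limit_by: consumer   # scope the counter to the authenticated consumer
  policy: local
---
# Attaching the plugin on the consumer applies it only to that user.
apiVersion: configuration.konghq.com/v1
kind: KongConsumer
metadata:
  name: user-one
  annotations:
    konghq.com/plugins: httpbin-gold
username: user-one
credentials:
  - user-one-key
```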
Hitting this endpoint, I get five requests per minute, so you can have different kinds of rate limiting, and you can also impose other policies; you may want authorization as well, where somebody can access a service only based on a given authorization or RBAC resource, and that is possible too. So we've got the gRPC traffic, and we have got logging and monitoring in place.
There are a whole lot of other features out there. We do a lot of load balancing schemes, so Kong can act as a load balancer. Essentially you need a single cloud load balancer for Kong, and you do not need any other load balancer for your services. You expose a single load balancer, all traffic goes through it, and you can impose all kinds of policies at one single point of ingress. You can do health checking, and you can route based on different protocols as well.
So that's the basics of routing and load balancing; that's the basic essence of Ingress, and we extend Ingress using plugins. We saw authentication and rate limiting; we can do caching, and we can do request transformations, so you can translate from a v1 to a v2 API. You can also use Prometheus, Datadog, or any other tool you want for metrics and analysis of the APIs themselves, and you can do cert management and external DNS.
Alright, that's all that I have for you today. A few important resources: if you want, take a picture of this slide. The first resource is that you can install Kong with just one click on your Kubernetes cluster; it could be Minikube, kind, any cloud provider, or a bare-metal cluster. The second is a very important link, the Kong labs site for Kubernetes.
This is essentially a hands-on environment: you can practice how to use Kong on Kubernetes in your browser itself. It's a custom-built environment where we have steps laid out, and we also have a Kubernetes cluster running in the cloud for you. You can just use that to get a feel for how to use Kong.
The demo is there as well; we pieced together this demo based on all the exercises that are available at the Kong labs Kubernetes site. And if you have any questions, drop into the Kong channel on the Kubernetes Slack server; all of our maintainers are already active there, so if you have any questions, you can ask us there.
...versus the ingress controller. So imagine you have bill.example.com and other hosts under example.com, and you're hosting all these services inside your Kubernetes cluster, and you want to route to different services based on the host, based on the DNS name of those zones. That's what this host field is for. It matches the request's Host header; that's what it's used for, yeah.
Good questions, yeah. Kong credentials are something that existed before etcd encrypted its secrets. We do not have a ready-made script for you to do that migration, but if you open a GitHub issue or drop into the channel, we can help. Regarding how to write plugins and their configuration, the docs are on docs.konghq.com, so you can figure out which properties of the plugins are supported, and then you can just go from there.
You can have a different deployment strategy as well, but this is a super simple way of doing it, where you deploy n Kong pods to scale out horizontally, and each pod is configured by the controller that is running as a sidecar. So even if a machine dies or a pod gets stuck, that's fine; the other pods keep serving requests.