Description
In this talk, Claudio Acquaviva (Senior Solution Architect at Kong) and Ádám Sándor (Solutions Architect at Styra) discuss how to leverage Kong Konnect API Gateway and Styra Declarative Authorization Service (DAS) to build Open Policy Agent (OPA) authorization policies for GraphQL APIs. You will first learn how to implement a GraphQL API at Kong Gateway with OPA, followed by diving into Styra DAS to provide an enterprise-ready policy management platform to build, test, and deploy authorization policies.
A
Again, my name is Claudio Acquaviva. I'm a Solution Architect at Kong; as a matter of fact, I've been working at Kong for almost five years now. I'm a member of the alliances team, and as such I'm responsible for integrating Kong technologies with our main technology partners. In this context, Styra plays a very, very important role, so it's a pleasure talking to you all. Ádám, maybe you want to introduce yourself?
A
Good, so let's get started with the agenda. In the first part I'm going to talk a little bit about Kong technologies, more precisely Kong Konnect, which is our SaaS-based API management platform. Then Ádám is going to describe a little bit more what Styra brings to the table when it comes to the authorization topic, and then, finally, we're going to run the demo session, exploring the integration points between both technologies. And, of course, the usual Q&A at the end.
So let's get started. To begin with, it's very important to consider that companies in general are on a journey toward decentralization, and we can analyze that journey from two main perspectives. The first one is the architecture patterns companies are going to use: they've been using the famous monolith pattern for a long time, but now, for several reasons, they want to explore other patterns, including microservices.
A
What I consider the next evolution of a microservices project is serverless and, as a matter of fact, any emerging pattern: not just serverless, but event streaming patterns as well, and GraphQL, which, by the way, is our topic for today's session. That's the first perspective you should consider. The second one is the platforms: what are the platforms customers are going to use to run their new applications?
A
These platforms should support not just the traditional, historical on-premise deployments, including Linux-based ones, but also VMs and container-based runtimes, including Docker and Kubernetes, all flavors of Kubernetes. More than that, they should support not just on-premise platforms but also cloud platforms. As a matter of fact, our customers are looking for hybrid deployments: some components running on premise, other components running on cloud number one, other components running on cloud number two, and so on.
A
So with Konnect we are able, first of all, to expose the workloads you're going to have along the way (again, not just monolith-based workloads, but any other flavor of workload we're talking about here), but also to control this exposure with specific policies we typically implement at the API gateway layer. Konnect consists of a control plane, so let me talk a little bit about the main components of Konnect. Since it's a control plane, first of all it is responsible for the administration tasks.
A
So you, as an admin, go to the control plane to define a new API to expose this or that service you've got running behind it. And in the control plane you not just define your APIs: you define policies to control the service consumption, version your existing APIs, deprecate APIs you have already defined, and so on.
A
On the other hand, Konnect provides what we call the data planes, or runtimes, if you will. They are the actual components responsible for the API consumption. So you, as a consumer this time, should go to one of the data planes in order to send requests and therefore consume the APIs and the services sitting behind them.
A
The beauty of this CP/DP kind of topology is that you can deploy your data planes wherever and whenever you want, in order to support the topology and technology decisions you're going to make along the way for your modernization process, your move-to-cloud processes, and so on. In other words, the data planes support, in theory, all the runtimes existing in the marketplace: VMs, Linux-based, container-based, Docker, Kubernetes, all flavors of Kubernetes, and so on.
A
Talking a little bit more about the control plane: the control plane, as the main component from the administration perspective, provides multiple modules. The most important ones are responsible for the service catalog, the API catalog. So you, as an admin or API designer, go to the control plane, more precisely Service Hub, in order to create your APIs, version existing ones, and so on. Runtime Manager is another component, responsible for monitoring and taking care of the data planes themselves.
A
So you can see how many data planes you've got at a given moment, and you can spin up new runtimes along the way, and so on. The portal is specific for developers; it is responsible for documenting the API specs you've got, so the developers can not just check the documentation, but, with the self-service capabilities we provide, they can also issue their own credentials in order to consume the APIs.
A
Last but not least, the Analytics component is responsible, as you can imagine, for keeping track of the main KPIs in terms of API consumption and the main metrics in terms of latency, caching, performance, and so on. More than that, Konnect is responsible for providing multiple capabilities for the developers themselves. So, first of all, we provide an extensive list of plugins, each plugin responsible for specific policies to support all kinds of processing.
A
You
want
to
to
implement
the
API
detail
here,
including
security
policies,
including
transformation
policies,
including
throttling
policies
and
so
on.
Security
policies
as
well
as
I
said
before,
is
a
specific
chapter
of
it.
That's
where
the
specific
Opa
plugin
is
I'm,
going
to
talk
about
the
Europa
plug
in
a
minute
and
then,
of
course,
Adam's
going
to
explore
it
in
more
detail
way.
A
So we streamline consumption: again, we provide the portal, where the developers can go to subscribe, and also check the specs themselves and try them out. Last but not least, we also provide components to extend your existing CI/CD pipelines, in order to support not just the microservice life cycle, but the API life cycle as well.
A
Extensibility is another big capability we provide. As a matter of fact, the gateway, more precisely the data plane, is fully extensible: you can extend the data plane with many policies. This slide here is just outlining the main ones. So we provide plugins to support advanced rate limiting policies, plugins to support secret management infrastructures, including AWS, GCP, and so on, and transformation plugins.
A
You
know
that
you
process
transformation
to
request
transformation
before
routing
to
the
to
the
Upstream
Services
before
getting
back
to
the
consumers
and
so
on
plugin,
as
you
can
see
over
here
responsible
for
implementing
Access
Control
policies,
that's
the
one
we're
going
to
explore
to
detail
a
little
bit
in
our
demo
session,
another
plugins
as
important
as,
for
instance,
to
implement
oidc
based
authentication
I'm,
going
to
use
this
one
I'm
going
to
show
this
one
in
action:
Implement
no
IDC
base
authentication
process
as
well,
very,
very
important
and
so
on,
and
then
of
course,
as
I
said
before,
we
can
support
your
kafika
based
event
streaming
platforms
you
you
might
have
behind
the
gateway
and
so
on.
A
Generally speaking, this is, let's call it, the reference architecture, where we can see the main components of a typical gateway and service deployment. So let me just start with the components. First, you've got your GraphQL service; that's our topic for today's session. Again, you can have any other service behind the gateway, not just GraphQL: REST-based, gRPC-based, and so on.
A
More than that, we've got the last component over here: the identity provider, responsible for the authentication process. The gateway, meaning the data plane, is going to rely on the external identity provider in order to implement a specific OIDC-based authentication process. So again, this is step number two, the most important step when it comes to the authentication process: there's a specific handshake between the data plane and the identity provider in order to authenticate the consumer.
A
Last but not least, the consumer is going to send regular requests to the data plane, injecting the credentials, in our case client ID and client secret. For those familiar with the OIDC-based authentication process, there are many nuances, options, and grants out there; this time I'm just describing what we call the client credentials grant. So again, the consumer is responsible for injecting the credentials, client ID and client secret.
A
The data plane is going to take these credentials to authenticate the consumer, going to the identity provider during request time. The identity provider will be responsible for the authentication, and not just authentication, but also for issuing the tokens, the JWT tokens, back to the data plane. Then, if everything is fine, the data plane is finally going to route the request to the GraphQL service, and you're going to consume the service as well.
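The client credentials grant just described boils down to a single token request from the consumer. A minimal Python sketch of what that request looks like, assuming a hypothetical Keycloak realm and client; the base URL, realm, and credentials below are placeholders, not the demo's actual values:

```python
from urllib.parse import urlencode

def build_token_request(base_url: str, realm: str, client_id: str, client_secret: str):
    """Build the OIDC client credentials grant token request.

    Returns the Keycloak-style token endpoint URL and the form-encoded
    body that would be POSTed to obtain a JWT access token.
    """
    url = f"{base_url}/realms/{realm}/protocol/openid-connect/token"
    body = urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
    })
    return url, body

# Placeholder values: POSTing this body (Content-Type:
# application/x-www-form-urlencoded) would return the access token.
url, body = build_token_request("https://keycloak.example.com", "demo", "consumer-1", "s3cr3t")
```

The data plane then presents or validates the resulting JWT on each request, as described above.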
A
But, as you can see over here, this topology is not supporting the authorization process, which is our topic today; it's implementing only the authentication process. So this is the perfect moment to hand over to Ádám, to describe a little bit how Styra can help us implement this specific authorization process.
B
Yeah, thank you, Claudio. So today's scenario, where we want to show the value of Open Policy Agent and Styra DAS, is a more complex authorization scenario, where you would be making authorization decisions on something like the GraphQL query. Let's say certain users can query certain objects using the GraphQL API, but they can't query certain other objects. This kind of query is pretty difficult to handle at the level of, let's say, a gateway or other more generic components.
C
It could be built into the GraphQL service itself, of course, but in that case the authorization logic lives together with the business logic of the application, which would be fine with one application; but once you have 10 or 100 services, it is a very good practice to pull that authorization out and have a specialized component that can do it while interacting with the gateway, and basically shield any of your GraphQL services from unauthorized access. How this works in practice is that we have a pair, the Open Policy Agent and Styra DAS, where the Open Policy Agent provides policy decisions: Kong Gateway will reach out to the Open Policy Agent to ask whether a certain request should be authorized or not.
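Concretely, that call from the gateway to OPA is an HTTP POST to OPA's Data API, carrying the request context inside an `input` document. A minimal sketch of what such an exchange looks like; the policy path and field layout here are illustrative, not the exact shape the Kong OPA plugin sends:

```python
import json

def build_opa_query(method: str, path: str, headers: dict):
    """Wrap the request context in OPA's {"input": ...} envelope.

    OPA replies with {"result": ...}, e.g. {"result": {"allow": true}},
    which the enforcement point uses to pass or reject the request.
    """
    endpoint = "http://localhost:8181/v1/data/kong/allow"  # illustrative policy path
    payload = {"input": {"request": {"method": method, "path": path, "headers": headers}}}
    return endpoint, json.dumps(payload).encode()

endpoint, body = build_opa_query("POST", "/graphql-route", {"authorization": "Bearer <jwt>"})
```

The key design point is that OPA only sees data and returns a decision; the gateway stays responsible for actually enforcing it.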
C
A few words about the Open Policy Agent: it's a general-purpose policy engine, which makes it very useful in many different scenarios, not just the one we are describing today. It's a graduated CNCF project; it's been developed by Styra and mostly maintained by Styra to this day, but it is a fully open source CNCF project. Its purpose is to let you use one piece of technology in different use cases, in different layers of the stack, to enforce authorization.
C
In our case, as I already mentioned, it decouples the authorization policy from the application logic, and to write this policy we will use the Rego language, which is the language of the Open Policy Agent. It's a declarative language, which is very important: authorization can be implemented using old-school, normal procedural programming languages, however those are actually too powerful and allow too many side effects and too many different ways of implementing the same thing.
C
Rego, in contrast, was designed to express policy in a very concise and simple way. Can you go to the next slide?
C
So here, on the left side, we have some Rego policy, and on the right side we have the basic OPA architecture without the control plane, so Styra DAS is not shown here. Basically, OPA always interacts with one or more services that are requesting policy decisions from it. The service sends any relevant data to the Open Policy Agent, and it gets evaluated inside the Open Policy Agent.
C
The data here describes an HTTP request that the service has received, and here is an example of a policy that allows users to request their own salary as well as the salary of their direct subordinates, but nothing else. I'm not going to explain this in detail now, but you can see that the basic structure of the policy is that by default we don't allow a request, unless the input's method is GET and the path is finance/salary/ followed by some user.
C
Basically, finance/salary/ followed by the user who is actually making the request. Then, similarly, with the subordinates, we iterate over all the subordinates of the user, and if the requested user is actually a subordinate of this user, then the user can request their salary. You can see that, for iteration, we don't need to write a for loop: we just say "any inside this list".
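The salary policy just described can be paraphrased in ordinary code. Here is a Python sketch of the same decision logic (default deny; allow a GET of your own salary or a direct subordinate's), with an assumed in-memory subordinates map standing in for the data OPA would hold:

```python
def allow(input_doc: dict, subordinates: dict) -> bool:
    """Default-deny: permit GET /finance/salary/<user> only when <user>
    is the requester themselves or one of their direct subordinates."""
    if input_doc["method"] != "GET":
        return False
    path = input_doc["path"]  # e.g. ["finance", "salary", "bob"]
    if path[:2] != ["finance", "salary"] or len(path) != 3:
        return False
    user, target = input_doc["user"], path[2]
    # Own salary, or "some subordinate equals the target" -- the iteration
    # Rego expresses declaratively, without an explicit for loop.
    return target == user or target in subordinates.get(user, [])

subs = {"alice": ["bob"]}  # assumed sample data
```

In Rego the same logic is shorter precisely because the iteration and the default deny are built into the language's evaluation model.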
C
And here is the point where Styra DAS comes into the picture, as the control plane for our Open Policy Agents. OPA can handle many different use cases; I'll not detail them here, but you can see that from Kubernetes admission control through Terraform to traffic authorization on gateways, OPA can do it all. And it requires, well...
C
It's not an absolute requirement, but at a certain scale a control plane is necessary to manage all the OPA deployments and all the policies that they need. So Styra DAS provides a policy authoring environment; a testing and impact analysis environment, so you can safely deploy policies; a policy distribution mechanism, so it will get the right policy to the right Open Policy Agent; and an audit logging facility.
C
So the auditors can be made happy about your authorization solution. And that was, I think, everything I wanted to say on a high level; we will see how this really works once we get to the demo.
A
Yeah, so again, now it's demo time. We tried to keep things a little bit apart: first of all, to showcase the Konnect gateway, by which I mean the Konnect control plane and data plane, protecting the GraphQL service with some policies, and then we're going to include the access control policy later on. So here's the topology of our demo environment. First I'm going to show my identity provider: we're using Keycloak; it could be any one, and I'm using Keycloak just to make my life easier.
A
But again, you can use any OIDC-based identity provider, including Cognito, Okta, Azure Active Directory; any OIDC-based identity provider is fully supported by our specific plugin. Keycloak, as such, is responsible for authenticating the consumers and issuing the tokens, so we can move on and apply the actual access control policies based on these JWT tokens. At the same time, we're going to use the rickandmortyapi.com website.
A
That's where the GraphQL service itself is running right now; it's a public service. Then we're going to add the gateway, and not just the gateway but also the OPA and Keycloak layers, on top of the service, in order to control the consumption of the service itself. So I'm going to hit the service without the gateway first, just to show you how the service works. And then, last but not least, here is my Konnect
control plane, again running as a SaaS solution in our cloud. You can see some very important options here. First of all, the Runtime Manager: as I said before, that's the module responsible for monitoring and controlling the data planes I've got, and I've got only one running. Just for information, my data plane is running in a Kubernetes cluster; I'm using AWS, but it could be any other cloud. You can also spin up new runtime instances from here.
A
The second component that is very important to show is what we call the Kong Gateway service. As you can see, I've got a service already defined, and it's an abstraction of the endpoint provided by the rickandmortyapi.com website.
A
As a matter of fact, I'm going to hit the website using the HTTPS protocol, as you see, on the default port. The service object doesn't expose anything by itself; as a matter of fact, you just define a reference to the existing endpoint you already have in place. So in our case, again, we're going to consume the rickandmortyapi.com website.
A
On top of the service, we've got what we call the Kong route. That's how we're going to expose the service with the data plane. So on top of the GraphQL service we defined before, we have defined this route, exposing the service with this specific path, the GraphQL route.
A
Yes, that's the path we're going to use in order to send requests to the gateway, and therefore to consume the GraphQL service sitting behind it. And if we check the route, you see another important notion Kong provides: that's how you define the policies.
A
So here's the plugins tab available for you, and, as you can see, my route has two plugins already, as a matter of fact, only defined, not exactly both enabled. I've got the OpenID Connect plugin already enabled for my route, meaning that the route is going to be protected by this OIDC-based authentication process; and, as you can see, I have already configured the OPA plugin for this same route, but this time it's disabled. Again, just to keep things apart, I'm going to show the authentication process first, and then we're going to include authorization along the way; in other words, I'm going to enable the OPA plugin later on, and then Ádám is going to show you the specific access control policies integrated with the data plane.
A
More than that, the last thing is consumers. I've got two API consumers over here. These consumers represent Keycloak consumers I have created before, and they are already defined from the data plane perspective.
A
So let's try to consume the service as an API then. In order to consume it, I'm going to use another product we provide, called Insomnia. Insomnia is our API spec editor, and with it you can not just define and create your API specs, but also send requests to your endpoints, including the data plane itself. So here are the Rick and Morty endpoints; I'm not sending requests to the data plane this time, just consuming the public GraphQL service we have available.
A
So again, here's the query itself. You can define query variables, for instance, in this case returning only the episodes with the word "dark" in them. You can, for instance, change it to anything else, and then, of course, you're going to get a specific response to it, and so on. Nothing to do with Kong; again, I'm consuming the GraphQL service directly. So, as you can see, there's nothing controlling this exposure.
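Hitting the GraphQL service directly, as shown, is just an HTTP POST carrying a query plus a variables object. A sketch of such a request body for the episode filter; the query shape is modeled on the public rickandmortyapi.com schema, so treat the exact field names as illustrative:

```python
import json

QUERY = """
query Episodes($name: String) {
  episodes(filter: { name: $name }) {
    results { name episode }
  }
}
"""

def build_graphql_body(name_filter: str) -> bytes:
    """Build the JSON body POSTed to a GraphQL endpoint: the query string
    plus a variables object (here, the word the episode name must contain)."""
    return json.dumps({"query": QUERY, "variables": {"name": name_filter}}).encode()

body = build_graphql_body("dark")
```

Note that, from the client's point of view, nothing changes when the gateway is put in front: only the host and path of the POST move to the data plane.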
A
So that's what we're going to do right here. This time, as you can see over here, I'm sending a request to this endpoint; by the way, this is an AWS elastic load balancer sitting in front of the data plane deployment in my Kubernetes cluster. So I'm sending the request this time to the data plane itself, and I'm going to get exactly the same results, but with very, very important differences now.
A
So, first of all, in order to consume this API I'm supposed to present my credentials. I'm able to consume it because I have presented the right credentials. What happens if I present the wrong credentials?
A
It's not going to allow me to consume the API. That's the main purpose of the OIDC-based authentication process: to prevent the service from being consumed by a non-recognized consumer. So again, I am supposed to present my credentials, and the client ID is one of them.
A
So, if I keep on sending requests to the data plane, eventually I'm going to get this 429 error code, meaning that I have reached the rate limiting policy. So we have two policies in place: the OIDC-based authentication process and the rate limiting policy. But again, nobody's taking care of the authorization policy this time. That's why it's so important to include OPA in our topology, in our architecture. And then, before handing over to Ádám, to show us OPA and Styra playing together with the Kong Gateway data plane,
A
I have to enable, as I said before, the OPA plugin, and then, besides those first two policies, the gateway is going to work with OPA in order to implement the authorization policy along with the initial ones. And then I'm going to hand over to Ádám one more time to showcase the access control policy.
C
So now OPA is enabled; the OPA plugin in the gateway is enabled. That means the gateway will now be making calls to OPA, which, by the way, is running next to the gateway on the same Kubernetes cluster, while the control plane is a SaaS running elsewhere. And now we can see that the same request is unauthorized, which is great; that means things are working as they should. And consumer one is authorized.
C
So why was consumer 2 rejected? It's not possible to communicate the rejection reason in a body, but we can set a header, and you can see here the denial reason is: user Kong ID 2 can't view a character's location and can't view any episodes. So we can see that the user is requesting locations of characters and also episodes. So let's see if we can change this.
C
So let's remove the location. Yeah, now our only error is: user Kong ID 2 can't view episodes. So let's remove the episodes too, and now the authorization has succeeded. The variable is still there, so...
B
Now this episode filter is just not used.
C
But basically the authorization already succeeded here, so you can see that we got back the limited results that Kong user 2 has access to, while user one has access to more. Let's see how this is actually implemented. So here we are in Styra DAS; this is the control plane for OPA.
C
On the left side we can already see the policy here, but let's first go to this object here that's called a system. The system is called Kong Gateway, and this system controls the policies for a single Kong Gateway, meaning that we could deploy the same policy to multiple agents, but their purpose would be to serve the same policy decisions.
C
So the only reason why we would connect multiple agents to this same system would be to have high availability, because they would be giving the same answers to the same questions. We can see here some graphs of how many allowed and denied decisions, or errors, were made by the agent, and we can actually view the decisions that were made. These are the decisions that the agent was just making. For example, we can check:
C
why was this decision denied here? We get a big JSON with all the data in there, and this input part is the part that we are getting from the gateway. You can see that this is not exactly an HTTP request, but it's all the data about the HTTP request, so there are all the headers.
C
Here is the authorization token, with the GraphQL query encoded inside it, and here is the result that the Open Policy Agent produced, which was then consumed by the Kong Gateway and used to reject the request. So you can see here that we are setting this header, the can't-view-episodes error; there's the decision type, the 403 error code, etc. Now, to see how this error was really produced, we have to dive into the Rego code that's defined under our system.
C
So, first of all, we are defining an allow rule that says that by default we are not allowing requests. However, we actually flip it around: we collect reasons to deny the request, and if we don't have any deny reasons, no denial rules firing, then we will allow it. So this actually turns the decision into a default allow, because if there is no reason to deny, we just allow it. And what would be our reasons to deny here?
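This "flip it around" structure (collect deny reasons; allow only when the list is empty) is easy to mirror outside Rego. A Python sketch with two illustrative deny rules matching the demo's restrictions; the privileged client ID below is an assumed placeholder, not the demo's literal value:

```python
def deny_reasons(client_id: str, wants_location: bool, wants_episodes: bool) -> list:
    """Each deny rule that fires contributes one message to the list."""
    reasons = []
    if wants_location and client_id != "kong_id":   # assumed privileged client ID
        reasons.append(f"user {client_id} can't view a character's location")
    if wants_episodes and client_id != "kong_id":
        reasons.append(f"user {client_id} can't view any episodes")
    return reasons

def allow(client_id: str, wants_location: bool, wants_episodes: bool) -> bool:
    # Default allow *only because* no deny rule fired.
    return len(deny_reasons(client_id, wants_location, wants_episodes)) == 0
```

The nice property is that the denial messages double as the explanation returned to the caller, as seen in the response header earlier.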
C
We have two deny reasons defined. This is again a rule that will produce a list of deny reasons: if this rule evaluates to true, it will add this message into the list, and this other rule will also add another message, and potentially there can be any number of messages, depending on how many of these rules actually fire. And we can see here that we want to deny the request if the character's location has been requested (this is another rule that defines it; we'll see it in a moment) and the client ID in the token, which we parse in a helper function below, is not the privileged Kong ID. So basically we are defining that user one can do anything, while any other user,
C
like the Kong ID 2 user, will be denied, and the reason will be this little print function putting together the error message; with this defined, the error message will go into the reasons why it was denied. So that was the character's location. If episodes were requested, then a similar rule will fire to add the episodes error message in there, and if the list of error messages is not of length zero, then allow will be false and we are rejecting the request. And then we have all kinds of helper functions.
C
Here we have the GraphQL helper function that actually parses the query and produces the query syntax tree, and then, based on that syntax tree, we are massaging it a bit, which I will not go into in detail, because the GraphQL syntax tree is pretty complex.
C
So with these helper functions we actually make it a bit simpler, and then we can use this path to look for the location, where we basically say that in the characters data structure, which is a list, we go over each element, and any of the elements will have selection sets inside it, which will have more selection sets, and one of those selection sets' alias is going to be "location". It is not pretty.
C
With the helper you can just define it with a one-liner and say, over here, look anywhere for a field whose alias would be "location". And then, basically, if there are any episodes found with this transformation, then "episodes requested" means that the count of episodes is larger than zero, while the location rule means that we are looking for an alias called "location" inside that data structure. By the way, I can just quickly show how this AST actually looks: it's this JSON structure, so we are digging through this.
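The "digging through" of the parsed query can be sketched as a recursive walk over the JSON AST. The structure below is a heavily simplified stand-in for the real GraphQL AST (which nests selection sets much more deeply, with separate Alias and Name nodes); it just shows the idea of searching anywhere for a selected field like "location":

```python
def find_field(node, name: str) -> bool:
    """Recursively search a JSON-like GraphQL AST for a selected field
    whose Name (or Alias) equals `name`."""
    if isinstance(node, dict):
        if node.get("Name") == name or node.get("Alias") == name:
            return True
        return any(find_field(value, name) for value in node.values())
    if isinstance(node, list):
        return any(find_field(item, name) for item in node)
    return False

# Simplified stand-in for a parsed query's nested selection sets.
ast = {"SelectionSet": [
    {"Name": "characters", "SelectionSet": [
        {"Name": "results", "SelectionSet": [{"Name": "location"}]}
    ]}
]}
```

In Rego the equivalent search is written declaratively over the AST produced by the GraphQL built-ins, as the helper functions in the demo do.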
C
So we are basically transforming this and digging through it. There is documentation on the Open Policy Agent website on how to work with GraphQL, so you can check that out. Some other stuff in here: we are parsing the bearer token out of the Authorization header and then decoding the token.
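Decoding (not verifying) the bearer token, as the policy does, amounts to base64url-decoding the JWT's payload segment. A self-contained Python sketch using a toy token built on the spot; the claim names are illustrative, but a real policy would read things like the client ID and realm access from exactly this payload:

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Return the JWT's claims WITHOUT signature verification --
    acceptable here only because the gateway has already verified the token."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Build a toy unsigned token just to exercise the decoder.
claims = {"clientId": "kong_id_2", "realm_access": {"roles": ["default-roles-demo"]}}
segment = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("=")
token = f"header.{segment}.signature"
```

Skipping verification is a deliberate division of labor, as Ádám notes next: the gateway authenticates, OPA only authorizes.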
C
We are not doing any token verification here, because we know that the gateway has already done that. We're also checking whether the user has the default realm access. Basically, anything out of the JSON Web Token, the JWT, we can use in our policies, some access rules and so on, and then build up our final decision on when we would deny or allow the request. And that is basically it.
C
So let's say I now want to allow the episodes to be viewed by user two as well. I'll just comment that rule out and publish my changes, and now, if I go back to sending some requests as consumer 2, reverting some of the changes here, consumer 2 should be able to ask for episodes. And there we go: we can see this has actually worked, and we can now see this decision in the decision logs, which are uploaded by the Open Policy Agent.
C
Okay, I'll just grab a previous denied decision, because I'm sure that one is with user two. Yes, and I can now see that this previously denied decision, where the user was asking for the episodes, would now be allowed with the new rules, now that I commented that rule out. If I uncommented it again, then we would get the same
allow: false and the same deny reason as we got originally. And we can even rerun all the previous decisions against our changes to the rules and see what kind of impact our policy changes would have: we can see that we would only have one decision that changed, the one that I made while the rule was uncommented.
C
If I comment it out now and run the validation again, then we should have a larger impact, because we're basically now allowing a bunch of decisions that had not been allowed before. And that did not work out, okay; for some reason it's just replaying two decisions in this section, and I don't know why right now. We are out of time, so I'm not going to dig into that, but the impact analysis is a really powerful feature, because with it you can basically test in production: you can run your new rules,
you can run these previous decisions through your new rules and see if they would produce different results with the updated rules. All right, so that was policy management using Styra DAS, OPA, and Kong Gateway. I'll hand it back to Claudio for any final words, or we just go straight to the Q&A, I'm not sure.
A
Yeah, I think we can go to the Q&A session. I'd be more than happy to discuss a little bit more of these exciting use cases.
A
The same thing from the gateway perspective: we've been focusing on GraphQL, but, as I said before, you can have any flavor of service sitting behind the gateway, not only GraphQL but also REST-based, gRPC-based, and so on.
C
Someone is asking: are there any additional considerations when using GraphQL federation? Honestly, from the OPA perspective I can't really say, because I'm not really an expert on GraphQL, so I just don't know how GraphQL federation works. I don't know if that's a concern from an OPA perspective, or whether that's just something that the gateway needs to handle.
A
Yeah, from our perspective, GraphQL federation would be sitting behind the gateway. As I said a little bit before, for specific traffic we have some use cases controlling federation with a service mesh this time, not exactly the API gateway. But that's a totally different story; it would deserve a specific webinar to go through it.
A
A
So regardless of whether you've got a GraphQL service, a gRPC service, serverless, or anything else, you're going to normalize the access to these services in a simple way, and that access, of course, will be managed by the gateway. So that's another reason: not just to expose and to control the exposure, but actually to normalize, from the consumer perspective, all the multiple different services you might have along the way.
B
Yeah, but I think, as I understand it, the question is how to consume a GraphQL API.
C
A
There's another good reason to go with the gateway. Not sure if you noticed, but it's totally transparent, non-intrusive, so you're not supposed to change your services just because you're going to add the gateway sitting in front of them.
D
C
Yeah, so that's an interesting consequence of offloading authorization decisions and other things to the gateway and OPA: the request will not show up in your application logs, because if a request is denied, it will never actually reach your application.
C
So you would have to do some kind of request correlation, some kind of tracing, and push all the logs to a single place. Which can be done: you can grab the OPA logs, grab the gateway logs, and then push them to a single place, and then you would need to correlate them. Basically, if you put some correlation ID into a header or something, it will make it to all the places.
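The correlation idea mentioned above can be sketched in a few lines of Python: attach a correlation ID header at the edge so every component logs it, then group log lines by that ID. The header name and the log record shape here are assumptions for illustration, not Kong's or OPA's actual logging format.

```python
# Sketch of request correlation across gateway, OPA, and application logs.
import uuid

CORRELATION_HEADER = "X-Correlation-ID"

def add_correlation_id(headers):
    """Inject a correlation ID at the gateway if the client didn't send one."""
    headers.setdefault(CORRELATION_HEADER, str(uuid.uuid4()))
    return headers

def correlate(log_lines):
    """Group log lines from different components by correlation ID."""
    grouped = {}
    for line in log_lines:
        grouped.setdefault(line["correlation_id"], []).append(line)
    return grouped

# The gateway stamps the request once; downstream components reuse the ID.
headers = add_correlation_id({"Host": "api.example.com"})
cid = headers[CORRELATION_HEADER]

logs = [
    {"source": "gateway", "correlation_id": cid, "msg": "request received"},
    {"source": "opa", "correlation_id": cid, "msg": "allow=false"},
]
print(correlate(logs)[cid])
```

With the logs shipped to one backend, a single lookup by correlation ID then shows the gateway entry and the OPA deny side by side, even though the application itself never saw the request.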
A
Yeah, there's another comment here regarding the gateway itself. Again, anything that happens at the gateway layer can be externalized to your log processing infrastructure.
A
So again, all the errors, all the requests that have been processed by the gateway can be externalized, and therefore you can use your existing log processing infrastructure, whatever it is: it could be Elastic, it could be Splunk, it could be Prometheus. All these tools will be able to check all the information that's coming in.
A
All the request results coming from the gateway: that's for the gateway. But as Adam said before, that's another thing: we don't introspect the application's logs, so we're just going to externalize whatever has happened at the gateway layer to your log processing or real-time monitoring infrastructure. Then there's a question about how to make onboarding of new users use the same platform. I'm not sure I understand the question, but my understanding is, yeah:
A
There would be one thing to keep in mind: how to make it easy from the consumer perspective, in order to get a seamless transition to a gateway infrastructure and then whatever you have behind the gateway. So I would say it wouldn't really be related to the technology itself, but to how you provide documentation and information for your consumers, so they can develop their own code, and so on.
C
Yeah, and there's an interesting question from an anonymous attendee: in case we have thousands of client IDs and each client will have a different rule, what is the best way to handle it with Styra?
C
Well, there is one part of that answer: I didn't make it clear during the demo that we were checking individual user IDs in the rules, but that's just because it's a demo. Normally you would be checking their roles or groups or some other information you would be aggregating, or attributes, as in attribute-based access control.
C
However, you also mentioned that you might have a different rule for each and every user. I don't think that's a very realistic scenario, but basically, if you would have lots of rules, in the end it's just code. So if you can somehow abstract over them, then you will have less code. If every single one of them really needs a different rule, then you will have a lot of rules in your rules file, or you can have several rules files and split it up.
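The abstraction suggested above can be sketched like this: map the thousands of client IDs onto a small set of roles, so that one generic rule replaces a per-client rule for each ID. The data and names are purely illustrative; in practice this would be written as a Rego policy with the role assignments pushed in as data.

```python
# Sketch of abstracting per-client rules into role-based rules.

# A handful of roles, each with its allowed actions.
ROLE_PERMISSIONS = {
    "admin": {"read", "write", "delete"},
    "editor": {"read", "write"},
    "viewer": {"read"},
}

# Thousands of client IDs collapse into simple role assignments (data, not code).
CLIENT_ROLES = {
    "client-001": "admin",
    "client-002": "viewer",
}

def allow(client_id, action):
    """One generic rule instead of one hand-written rule per client ID."""
    role = CLIENT_ROLES.get(client_id)
    return action in ROLE_PERMISSIONS.get(role, set())

print(allow("client-001", "delete"))  # True
print(allow("client-002", "write"))   # False
```

The key design point is that the per-client variation moves into data (the role assignment), while the rule count stays constant no matter how many clients you onboard.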
A
Irwin, that is a beautiful question. What I can say from the Kong perspective is that we're going to support whatever the marketplace is going to use along the way. So again, it's hard to say at this time. Any comments on your side?
C
That is not true. OPA did not easily support GraphQL, because of the conversion needed from GraphQL to an AST, which is JSON-based. Actually, that's a pretty recent addition to OPA, being able to parse GraphQL, but once that's there, yeah, it works.
D
There will also be links in that email. If you have any further questions for Claudio or Adam, we can help get those answered for you. So with that, thank you all for joining us today, and I hope we see you at a future Tech Talk. Have a great day.