Description
Lin and Christian will be joined by many guests from the service mesh community to share their passion and belief in service mesh as well as Istio. You will also hear from a few Gloo users and learn about our technical roadmap for Gloo, followed by some interesting demos.
So, my name is Christian Posta and I am the field CTO here at Solo. I've been involved in the service mesh and Envoy ecosystems and communities for the last four years. I've also worked with organizations large and small across the world to modernize their application architecture, move to microservices, and adopt operational best practices for their cloud-based infrastructure.
The cloud scales applications up and down on decentralized cloud networks, and we know that the network is not reliable, not secure, and gives no timing guarantees, and so on. Building these cloud applications, where there is much more network interaction, increases the chances that something will fail. And it will. So what do we do about it?
We covered some of our Gloo Edge and Gloo Mesh products yesterday and some of the things that we're doing around them. We also announced Gloo Cloud yesterday, which is the first and only Istio as a service: a fully managed Istio for any cloud that you want to deploy your workloads on. You can go back and see the announcement from yesterday.
Today Joe Kelly and I will have a session on the Gloo Mesh roadmap, which will include more detail about Gloo Cloud. One thing you should take away from the Gloo Cloud announcement is that you can get started very easily solving these problems on any cloud using Istio. Everything is managed for you: all you have to do is install the data planes next to your applications, and the control planes are all managed.
I want to point out, if you haven't noticed already, the common thread throughout our announcements, what we're doing here at Solo, what we're doing with our customers, and what you're seeing in our product line: Istio. Solo is fully committed to Istio. We see it as the dominant service mesh; it's the one that's most deployed into production.
A
That's
the
most
mature
solves
the
edge
cases
that
a
lot
of
enterprises
have
and
we're
investing
more
and
more
into
istio
and
you're,
going
to
see
that
and
continue
to
see
that
trend,
including
in
some
of
our
announcements
today
before
we
get
to
that.
However,
I
want
to
introduce
lynn,
sun
who
joined
solo
about
a
month
ago,
she's
from
the
istio
community
on
the
istio
technical
oversight
committee
and
generally
very
well
known
in
the
in
the
istio
community.
My second encounter was at KubeCon Europe in Barcelona. Remember those good days when we met face to face? I remember Idit presenting how to debug microservices in an Istio service mesh using Squash, the night before the conference at an Istio meetup. Throughout the conference I talked to Idit and Scott about SuperGloo and SMI.
I generated this data from the raw data of the 1000-plus survey responses: 36 percent of users are running service mesh in production, and among those users Istio is clearly leading the way. We just finished a hugely successful IstioCon. I was the program co-chair, and incredibly honored to be program co-chair for the conference.
We had a lot of big household names join us on stage at IstioCon to talk about their journeys of adopting Istio in production. We also had a very rich set of partners, including Solo.io. Among my five key takeaways from IstioCon: 2020 was really the year of innovation for the Istio project. The control plane has been simplified from multiple components into one single control plane component called istiod.
Istio and Kubernetes are coming closer together. Istio has already started to adopt the Kubernetes Gateway and Route APIs, and we intend to turn that API into the stable networking API of Istio. The Kubernetes community has also produced the multi-cluster services API, and the Istio community is actively implementing those APIs.
Enterprises have a lot of polyglot or legacy workloads which are extraordinarily expensive to maintain and even harder to get new capabilities out of. There are a lot of new regulatory challenges and new requirements around management, observability, and auditing, and so there are all these best practices that you want applications to be able to keep up with, but you can't do that for the application fleet that has already been built.
There's a basic set of things that people want to be able to do around telemetry and traffic management, so what was holding the industry back from doing it? There were performance concerns, a lack of standardization, a lack of momentum, and a lack of investment. I was also pretty motivated by that. Those two things combined were what made me want to go work in service management: I thought we could do something that would move the industry forward in a useful and purposeful way.
I believe the service mesh design pattern solves these common problems really well. Out-of-process sidecar proxies allow you to build an application networking layer that has all the required plumbing in place to make service-to-service communication secure, reliable, and observable, so that service owners can focus on business logic.
I'm currently spearheading the project of landing an Istio-based service mesh at Airbnb. We are in the middle of a vast migration of our hundreds of microservices and tens of thousands of pods. Huge shout-out to Lin Sun for making it so much easier for us to deploy the Istio control plane outside of the mesh.
I believe service meshes have reached the stability to be used in enterprise projects. Having a sidecar in the application used to be a source of friction for adopting a service mesh, for multiple reasons such as latency and resource waste, but the service mesh community has doubled down on performance, efficiency, and simplicity, which has made the service mesh a de facto component in most microservices-based applications.
When I started looking into scaling inter-service communication at Airbnb two years ago, service mesh seemed like a risky bet. Fast-forward two years, and I'm amazed by the progress and momentum the service mesh community has gained. I look forward to seeing service mesh become the go-to solution for inter-service communication on the cloud soon.
I think the future of service mesh is that, if you're thinking about adopting an enterprise-grade service mesh, you should maybe consider having it managed for you. There are plenty of good reasons to build a data center, to operate a Kubernetes cluster, and to manage your own service mesh, but I think the future is going to be letting somebody else handle that complexity for you, so you can get back to fixing bugs and shipping features for your users.
I think Solo.io is doing great work in the open source community. I like the work they are doing around Wasm modules and providing a hub for them. I have written Envoy filters using Lua, and let me tell you, it's not very easy. Wasm opens the door for me to use other languages or to leverage an existing module. They have also done great work integrating with Knative by providing the networking layer.
Companies can run ahead of open source projects; we take a long time to standardize on things. Istio launched WebAssembly support in Envoy, and Solo was ready with tooling and distribution options, not just for their customers but for any Istio user. We're excited to keep working with Solo on Istio extensibility, and we look forward to seeing their work adopted upstream.
Solo has incredible expertise and knowledge in the service mesh industry, with some of the brightest minds working there. I really love the work they do pushing the capabilities of service connectivity: how your applications are accessed from outside the cluster, how your community or developers discover and document APIs, the extension of service mesh using Wasm modules, and other fantastic open source tools that enable developer workflows with service mesh. Solo are absolutely killing it right now.
My company is building the next generation of IIoT platform for the industrial sector. We came to an open architecture using Kubernetes, and the need for an API manager and an API gateway was growing day after day, so we decided to run a PoC comparing three or four products on the market. Gloo took first place, winning especially for the following reasons: Kubernetes as a first-class citizen, and Envoy performance.
I'm a staff software engineer working on the foundation site reliability team. We operate many internal software services and data processing pipelines, all used internally by our scientists and lab automation engineers. We've been transitioning from large monoliths running on EC2 instances to services running in containers on Kubernetes and in Lambdas, and we've been running Gloo Edge as an internal API gateway for almost a year.
My experience with Solo and the community around the Solo products has been quite nice so far. What I really like about the product is that it's pretty easy to use, it's very intuitive, and it has great ideas in it. I really like some of the features, such as the access policies and the traffic policies, which allow you to abstract away some of the Istio-level policies.
There are a handful of features I'd like to highlight. Gloo Edge is able to integrate with AWS Lambdas, which allows us to invoke them from the Gloo gateway proxies. It also lets us integrate our own homegrown API key validator for API authentication. That was a huge factor in deciding to stick with Gloo, because we want our developers to focus on what they do best instead of fighting with API configurations, in contrast to the AWS API Gateway, which ties every aspect and configuration of the gateway together.
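To make that concrete, here is a minimal sketch, not taken from the talk, of how a Gloo Edge Upstream and route for Lambda invocation are typically written. The names (aws-lambdas, aws-creds, my-function) and the /lambda prefix are hypothetical placeholders.

```yaml
# Sketch: a Gloo Edge Upstream that discovers AWS Lambda functions using
# credentials stored in a Kubernetes secret.
apiVersion: gloo.solo.io/v1
kind: Upstream
metadata:
  name: aws-lambdas          # hypothetical name
  namespace: gloo-system
spec:
  aws:
    region: us-east-1
    secretRef:
      name: aws-creds        # hypothetical secret with AWS credentials
      namespace: gloo-system
---
# Sketch: a VirtualService route that invokes one of the discovered
# functions; "my-function" is a placeholder logical name.
apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
  name: lambda-routes        # hypothetical name
  namespace: gloo-system
spec:
  virtualHost:
    domains: ["*"]
    routes:
    - matchers:
      - prefix: /lambda
      routeAction:
        single:
          upstream:
            name: aws-lambdas
            namespace: gloo-system
          destinationSpec:
            aws:
              logicalName: my-function
```

With something like this applied, requests hitting /lambda on the gateway would be turned into invocations of the named function.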
Gloo Edge has allowed a small team of experts to automate, and to use GitOps workflows to configure, complex routing scenarios without requiring all our software engineers to be experts in our networks or in Kubernetes. It enables us to keep unified API contracts with our clients while we move the service implementations around in the background.
And the best part about Gloo Edge is that it's a Kubernetes-native application that comes with its own set of custom resources, or CRDs for short. This allows us to delegate routes to different app teams, so they can create and manage their own routes without relying on another team. We used Helm and created our own route template, and depending on their labels, the appropriate virtual service picks up the route, along the lines of the sketch below.
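As a sketch of that delegation pattern, and with assumed names rather than the team's actual Helm output, a Gloo Edge VirtualService can hand a path prefix off to any RouteTable that carries a matching label, which an app team then manages in its own namespace:

```yaml
# Sketch: the platform team owns the domain and delegates /app-a routes
# to whichever RouteTables carry the matching label.
apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
  name: public-api           # hypothetical name
  namespace: gloo-system
spec:
  virtualHost:
    domains: ["api.example.com"]   # placeholder domain
    routes:
    - matchers:
      - prefix: /app-a
      delegateAction:
        selector:
          namespaces: ["*"]
          labels:
            team: app-a      # pick up RouteTables labeled for this team
---
# Sketch: the app team manages its own routes in its own namespace.
apiVersion: gateway.solo.io/v1
kind: RouteTable
metadata:
  name: app-a-routes         # hypothetical name
  namespace: app-a
  labels:
    team: app-a
spec:
  routes:
  - matchers:
    - prefix: /app-a/v1
    routeAction:
      single:
        upstream:
          name: app-a-svc    # hypothetical upstream for the team's service
          namespace: gloo-system
```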
We also see folks deploying it alongside their service mesh. Now let's take a closer look at the deployment model of Gloo Edge, which is basically Envoy proxies listening for traffic at the ingress of a particular Kubernetes cluster, although Gloo Edge is not tied to Kubernetes and can run outside of it. As I mentioned yesterday, Gloo Edge can also be the edge gateway for multiple clusters or lines of business, and again be deployed in a decentralized way.
Today we can run Gloo Edge as an extension to Istio. You might run an Istio service mesh in your cluster, and what we can do is mount the Istio certificates into the Gloo Edge proxy, so that as traffic comes into the system, Gloo Edge routes to services within the service mesh and, using that certificate, establishes mutual TLS, takes on an Istio-based or SPIFFE-based identity, and participates as a citizen in the service mesh.
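For a flavor of what that integration looks like in configuration, here is a hedged sketch of a Gloo Edge Upstream for an in-mesh service whose certificates are fetched from an Istio SDS agent; the service name and the SDS values are assumptions that vary by version and setup.

```yaml
# Sketch: an Upstream for an in-mesh service, configured so the gateway
# proxy fetches Istio-issued certificates over SDS and establishes mutual
# TLS with the mesh.
apiVersion: gloo.solo.io/v1
kind: Upstream
metadata:
  name: productpage          # hypothetical in-mesh service
  namespace: gloo-system
spec:
  kube:
    serviceName: productpage
    serviceNamespace: default
    servicePort: 9080
  sslConfig:
    sds:
      targetUri: 127.0.0.1:8234                        # assumed local SDS address
      certificatesSecretName: istio_server_cert        # assumed secret name
      validationContextName: istio_validation_context  # assumed context name
      clusterName: gateway_proxy_sds                   # assumed SDS cluster
```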
Now, I want to caution that there are good reasons to have two separate control planes. But with Gloo Edge, and what we are announcing today as Gloo Edge 2.0, we can merge the control planes and have the Gloo Edge proxy be configured and managed by the Istio control plane. So now you have fewer control components, and you can control everything from a single source of truth.
There might be use cases where you want to keep those separate, as you do today, so the same model in which you deploy Gloo Edge today is available in 2.0. You can keep separate control planes where you want separate failure domains for different parts of the application networking infrastructure in your cluster: for example, you might want to separate control plane upgrades, and possible failures, for the service mesh from those for the ingress.
In essence, what we have with Gloo 2.0 is a more optimized deployment model when you are using a service mesh. If you're not using a service mesh, nothing really changes: you still have the same Gloo deployment model with a control plane for your Envoy proxy, and the same set of capabilities that you have today. The benefit of doing this is that we now share some of the effort of building an Envoy control plane with the Istio community.
If you want to deploy with a service mesh, you can do that in an optimized way, with just one control plane. If you want to split the control planes and split out the failure domains, you can do that. And if you're not using a service mesh at all, you just keep the same deployment model that you have today. So with that, I want to introduce Kevin and Shane. Kevin runs the Gloo engineering team, and Shane is an engineer on our product engineering team; they're going to go into more detail and show you Gloo Edge 2.0.
Hi everyone, we're excited to show you some of the features we've been working on. Let's take a quick look at two features in particular: multi-cluster, multi-mesh ingress and advanced external authentication. Before we jump into the demo, we just want to do a quick overview of the demo environment we'll be looking at. We have a multi-cluster, multi-mesh demo environment set up, where both clusters have Istio installed.
Great, so here's the environment we just described. You can see we've got cluster one running here on the left with the product page service, and cluster two running here on the right with the details service. Let's take a quick look at the YAML we're going to be applying; I don't want to get into too much depth here.
We've got a bigger talk on this later on, but take note that this is going to be applied to two clusters, so you'll see the ingress pop up on both clusters. Also notice that we're using the Kubernetes Gateway API spec for this, so if you use this API spec today on Istio, where it is supported, you'll be able to migrate to this very easily.
I just want to call out that we're exposing a listener on port 8080 on both of these new ingresses, and for the routing we're going to expose two routes: one to the product page service running in the management cluster and another to the details service running in the remote cluster. So we've got routing to services running in different clusters, even though we're only applying this configuration in one place.
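The demo's exact files aren't reproduced here, but a plain Kubernetes Gateway API configuration along these lines would declare the port 8080 listener and the two routes. The resource names are hypothetical, and how the remote-cluster details backend is actually referenced is specific to Gloo Edge 2.0, so the second backendRef is only a placeholder.

```yaml
# Sketch using the upstream Kubernetes Gateway API types (the apiVersion
# has changed over time; the demo likely used an earlier alpha revision).
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: demo-gateway         # hypothetical name
  namespace: default
spec:
  gatewayClassName: gloo     # assumed class name
  listeners:
  - name: http
    port: 8080               # the listener called out in the demo
    protocol: HTTP
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: bookinfo-routes      # hypothetical name
  namespace: default
spec:
  parentRefs:
  - name: demo-gateway
  rules:
  - matches:
    - path: { type: PathPrefix, value: /productpage }
    backendRefs:
    - name: productpage      # service in the local (management) cluster
      port: 9080
  - matches:
    - path: { type: PathPrefix, value: /details }
    backendRefs:
    - name: details          # placeholder: the remote-cluster reference
      port: 9080             # is product-specific in Gloo Edge 2.0
```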
On the right I'm going to do the same thing, but port-forward to 8081 so we can test both of them in parallel. Now let's try to hit that details route that we just showed you configured in the YAML: you can see we get the request back, and you can see it hit our port-forward here. Similarly, if we change the port to 8081, we're hitting the ingress in cluster two; you can see it hit here, and we get the response back just as expected.
That's a great question. You might notice that we've got our external auth service running in these clusters, so let's take a quick look at some YAML we created earlier that does have auth configured. The syntax will be familiar to anyone who knows our external auth config for the existing Gloo Edge API gateway; we've taken the exact same syntax and brought it to this new service.
And I just want to call out here that this is the ext-auth service that ships with Gloo Edge today, so it includes all the other auth mechanisms that are supported and running in production today in other environments: things like Open Policy Agent, LDAP, custom authentication, and pass-through plugins. All of that is supported as well.
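For anyone who hasn't seen that syntax, a Gloo Edge AuthConfig permitting either an API key or basic auth credentials looks roughly like the sketch below. The names, the boolean expression, and the user entry are placeholder assumptions, not the demo's actual file.

```yaml
# Sketch: a Gloo Edge AuthConfig that accepts either an API key or basic
# auth credentials; placeholder values throughout.
apiVersion: enterprise.gloo.solo.io/v1
kind: AuthConfig
metadata:
  name: apikey-or-basic      # hypothetical name
  namespace: gloo-system
spec:
  booleanExpr: "apiKey || basic"   # assumed: pass if either config passes
  configs:
  - name: apiKey
    apiKeyAuth:
      headerName: api-key          # header carrying the key
      labelSelector:
        team: demo                 # match API-key secrets with this label
  - name: basic
    basicAuth:
      realm: gloo
      apr:
        users:
          user:                    # placeholder username
            salt: "TYiryv0/"       # placeholder apr1 salt
            hashedPassword: "8BvzLUO9IfGPGGsPnAgSu1"   # placeholder hash
```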
If we supply an API key which we configured to work, you can see we're getting a 200 again and we're getting the response as expected. If we supply a bad API key, we're back to 403 Forbidden. And you'll remember we also said basic auth was allowed, so here are some basic auth credentials that we configured earlier, just user/password, and you can see we're getting the 200 response we expect, with the expected response JSON.
It's really cool to see from the demo that Shane was able to show that Gloo Edge 2.0 also adopts the Kubernetes Gateway API, the same API that Istio adopts. Not only that: users can configure Gloo Edge to route requests to the details service running in the remote cluster, and also add authentication to the product page service.
Thank you, Lin. My name is Chris Gohan, and I'm the director of product management here at Solo.io. You can find me on the Solo.io Slack, and I'd love to hear from you; you can also find me on Twitter. I'm joined today by the Gloo Portal lead, Marco Schmidt, whom you can also find on Slack. I'm going to talk a little bit today about Gloo Portal, why you need a developer portal for Istio, and how Gloo Portal is the best fit on the market.
It is, yes, a UI that you can use to expose those APIs, those services running on Kubernetes with Istio, to developers. It is a UI, but it is also more than a UI: it is written to be Kubernetes-native. It uses CRDs, so anything from your GitOps workflow, and all your knowledge of Kubernetes and of wrangling YAML and CRDs, works with the portal.
Okay, what are the portal enhancements, and what are we trying to deliver for Istio? First, you'll see that we've added gRPC API types into the portal, and they have a similar look and feel to the REST API types that were there previously. This is extremely exciting: it was a lot of work to make this look and feel the same, the team worked very hard, and I'm excited to show it to you today.
The third thing we'll talk about, which you won't see today, is that you're also able to make APIs public to developers, to partners, and to your community, so that they can read the documentation for the APIs you're running with Istio before they actually try them out in the portal and before they authenticate. This is also an extremely popular use case that we've seen, and it is actually required by some European regulations.
Well, REST is about 61 percent of the market right now, but that market is expanding quickly, and the reason is a story similar to ones we've seen in many tech industries in the past. A technology comes out and becomes very popular, in this case REST, which came out in the dot-com era around 2000: quick adoption, and it's not going away. All AWS services are RESTful services, or have a documented RESTful API. But you start to see some larger companies, some leading-edge companies, start to think differently.
We saw that with machine learning and TensorFlow, and what happens when those technologies drop is that you see huge, exponential adoption within the community. So let's look at these new technologies: GraphQL, which came out of Facebook, and gRPC, which originally came out of Google but was then donated to the CNCF, the Cloud Native Computing Foundation. When compared to REST, REST is still obviously doing very well and is continually growing, but you see these new technologies really breaking out, and you see adoption accelerating within the Cambrian explosion in cloud native development already under way in the market today.
Having them all in one place is really going to accelerate cloud native adoption within your organization. And it's important to remember that Gloo Portal is not just a developer portal for Istio. Yes, it is the only developer portal that I know of for Istio on the market, but it is also the only enterprise gRPC portal.
B
There
are
some
open
source
grpc
portals
out
there,
but
you
don't
get
the
enterprise
support.
Nor
do
you
get
grpc
next
to
restful
apis
as
well.
This
is
the
only
option
in
town,
if
you're
interested
in
empowering
your
developers
to
have
all
their
api
types
in
one
place,
catalog
that
they
can
manage,
and
so
with
that
I'm
going
to
hand
it
over
to
marco
so
that
we
could
see
grpc
and
authentication,
and
some
of
these
exciting
features
that
we're
working
on
marco.
Hello everyone. In this demo I'm going to show you how Gloo Portal allows you to catalog and publish the REST and gRPC APIs that you have deployed in your Istio service mesh. We will see how we can use Gloo Portal to quickly iterate on the design of the APIs together with the developers that consume them, and how you can use it to automatically configure the Istio ingress gateway to route to your APIs and to apply rate limit and auth policies.
You can see that the ingress gateway is deployed, and I have also deployed Gloo Portal. Now let's say that I have developed two simple applications, a REST application and a gRPC application, and I've used one of the many available API design tools to define specifications that describe these services.
The question is: how do I share these specifications with users so that they can give me feedback about my APIs? And ideally, how do I enable users to test these APIs while they are under development? Gloo Portal can help with both of those problems, so I'm going to start by opening the admin dashboard and importing the API definitions into the system.
I have configured my REST service to serve, on an endpoint, the OpenAPI specification that describes it, and Gloo Portal can consume that endpoint and parse the specification: you can see that we have detected all the operations defined in that OpenAPI spec. I can do a similar thing for gRPC, except that I will use the reflection endpoint I've configured my server with, so that Gloo Portal can introspect the server and pull the methods it contains. You can see that we have all the operations here, complete with the remote procedure call type.
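In CRD form, importing those two definitions looks roughly like the following sketch: one APIDoc fetched from the REST service's OpenAPI endpoint and one populated by gRPC server reflection. The apiVersion and field names here are assumptions from the Gloo Portal CRD family and vary across releases, so treat this as illustrative only.

```yaml
# Sketch: an APIDoc fetched from the REST service's own OpenAPI endpoint.
apiVersion: portal.gloo.solo.io/v1beta1   # assumed apiVersion
kind: APIDoc
metadata:
  name: rest-api-schema      # hypothetical name
  namespace: default
spec:
  openApi:
    content:
      fetchUrl: http://rest-app.default:8080/swagger.json   # placeholder URL
---
# Sketch: an APIDoc populated by introspecting a gRPC reflection endpoint;
# the "grpc" field shape here is an assumed illustration.
apiVersion: portal.gloo.solo.io/v1beta1   # assumed apiVersion
kind: APIDoc
metadata:
  name: grpc-api-schema      # hypothetical name
  namespace: default
spec:
  grpc:
    fetchEndpoint:
      address: grpc-app.default:9090      # placeholder reflection address
```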
Now, once I have my API docs in the system, I can create what we call API products. API products are a way of bundling different APIs together, so that you expose only the operations that you want your users to be able to query. They also allow you to version an API product, so you can have different versions of an API living in the system at the same time. An API product must be published to an environment.
An environment is basically a domain on the gateway that your APIs will be associated with, and you can define different policies for the same API in different environments. In my case, I have an integration environment, which will not require any authentication, and a production environment, which will require authentication and also enforce rate limit policies. The environment is what instructs Gloo Portal to generate the configuration for your Istio gateway so that it can implement the policies you have defined and route to your services.
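Sketched as custom resources, again with assumed apiVersion and field names, an API product bundling the two imported APIDocs and an environment publishing it on a domain might look like this:

```yaml
# Sketch: bundle the imported APIDocs into a versioned API product.
apiVersion: portal.gloo.solo.io/v1beta1   # assumed apiVersion
kind: APIProduct
metadata:
  name: demo-product         # hypothetical name
  namespace: default
spec:
  versions:
  - name: v1
    apis:
    - apiDoc:
        name: rest-api-schema
        namespace: default
    - apiDoc:
        name: grpc-api-schema
        namespace: default
---
# Sketch: publish the product's v1 on a production domain.
apiVersion: portal.gloo.solo.io/v1beta1   # assumed apiVersion
kind: Environment
metadata:
  name: production           # hypothetical name
  namespace: default
spec:
  domains:
  - api.example.com          # placeholder domain
  apiProducts:
  - name: demo-product
    namespace: default
    publishedVersions:
    - name: v1
```

Policy details such as the auth and rate limit requirements described above would then attach per environment, which is what lets integration and production expose the same product differently.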
For example, I can query the integration environment for my REST API, test an endpoint, and get the live response back from the backend, and I can do the same for my gRPC service. You can see that it looks the same as my REST API documentation: I can see the service and all its methods, I can see the input and output messages of my service methods, and I can query the backend and get data back.
Let's not filter anything here, and there you go. Now let's say that I have decided to promote my application to the production environment. Once I configure Gloo Portal to do that, I can navigate to the documentation for the production version of the API, and if I try out this version, you'll see that I get a 403 error because I didn't provide any credentials.
This concludes the demo. One last thing I want to mention is that, although I have used the admin UI to configure the system, everything you see here is driven by declarative configuration in the form of Kubernetes custom resources, so Gloo Portal, like all the other products in the Gloo API infrastructure platform, can be very easily integrated into any continuous delivery pipeline.
Certainly, as Christian mentioned, you can also run your own dedicated control plane for the edge, but you're going to have a consistent control plane experience: whether it's the dedicated control plane for the edge or the dedicated control plane for your sidecars, it's the same experience.
I believe this architecture change will also simplify the transition for Istio users who adopted Istio for its edge capabilities and want the additional capabilities of Gloo Edge, such as web application firewall, authentication, and rate limiting: they can just apply the extension and move to Gloo Edge.