From YouTube: 019 Ingress and Service Mesh One Step at a Time
Description
How Pegasystems and Solo partnered to accelerate Pega's journey to Kubernetes and the service mesh. The talk covers how Pega started with Gloo and then moved to Gloo + Istio, gradually introducing new technologies as business requirements and knowledge of k8s/Envoy grew.
We'll talk a little bit more about the services that we had to support, we'll dive into integrating with Istio, and finally we'll talk about results and future plans.
What we do is deliver software that crushes business complexity. Business today is complicated, and it is not getting any simpler. We know that no matter which challenges businesses are facing, we don't give up until they're more streamlined, more agile, and more flexible. We're ranked a leader based on the strength of our offerings and our ability to deliver on our promises.
As we embarked on adopting Kubernetes, we set two main goals: deploy a single internal service running on a cluster, and support external ingress, a north-south architecture. That is, we only focused on external ingress, with no support for in-cluster service-to-service traffic, since we were only deploying one service.
So we started looking at some of the technologies in the Kubernetes ecosystem, and we decided that we wanted to start with an API gateway. All traffic would flow through this ingress in order to keep traffic patterns consistent, and it would allow us to use a single load balancer instead of deploying a load balancer per service in the future.
It had all of the features that we needed, and as a bonus, Gloo Edge uses Envoy, the same underlying proxy technology as the Istio service mesh. Since we knew we wanted to adopt Istio in the future, being able to start deploying Envoy, learning about it, and getting it into production felt like a very good step towards our ultimate goal. So with Gloo in the picture, our architecture ends up looking like this. It's still pretty simple: our end users send traffic to a load balancer we deploy on AWS, an Application Load Balancer, and the load balancer forwards all the traffic to Gloo. Gloo, within our Kubernetes cluster, performs all of the smart actions, all of the routing, in order to send traffic to our service.
A
This
is
one
of
our
security
requirements
within
our
company,
so
we
needed
to
make
sure
that
all
the
traffic,
all
the
way
from
the
client
to
our
application
to
our
internal
applications
was
secure.
A
However,
when
we
were
thinking
about
this,
we
didn't
want
to
start
having
our
teams
deploying
their
own
tls
certificates,
worrying
about
things
like
certificate
rotation,
which
cipher
versions
to
use
things
like
that.
We
wanted
our
infrastructure
to
be
able
to
handle
that
for
them.
This
is
a
service
mesh
use
case
and
seo
handles
protecting
services
with
tls.
However,
we
still
thought
it
was
too
early
to
go
with
istio
we're
still
learning
about
kubernetes.
So
we
looked
for
alternate
approaches.
A
The
thing
that
we
did
is
we
ended
up
deciding
that
it
was
pretty
easy
for
us
to
simply
take
the
envoy
container
same
thing
used
by
glue
and
istio
and
simply
embed
it
into
our
applications
as
a
sidecar.
This
is
similar
to
what
istio
does,
but
our
use
case
for
envoy
was
much
simpler.
It
only
had
to
take
traffic
that
was
coming
from
the
from
our
api
gateway
and
terminate
tls
and
then
forward
the
traffic
to
our
container.
A
We
didn't
need
to
do
anything
like
egress
or
any
kind
of
dynamic
round
dynamic
route
management.
Everything
was
extremely
static,
so
this
proved
to
be
a
pretty
good
learning
opportunity
for
us.
We
had
an
opportunity
to
configure
envoy
ourselves
and
deploy
it
with
in
some
of
our
applications.
A
A
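To make the shape of that sidecar concrete, here is a minimal sketch of a static Envoy bootstrap that terminates TLS and forwards to an app on localhost. All ports, file paths, and names here are hypothetical, and a real deployment would also pin TLS versions and ciphers per the security requirements above.

```yaml
# Static Envoy sidecar: terminate TLS on 8443, forward to the app on 127.0.0.1:8080
static_resources:
  listeners:
  - name: tls_ingress
    address:
      socket_address: { address: 0.0.0.0, port_value: 8443 }
    filter_chains:
    - transport_socket:
        name: envoy.transport_sockets.tls
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext
          common_tls_context:
            tls_certificates:
            - certificate_chain: { filename: /etc/envoy/certs/tls.crt }
              private_key: { filename: /etc/envoy/certs/tls.key }
      filters:
      - name: envoy.filters.network.tcp_proxy
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
          stat_prefix: ingress_tls
          cluster: local_app
  clusters:
  - name: local_app
    type: STATIC
    connect_timeout: 1s
    load_assignment:
      cluster_name: local_app
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: 127.0.0.1, port_value: 8080 }
```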
Another challenge we hit is that we needed to create a bunch of Kubernetes resources in order to configure Gloo. For those that haven't worked with Gloo, there are a few different resources you need to configure in order to set up routing. The first is a VirtualService. A VirtualService contains the routing information for a domain or a set of domains, and a VirtualService is often associated with one or more RouteTables. RouteTables contain some sort of matcher that can match the traffic being sent to a domain, plus a destination.
In our case, these destinations in our route tables were Upstreams, which is another Gloo resource that references Kubernetes services. The matchers that we were configuring were path-based routes, so each of the services we were deploying would need to configure a path prefix, something like its service name, that would be unique to that service, and that prefix would then need to be added to the route table so that the virtual service could correctly forward the right traffic to the service.
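As a sketch of those two resources under the Gloo Edge v1 APIs (the names, namespace, port, and the delegation label are hypothetical; the VirtualService that selects this label is shown a little further on):

```yaml
# RouteTable: path-prefix matcher for one service, created in that service's namespace
apiVersion: gateway.solo.io/v1
kind: RouteTable
metadata:
  name: my-service-routes
  namespace: my-service
  labels:
    routing.example.com/delegated: "true"   # hypothetical label used for delegation
spec:
  routes:
  - matchers:
    - prefix: /my-service          # unique path prefix for this service
    routeAction:
      single:
        upstream:
          name: my-service-upstream
          namespace: my-service
---
# Upstream: Gloo resource referencing the Kubernetes Service behind the route
apiVersion: gloo.solo.io/v1
kind: Upstream
metadata:
  name: my-service-upstream
  namespace: my-service
spec:
  kube:
    serviceName: my-service
    serviceNamespace: my-service
    servicePort: 8080
```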
One of the goals of our architecture that we decided on early on was to make sure that all of the resources a particular service needed to create would be isolated within a specific namespace for that service, so that if you wanted to deploy your service, you didn't have to add things to multiple namespaces or to a centralized namespace such as the gloo-system namespace. Gloo helped us in this regard.
Gloo supports what's called route table delegation. With that, in your virtual service you configure a label selector, and when you create the route tables you make sure they have the labels that were specified in the virtual service.
So the virtual service is created ahead of time, and then when one of our applications is deployed, the application creates the route table and the upstream. Gloo automatically realizes that the route table and the virtual service need to be associated, and it stitches together all the things that are needed in order to route traffic from Gloo to our service.
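A sketch of that pre-created VirtualService, using Gloo Edge's delegateAction with the hypothetical label from the RouteTable sketch above; domain and names are placeholders:

```yaml
# VirtualService created ahead of time; delegates routing to any RouteTable
# carrying the matching label, in any namespace
apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
  name: public-ingress
  namespace: gloo-system
spec:
  virtualHost:
    domains:
    - 'api.example.com'
    routes:
    - matchers:
      - prefix: /
      delegateAction:
        selector:
          labels:
            routing.example.com/delegated: "true"
          namespaces:
          - '*'
```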
Using this delegation worked out well for us; it allowed us to isolate our route configurations per service. But now we had not just the Gloo resources but also the Envoy configuration that we would need to create as part of our application deployments, and again, these are details that we really didn't want to have to expose to our teams.
Teams should only need to supply things like what container images they want to use, how many replicas the container should have, what ports the containers are listening on, and what paths they want Gloo to listen on for traffic in order to forward to the application. At deployment time, when the application chart is rendered, it uses the templates from the base chart in order to create all of the Kubernetes resources that are required to run the application, including the Envoy configuration and deployments, as well as the Gloo resources.
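In other words, a service team's values file might boil down to something like the following. All of these keys are hypothetical; the talk doesn't show the base chart's actual schema:

```yaml
# values.yaml supplied by a service team; the base chart's templates expand
# this into the Deployment, Service, Envoy sidecar config, RouteTable, and
# Upstream shown earlier (hypothetical schema)
image:
  repository: registry.example.com/my-service
  tag: "1.4.2"
replicaCount: 3
container:
  port: 8080                # port the application listens on
routing:
  pathPrefix: /my-service   # unique prefix Gloo matches on
```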
We think this approach worked out well for us. It gave us a centralized chart structure, so that as applications were deployed on our infrastructure, the infrastructure team always had a pretty good idea of what the chart was going to look like, what resources it would contain, and what the naming conventions would look like. It abstracted away some of the complicated portions of the Envoy and Gloo configuration from the teams; they just needed to provide the information that was specific to their service. However, there were definitely a few issues with this approach.
For one thing, our service teams were no longer fully in control of their chart. They were using our base chart, which contained a lot of our templates; they weren't creating it from start to finish. So this required our team to support the service teams that were coming online, in order to help them use the base chart, and often we had to provide them with enhancements as they discovered new feature requirements.
This also posed an upgrade risk for us. If we needed to roll out a change to the Envoy configuration or to our Gloo configurations, we had to roll out a new version of the base chart. We would then have to get our service teams to upgrade their chart to use the new base chart and roll those changes out to our environments.
We are continuing to use the base chart, but we are trying to make things simpler on the upgrade side, notably by providing better abstractions for how our resources get created. We can provide custom resource definitions that exist within the Helm charts, and then create custom controllers that will process those definitions and automatically create things like Gloo resources or other configuration objects. Then, if we need to change how those objects are created, we simply update our controller, and the changes get rolled out automatically.
Similarly, we can also have our controllers process annotations on resources like Deployments and Services in order to auto-generate configuration in the same way. That is making our upgrade processes much simpler for us.
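As a sketch of the annotation-driven variant (the annotation keys and the controller's behavior are hypothetical, since the talk doesn't show them): a Deployment might carry annotations like these, and a custom controller watching Deployments would generate the RouteTable and Upstream from them:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
  namespace: my-service
  annotations:
    routing.example.com/path-prefix: "/my-service"  # controller generates the RouteTable from this
    routing.example.com/service-port: "8080"        # ...and the Upstream from this
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
      - name: my-service
        image: registry.example.com/my-service:1.4.2
        ports:
        - containerPort: 8080
```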
So there were some challenges with the base chart. However, everything ended up working: we put everything together, and our first service deployment was successful. We were able to get the deployment into an internal production environment, internal clients started using the software, and everything was great. We could celebrate, except we now had a line of services queuing up and waiting to be deployed, and, of course, with more services comes more complexity.
We knew that this additional complexity would need some additional technologies. We kept talking to Solo.io, worked out a plan, and decided that now it was time for Istio.
With Istio in the picture, we could keep using Gloo Edge and forward traffic to our pods, which would now live in the Istio service mesh. The way Istio works is that it automatically injects an Envoy sidecar container into any pods that you create, so that would replace the embedded Envoy sidecar container we had added to our base chart. Istio would help us control our service-to-service traffic, encrypting it with TLS, and it would help us protect our storage services by giving us granular access control, so that one service could not inadvertently access a different service's storage service. We knew Istio was going to be a little complicated.
There were going to be a bunch of new objects that we'd have to configure in order to get Istio running, and I'll run through a few of those now. First is the PeerAuthentication resource.
This allowed us to enforce mTLS mesh-wide. The way Istio works by default is that when one pod within the service mesh talks to another pod, they will exchange certificates and validate those certificates as part of an mTLS request, so that the traffic is encrypted. However, if you have a pod outside of the service mesh and you try to send traffic to a pod within the service mesh, then by default Istio will allow that plain-text communication, and this would not pass our security standards. So we enabled mTLS mesh-wide.
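The mesh-wide setting is a single resource in Istio's root namespace (shown here for istio-system, the usual default):

```yaml
# Enforce STRICT mTLS for the whole mesh: plain-text traffic from pods
# outside the mesh is rejected instead of being allowed by default
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # the mesh root namespace
spec:
  mtls:
    mode: STRICT
```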
We also made use of AuthorizationPolicies. Authorization policies allowed us to provide pretty granular security policies to control which services can talk to which other services. They also allowed us to put in a default policy, so that if we didn't explicitly allow one application to talk to another application, the communication would not be allowed. Again, this helped us fulfill our security requirements.
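A minimal sketch of that pattern, using the usual Istio deny-by-default idiom (the service names, namespaces, and service account are hypothetical):

```yaml
# Empty policy in the root namespace: deny all mesh traffic by default
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: deny-all
  namespace: istio-system
spec: {}
---
# Explicitly allow one service's identity to reach another
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: backend
spec:
  selector:
    matchLabels:
      app: backend
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/frontend/sa/frontend-sa"]
```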
We also provided custom Sidecar configurations in Istio. A Sidecar configuration is used to configure the proxy container, and we used our custom Sidecar configuration to tie service entries to certain sets of pods. By doing that, we could make sure that certain pods could access certain storage services, but only the ones they were allowed to access.
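Roughly, a Sidecar resource scoping a workload's egress to its own namespace plus one external storage host might look like this (the hostnames and labels are hypothetical):

```yaml
# Restrict which hosts this workload's proxy knows about: its own
# namespace, the Istio control plane, and one external storage service
apiVersion: networking.istio.io/v1beta1
kind: Sidecar
metadata:
  name: my-service
  namespace: my-service
spec:
  workloadSelector:
    labels:
      app: my-service
  egress:
  - hosts:
    - "./*"                          # services in this namespace
    - "istio-system/*"               # control plane
    - "my-service/db.example.com"    # ServiceEntry host this pod may reach
```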
So let's go into a little more depth about service entries, since this is one of the areas we spent a lot of time on, and we encountered a few problems we weren't expecting to hit. Service entries, as I said, allow a service outside the cluster to be entered into Istio's service registry.
Service entries can be used in conjunction with the default deny policy: if Istio finds traffic that you're trying to send through the sidecar to a resource or endpoint outside of the service mesh that it doesn't know about through a service entry, it will deny that traffic. One key thing is that the proxy needs a way of matching traffic to a service entry. When you route traffic from your container through the sidecar proxy, the sidecar proxy needs to be able to figure out:
does this traffic apply to a specific service entry or not, and where is this traffic supposed to be sent? There are a few different ways of doing this. If you're using an HTTP service, for example, you could have Istio look for a Host header on the request and then use that to match to a service entry. But that didn't work for some of the storage services that we were using.
These are things like relational databases, where the protocol is TCP, so Istio has fewer options: really it can only use a port or an IP address in order to match traffic. Unfortunately for us, IP addresses really wouldn't work, because we were using a lot of cloud-based storage services, and cloud-based storage services can often change IP addresses, sometimes without warning, in the case of a scale-up event or a failover event.
So that really only left us with ports, and unfortunately, if you're just using a port in order to match traffic to a service entry, the sidecar doesn't have a way of differentiating between two service entries that both use the same port. In the example below I have two different service entries, both using the same port, but I'm trying to send traffic to two different storage services outside of my service mesh.
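A reconstruction of that situation (the hosts and port are hypothetical): both entries are raw TCP on 5432, so the sidecar cannot tell them apart by port alone.

```yaml
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: orders-db
  namespace: my-service
spec:
  hosts:
  - orders-db.example.com
  location: MESH_EXTERNAL
  resolution: DNS
  ports:
  - number: 5432
    name: tcp-orders
    protocol: TCP
---
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: billing-db
  namespace: my-service
spec:
  hosts:
  - billing-db.example.com
  location: MESH_EXTERNAL
  resolution: DNS
  ports:
  - number: 5432        # same port as orders-db: for plain TCP the proxy
    name: tcp-billing   # has no way to distinguish the two entries
    protocol: TCP
```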
So my container sends traffic through the sidecar to two different IP addresses, but with the same port. The sidecar will match the traffic to one of the service entries, usually whichever one was created first, but it's undefined, and then it will try to forward that traffic to whatever destination is configured by that service entry. This means that traffic can unexpectedly get rerouted.
This was a surprise to us. It made sense once we understood what the sidecar was trying to do, but it did present a problem, since we were hoping to have multiple storage services using the same port.
However, we ended up talking with our teams, and we decided that for a specific pod, we only needed to support a single port per storage service; if we wanted to use multiple storage services, each of them would end up needing a unique port. That said, in recent releases of Istio, they've actually given us a way of preventing this problem.
Istio now supports DNS proxying for service entries and internal IP address allocation. When you create a service entry, it can get allocated an internal virtual IP address, and you can also configure a hostname for that service entry. Then, when the container tries to resolve the IP address of the hostname given in the service entry, Istio can intercept the DNS request and return the virtual IP address.
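These are the proxy settings that turn that feature on; in recent Istio releases they are enabled mesh-wide via proxyMetadata, as in the Istio documentation:

```yaml
# Enable the sidecar DNS proxy and automatic VIP allocation for
# ServiceEntries that don't specify their own addresses
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    defaultConfig:
      proxyMetadata:
        ISTIO_META_DNS_CAPTURE: "true"        # intercept DNS from the app container
        ISTIO_META_DNS_AUTO_ALLOCATE: "true"  # assign virtual IPs to ServiceEntries
```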
Your container will then send traffic through the sidecar to that virtual IP address, which Istio will easily be able to recognize, because your service entry is tied to a specific virtual IP address. It will know exactly where you're trying to send the traffic, and it can forward it appropriately.
Another thing that we had to figure out when we were implementing Istio was how we would integrate it with Gloo.
Fortunately for us, Gloo provides an out-of-the-box way of integrating with Istio. When you enable this integration, the pods for Gloo Edge will get two new containers. One is simply an Istio sidecar container that communicates with the Istio control plane and fetches its own certificates.
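In Gloo Edge this is a Helm value; to the best of our knowledge the flag looks like the following, though you should check the Gloo Edge docs for your version:

```yaml
# Gloo Edge Helm values: enable the Istio integration so the
# gateway-proxy pod can obtain Istio mTLS certificates via SDS
global:
  istioSDS:
    enabled: true
```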
This was great for us, and the way we actually rolled it out is that we enabled the Istio integration on Gloo Edge before we started migrating our services to use Istio. So we started using the Istio-issued certificates just as client certificates, to encrypt traffic to our non-service-mesh-based services. This worked because Gloo Edge was not validating the service certificate, and the service was not validating the Gloo certificate.
Since we were already using those certificates, when we started migrating our pods to the Istio service mesh and they came online, Gloo tried sending traffic to them, and since it was already using Istio-based certificates, the traffic continued flowing correctly.
This was great, because we could keep using Gloo Edge; we didn't have to migrate to the Istio ingress gateway. At this point we were quite familiar with Gloo Edge, it had a lot of features that we were using, and it was just working well for us, so we were happy to be able to keep it.
So yet again we were able to get our services successfully deployed, but this time with Istio enabled. It's been running well for us, and I think Istio has been adding a lot of value from both an operations and a security perspective. And now Fisher will talk more about what we're doing and where we'll go next.