Description
For more great content, visit https://solocon.io
SoloCon 2022:
How Constant Contact is Leveraging Gloo and Istio to Transform our Platform
Speakers:
Dave Ortiz
Senior Principal Software Engineer, Constant Contact
Session Abstract:
Interested in modernizing your microservice architecture? Join senior principal software engineer Dave Ortiz as he walks you through Constant Contact's journey as they designed their microservices, exposed APIs, and integrated new and old services together.
Track:
Service Mesh and Application Networking
These days I'm spending more of my time with a fantastic group of engineers on our platform team, and we have a very modest goal: we want to transform the way developers deploy, run, and operate their software at Constant Contact. We want to adopt a cloud-native model, and as part of this transformation there's obviously more going on than how we bring traffic into our clusters, our gateways, and our service mesh.
It encompasses everything from our adoption of Kubernetes to the way we've adopted GitOps, but for the most part this presentation is going to be about how we're using technology like Gloo Edge, Gloo Mesh, and Istio as part of this wider transformation.
So, as a high-level overview of what I want to do: I want to talk a little bit about Constant Contact's journey with microservices and how we got to where we are today. I want to talk about some of the limitations we've run into, how they're keeping us from doing what we want to do today, and how we're trying to fix them.
I want to talk specifically about how we're using Gloo Mesh, Gloo Edge, and Istio and how we've solved some problems, and then finally, at the end, I want to let you know where we think we're going in the future. So with that, let's take a quick step back to 2012, before I was even at the company. At the time, Constant Contact was largely implemented as a monolithic application written in Java.
We had a monolithic database, we had a bunch of components, and we were running into the issues that a lot of folks run into when they're running these big monoliths: operational problems, like a big giant database that's a single point of failure, and performance issues.
It was difficult to scale different parts of the product, and on the engineering side we were running into issues because we weren't deploying the thing often enough. That was because the deployments were big and complex, and we needed a bunch of really smart people to help us deploy it. When you have big, scary deployments, you don't deploy as frequently, and the monolith was limiting the ability of our teams to choose different kinds of technologies.
When you have a Java monolith and the interface between the different components is a Java API, it's really hard to adopt a different language. So we did what a lot of companies were doing at the time and started looking at a microservice architecture as a way to solve a lot of the problems we were having. And we did what most folks do: we took all of our components, built REST APIs for all of them, pulled them apart, and gave them separate deployment pipelines. There was a lot of work done, and the idea was that if we had these well-defined REST interfaces between components, we could deploy them independently and things would move faster. Ultimately that was true. We were able to move faster, and by 2015 we had migrated the vast majority of our system into this microservice architecture.
Our major platforms, like our clickstream ingestion, our email platform, our contact management platform, and our editors, were all rewritten in some cases or refactored and moved into this architecture, and for the most part things were really good. We had significantly matured our deployment pipeline. We initially ran into issues, but we ended up with some relatively sophisticated CD pipelines with canaries and rollbacks and all sorts of cool things that we had built on our own. The reason is that we were relatively early to the microservice architecture, and at the time we started there wasn't the wealth of tools, libraries, and open-source products to help you solve some of the concerns that we ended up having to solve.
Being an engineering company, we came up with our own solutions, and we implemented a lot of those solutions as libraries that developers could pull into their microservices: secret management, authentication between microservices. These are things we had to build, and we did build them, and things were going really well. In 2016 the company was acquired by a larger group, Endurance International Group, and at the time we were trying to figure out how we could take products and other things built under Endurance and integrate them into our product. We figured out that it wasn't easy, and we learned a lot. By 2017 things started to change even more. We had mostly been focused on deploying our microservices in our own private data centers; that was working for us, and we felt it was cost effective. But by 2017 the world shifted a bit.
We wanted to run workloads that really didn't make a lot of sense to run in a private data center. Specifically, we were looking at big data, machine learning, and analytics workloads that may require a lot of capacity for a short amount of time, and we really started to adopt AWS.
The issue we ran into is, one, the technologies in AWS were different and our pipeline didn't support them. For example, developers were deploying on-prem with our own automation that would spin up VMs, stick them in a load balancer, and do all those things, and in AWS we were using a different set of technologies. Instead of Jenkins we were using CodeBuild and CodePipeline, and things were different. We started to notice that there was a bit of fragmentation.
Teams were adopting different languages. At the time, Java was it at Constant Contact, with some of our applications written in Ruby, but when we started moving things into the cloud we started seeing the adoption of things like Python and Go, so things were starting to change. Our existing tooling didn't work well there, and our existing libraries for things like authentication weren't working as well as we wanted them to in the cloud.
We started to see some adoption of Kubernetes, some kind of organic growth, but it really hadn't gained wide traction in the company. We had some small products that were deployed on Kubernetes, but by 2020 things were really beginning to change. We had started to look at transformation, and while we were looking at transformation, Constant Contact went private and we also acquired another company called Retention Science, and they deploy everything in the cloud.
It was really toward the back half of 2020 that we started deciding we needed to transform, and we needed to figure out what we needed to transform into. We want to make it so that developers don't really need to worry about where they're deploying: it should be the same in the cloud, it should be the same on-prem, and we'll pick where it makes sense.
We wanted to find innovative ways to help our developers with the complexities of operating at scale, and because the technologies were changing and we were acquiring companies that did things differently, we wanted to be able to apply microservice best practices at the network layer in a scalable and sustainable way, not tied to a particular language or stack.
We also wanted to facilitate tools so developers would be able to experiment in production in ways that were low risk: deliver functionality to production quickly and be able to experiment with our customers. And what was really important, the last of our principles, was that we wanted to build something that was better than what we had today. Developers actually really liked what we had, and if we didn't build something better, they weren't going to use it.
Clients want to make requests to our APIs, and those APIs are fronted by gateways; then we have a bunch of internal microservices that are not exposed directly to the internet, but are exposed to the gateways and other services. The thing that ties everything together here is all of these internal microservices. We have firewalls between our clients out on the internet and everything inside.
The requests generally come in through a load balancer and they hit a gateway, and when I say gateway, I mean an application, generally on the on-prem side; they're homegrown applications that basically authenticate and authorize requests.
That means all of our services are running on one single network. When we want to integrate another system, for example one that's in a different cloud or in its own VPCs, either we have to stand up a new gateway or we need to figure out how to get everyone networked in a way that makes sense, which is complex. On the cloud side we're doing things with Amazon's API Gateway, but a lot of the same work is still going on.
We're using a little more of Amazon's API Gateway rather than rolling the whole thing from scratch, so we get rate limiting and things like that, but we're also having to add functionality there that's very similar to what we're doing in the on-prem gateways. We're using things like Lambdas to do authentication and authorization and to do dispatch to our internal microservices. So what we're doing is a lot of duplicated effort.
It's a little bit different depending on whether you're going into AWS or the cloud, and we have our application gateway, which is used by our browsers and our mobile clients, and then we have other API gateways that we're using for third-party developers and our clickstream data. Things are getting a little bit fragmented, like I explained.
If we jump in just a little bit deeper, this gives us a good idea of what the actual traffic flow looks like and how we're doing things behind our gateway. The client, which wasn't on the last diagram, typically comes in and hits a Cloudflare PoP first. The request gets sent to an external load balancer that's running outside of our firewall, and we effectively firewall off everyone except for the Cloudflare PoP.
Once we get through there, we have some libraries that we've implemented, because we have multiple gateways and we want to share the logic for handling some of the session stuff: how our cookie gets put down on the browser, some CSRF protection, authentication and authorization. Just knowing that a user is logged in is not enough; we have different checks beyond that.
So now that we've jumped in, and we understand some of the issues we have and see how things are actually working, we have a couple of issues here that make things harder for us than we'd otherwise want them to be. Our use of DNS-based service discovery makes it harder and riskier than we'd like to swing a service to another data center, for example.
If I wanted to move a service to AWS, I'd have to re-point DNS, and that's not the best way to route traffic; we'd like to do it some other way. It's hard to swing back, and DNS is slightly unpredictable.
It's not always implemented correctly. The other thing is that a lot of these shared concerns, like rate limiting and backoff and retries, are implemented as libraries, and even the Java side looks different from what we do on the Ruby side, because in Java apps we have things like the Netflix OSS stack.
We have retries and backoffs and circuit breakers, but it's not the same as what we're doing on the Ruby side, and because we don't have centralized places to manage these policies about how we do backoff and retry, and we don't have the same set of tools, it's really difficult to reason about what's going to happen if one service goes down. We've had problems with that.
If anyone has had to deal with the fallout of Log4Shell, you know what the impacts are of a vulnerable library that you have to ship to a bunch of different clients. So we really wanted to start thinking about the way we're doing things, because right now we essentially have a high cost of operating each microservice. And the last problem we have here is that, because we have all these libraries for different languages and ecosystems, it's hard to adopt something new.
If someone came to me today and said, hey Dave, I want to write a Go app or I want to write a Node.js app, the first thing we'd have to do is re-implement a lot of these things, like our custom, homegrown authentication logic, which is not ideal. We want to be able to let teams pick the right tool for the job. So that's where we started off, and I'll hop over to the next slide.
We worked iteratively: as the application team was developing their application, we were iterating on the architecture, which was very helpful for us. We kind of started with something that looked like this, but wasn't quite here, and we just started iterating. What we ended up with, and this is kind of the idealistic state of the world, is the following. We found out pretty early, even before we had built this architecture, that we thought Istio, for a variety of reasons, would help us solve a lot of these issues.
One thing we did here that's interesting is that we still wanted to keep Cloudflare in the mix, so the browser goes directly to Cloudflare, but instead of going to a public load balancer that's firewalled, we're actually using something called cloudflared. Many of you may have heard of it.
It was previously called Argo Tunnels. This was actually something that our security team had suggested to us, and for the first time in my career the security team suggested something that was incredibly helpful, instead of just another crazy requirement that we have to implement. What it does is this: the traffic comes in directly to a Cloudflare PoP.
We have some pods running cloudflared in our Kubernetes cluster, and what they're doing is making outbound connections to a local Cloudflare PoP, so traffic flows from one Cloudflare PoP to another and into our cluster without us having to expose anything directly to the wider internet, which is awesome. It also gives us the ability to load balance across different clusters; for example, if we want to send browser traffic to a different cluster, we can do that up at the Cloudflare level.
We don't have to do things like failover DNS. Cloudflare terminates the original TLS connection, and at the end of the day, once we've come through cloudflared, we're hitting our Gloo Edge. One of the things we really liked about Gloo Edge when we were looking at different options: we knew we wanted a Kubernetes-native gateway, and we looked at a bunch of options.
One of the things that was great about Gloo Edge is that it integrates with Istio's SDS, so it becomes a first-class citizen in our mesh without me having to run multiple proxies up there on the edge. That was a huge thing for us. In terms of the way traffic flows east and west, we have our services running across multiple clusters.
When services talk to each other across clusters, they do so via a load balancer that's ultimately pointing to an Istio gateway. For us this is actually a huge thing, because as we're integrating these companies into our portfolio, having a way for each other's services to simply talk matters; product integration is incredibly complicated. The best we can do from my team's perspective is make sure that getting things to just talk is not the complicated part, so we can focus on the data integration and all the other problems we have integrating these platforms together.
All a service in one cluster needs to do to talk to a service in another cluster is be able to see its load balancer, and this is amazing, because we've really taken a lot of the complexity out. One of the things we wanted to do right from the start was to start with strict mTLS and make sure we weren't having to back into it later, and, like I said, the big thing here is that we don't require flat networking anymore.
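As a rough illustration of that starting point, here is a minimal sketch (Go, using istio.io/client-go; the kubeconfig path and root namespace are generic Istio conventions, not details from the talk) of applying a mesh-wide PeerAuthentication with STRICT mutual TLS, which is Istio's standard way to refuse plaintext traffic between sidecars:

```go
package main

import (
	"context"
	"log"

	securityapi "istio.io/api/security/v1beta1"
	securityv1beta1 "istio.io/client-go/pkg/apis/security/v1beta1"
	istioclient "istio.io/client-go/pkg/clientset/versioned"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the local kubeconfig (assumed path).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	ic, err := istioclient.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// A PeerAuthentication named "default" in the Istio root namespace
	// applies mesh-wide: every sidecar rejects non-mTLS traffic.
	pa := &securityv1beta1.PeerAuthentication{
		ObjectMeta: metav1.ObjectMeta{Name: "default", Namespace: "istio-system"},
		Spec: securityapi.PeerAuthentication{
			Mtls: &securityapi.PeerAuthentication_MutualTLS{
				Mode: securityapi.PeerAuthentication_MutualTLS_STRICT,
			},
		},
	}

	if _, err := ic.SecurityV1beta1().PeerAuthentications("istio-system").
		Create(context.Background(), pa, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}
	log.Println("mesh-wide STRICT mTLS applied")
}
```

Putting the resource named default in the Istio root namespace makes it mesh-wide; narrower per-namespace or per-workload policies can then relax it where genuinely needed.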
This is kind of where we wanted it to go, but when we started, we knew we wanted to build a new product here and the migration wasn't 100% clear to us. So we just started going. Some of the problems we ran into: not every place we wanted to run software could run a Kubernetes cluster, and some of our Kubernetes clusters might be old.
We didn't have a solution for that. And when you start off with this new architecture, you don't have a lot running there; you're going to need to call other things that haven't been moved to the mesh yet. We also didn't know how we were going to replace a lot of the functionality we had in our gateway.
When you write your own gateways, you can do anything you want, and when you move to something off the shelf, you're a little bit more constrained, so we weren't sure exactly how things were going to look. So let's start with how we implemented our gateway functionality.
One of the things we did that I thought was relatively interesting: whatever gateway we chose, in this case Gloo Edge, we wanted to use as much functionality as we possibly could at the gateway level. So authentication and authorization where possible, common logging, how we operate, how we scale, how we monitor; that was really important to us. But at the end of the day, we had parts of the gateway that we had written ourselves.
In particular, the way we're doing request dispatching and request transformation was likely to have to live in code for some time. So we wanted to make sure that we could leverage a lot from Gloo Edge, but still keep the code we had today around to do some of these more complicated things. What we came up with is that we started using Gloo's custom ExtAuth server, which actually changed the way we did things.
Just for a quick example, imagine we have a client somewhere out there. It's a browser, and it wants to create a marketing campaign. It has a cookie that was laid down by our session management system; it doesn't matter exactly how that works, or in this case let's imagine it was laid down by Gloo Edge, because we're using Gloo Edge's OIDC to authenticate the user. But just knowing who the user is in our system isn't enough.
Just because you can log in doesn't mean you can do whatever you want. When the request comes in, we want to be able to figure out what account you belong to, what your user ID is, what your permissions are, maybe whether you've paid and whether you can even use this product, and that's really hard to shove into the gateway; maybe we couldn't even do it.
So one of the first things we did is implement our own custom ExtAuth for Gloo. What that allowed us to do is have Gloo do the first level of authentication checks: is this user logged in, do they have the right token, is it valid? We put all of that at Gloo Edge. After that check passes, we call a second ExtAuth server, which can additionally veto the request at that point.
We can decide we don't want to let the request go through even though the user is logged in; maybe they're banned from our system. But what we can also do, once we say okay, this user is allowed in, is instruct Gloo Edge to add a bunch of headers about who the user is and then pass those along to our external API, where we can start to do the transformation and the invocation of downstream systems. We can keep that logic there in our code.
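The second-stage check described above is, in Envoy and Gloo terms, an external auth (ext_authz) gRPC service. The sketch below shows roughly what such a service could look like in Go; the port, header names, identity fields, and the lookupIdentity helper are all hypothetical stand-ins, not Constant Contact's actual code:

```go
package main

import (
	"context"
	"log"
	"net"

	corev3 "github.com/envoyproxy/go-control-plane/envoy/config/core/v3"
	authv3 "github.com/envoyproxy/go-control-plane/envoy/service/auth/v3"
	"google.golang.org/genproto/googleapis/rpc/status"
	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
)

type authServer struct{}

// Check runs after the gateway's built-in OIDC check has already passed.
// It can veto the request, or allow it and attach identity headers that the
// upstream "external API" will trust because only the gateway can reach it.
func (a *authServer) Check(ctx context.Context, req *authv3.CheckRequest) (*authv3.CheckResponse, error) {
	headers := req.GetAttributes().GetRequest().GetHttp().GetHeaders()

	// lookupIdentity is a hypothetical helper: resolve the session into
	// account, user ID, permissions, billing status, and so on.
	id, ok := lookupIdentity(headers["cookie"])
	if !ok || id.Banned {
		// Veto: the user may be logged in but is not allowed to proceed.
		return &authv3.CheckResponse{
			Status: &status.Status{Code: int32(codes.PermissionDenied)},
		}, nil
	}

	// Allow, and instruct the gateway to add identity headers for the upstream.
	return &authv3.CheckResponse{
		Status: &status.Status{Code: int32(codes.OK)},
		HttpResponse: &authv3.CheckResponse_OkResponse{
			OkResponse: &authv3.OkHttpResponse{
				Headers: []*corev3.HeaderValueOption{
					{Header: &corev3.HeaderValue{Key: "x-account-id", Value: id.AccountID}},
					{Header: &corev3.HeaderValue{Key: "x-user-id", Value: id.UserID}},
				},
			},
		},
	}, nil
}

type identity struct {
	AccountID, UserID string
	Banned            bool
}

func lookupIdentity(cookie string) (identity, bool) {
	// Placeholder: call the real session and account services here.
	return identity{AccountID: "acct-123", UserID: "user-456"}, cookie != ""
}

func main() {
	lis, err := net.Listen("tcp", ":9001") // port is an assumption
	if err != nil {
		log.Fatal(err)
	}
	srv := grpc.NewServer()
	authv3.RegisterAuthorizationServer(srv, &authServer{})
	log.Fatal(srv.Serve(lis))
}
```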
We offload a lot of the work of figuring out the session cookie and looking up the user, all that stuff, up at the gateway, and then the external API gets those things as HTTP headers. It can trust those headers because Gloo Edge is part of our mesh, the external API is part of our mesh, and we use Istio's policies to say the only thing that can talk to that external API is Gloo Edge. Then, ultimately, that will have to invoke some sort of internal microservice.
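That "only the gateway may call the external API" rule maps onto an Istio AuthorizationPolicy keyed on the caller's mTLS identity. Here is a minimal sketch using the same istio.io/client-go setup as the earlier example; the namespace, workload label, and the gateway's service-account principal are assumptions for illustration:

```go
package main

import (
	"context"
	"log"

	securityapi "istio.io/api/security/v1beta1"
	typeapi "istio.io/api/type/v1beta1"
	securityv1beta1 "istio.io/client-go/pkg/apis/security/v1beta1"
	istioclient "istio.io/client-go/pkg/clientset/versioned"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	ic, err := istioclient.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// ALLOW policy on the external-api workload: once it exists, only the
	// listed principal (the gateway's mTLS identity) may call it.
	policy := &securityv1beta1.AuthorizationPolicy{
		ObjectMeta: metav1.ObjectMeta{Name: "external-api-gateway-only", Namespace: "apps"},
		Spec: securityapi.AuthorizationPolicy{
			Selector: &typeapi.WorkloadSelector{
				MatchLabels: map[string]string{"app": "external-api"}, // assumed label
			},
			Rules: []*securityapi.Rule{{
				From: []*securityapi.Rule_From{{
					Source: &securityapi.Source{
						// Assumed service account for the Gloo Edge gateway proxy.
						Principals: []string{"cluster.local/ns/gloo-system/sa/gateway-proxy"},
					},
				}},
			}},
		},
	}

	if _, err := ic.SecurityV1beta1().AuthorizationPolicies("apps").
		Create(context.Background(), policy, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}
	log.Println("external-api locked down to the gateway identity")
}
```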
As we started, most of our functionality was outside the service mesh; that's just where we started, and we quickly had to figure out: if we're going to build anything in this new mesh, how are we going to call the old things we have? What we decided was to implement a sort of strangler pattern. We built a proxy that would allow us to talk back to our internal microservices, and it would abstract away how to talk to our legacy secrets system and scope what each new service in the mesh could reach.
A service might be allowed to call the account service but not anything else, or it could only call GETs on the account service. So for new services running in the mesh we got this really rich permission model that we didn't even have for our old system, and this was really something that allowed us to ultimately deliver a product on top of the service mesh. Now we have a variety of services running there, and they're able to integrate back seamlessly with our legacy system.
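As a sketch of the strangler-style proxy idea, here is a small Go reverse proxy that fronts a legacy service, injects the legacy API key, and only lets each mesh caller use the methods and path prefixes it has been granted. Every name here (the caller header, the grants, the environment variables) is invented for illustration; the real proxy is certainly richer:

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"os"
	"strings"
)

// grant describes what a mesh caller is allowed to do against the legacy API.
type grant struct {
	methods map[string]bool
	prefix  string
}

// Assumed policy: callers identify themselves with a header the mesh can
// verify (for example, derived from the Istio peer identity). Here the
// "campaign-editor" service may only issue GETs against the account service.
var grants = map[string]grant{
	"campaign-editor": {methods: map[string]bool{"GET": true}, prefix: "/v3/accounts"},
}

func main() {
	legacy, err := url.Parse(os.Getenv("LEGACY_BASE_URL")) // e.g. the legacy account service
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(legacy)

	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		caller := r.Header.Get("x-mesh-caller") // assumed caller-identity header
		g, ok := grants[caller]
		if !ok || !g.methods[r.Method] || !strings.HasPrefix(r.URL.Path, g.prefix) {
			http.Error(w, "not permitted by mesh-to-legacy policy", http.StatusForbidden)
			return
		}
		// Attach the legacy API key so mesh services never handle it themselves.
		r.Header.Set("x-api-key", os.Getenv("LEGACY_API_KEY"))
		proxy.ServeHTTP(w, r)
	})

	log.Fatal(http.ListenAndServe(":8080", handler))
}
```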
At the end of 2021, or in the back half of 2021, we did it: we shipped a product. We had it out in production, we were getting feedback from customers, and it's been mostly positive. But with a lot of the things we did, we had only solved one direction in the mesh.
Calls from inside the mesh out to things outside the mesh weren't a major issue, but the other way was a problem, and because of it we had implemented some things in sub-optimal ways. For example, if we had a reporting system running outside the mesh that was aggregating reporting stats from all these different things, it couldn't talk back into the mesh, so we had to re-implement that functionality somewhere else, and we didn't want to do that.
The other problem we had, kind of as a consequence of the way our system worked before, is that every API or internal microservice was secured by an API key. Developers had gotten really used to being able to develop a microservice on their local machine and just invoke things: I could build my microservice locally, I could play with it locally, and it could call real things in our QA environment. They had gotten used to that, but in our service mesh we had ultimately made things too secure.
It was difficult to invoke things running inside the service mesh. So we had these two problems, and we didn't really know how to solve them at the time. We looked at a couple of different approaches, and one that we thought might work was Istio's VM integration.
But for anyone who's looked at it, it ultimately couples your VMs to a particular Kubernetes cluster, which means that when you recreate a Kubernetes cluster, or you want to upgrade it and flip over to another one, you now have to move those VMs with it. We didn't really like that, we weren't sure how we were going to automate moving service account tokens, which is how Istio's VM integration works now, and we didn't think it would work for a developer laptop.
So at some point I went to Nick at Solo and said: I'd love to be able to run an Envoy proxy in a VM or on my development machine, and I think I can make it talk if I can get the certificates it needs; my hope is that it can talk to our Istio clusters via those east-west load balancers we talked about earlier. What we ultimately came up with is a way to integrate these two worlds, for the short term anyway, and maybe for the long term in other ways. We were basically able to operationalize what Nick has posted about on the Istio blog; he, or Solo, has called it VM one-way.
Instead of the two-way communication you get with Istio's VM integration, it allows something running outside of Istio to invoke services inside Istio. Basically, how we built this is we took an off-the-shelf istio-proxy, we built a very small control plane for Envoy based on go-control-plane, and we even implemented some ExtAuth. What this lets us do is effectively make service calls from outside of our Istio cluster.
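That "very small control plane for Envoy based on go-control-plane" can be pictured roughly as follows. This is a skeleton only, assuming a recent github.com/envoyproxy/go-control-plane; the node IDs, port, and the actual cluster, listener, and secret resources that would point the off-the-shelf istio-proxy at the east-west load balancers and mesh certificates are left as comments because they are environment-specific:

```go
package main

import (
	"context"
	"log"
	"net"

	discoveryv3 "github.com/envoyproxy/go-control-plane/envoy/service/discovery/v3"
	cachev3 "github.com/envoyproxy/go-control-plane/pkg/cache/v3"
	serverv3 "github.com/envoyproxy/go-control-plane/pkg/server/v3"
	"google.golang.org/grpc"
)

func main() {
	ctx := context.Background()

	// Snapshot cache: holds the xDS resources we want to hand to the
	// off-mesh Envoy. In practice this is where you would publish clusters
	// pointing at the Istio east-west load balancers, plus the certificates,
	// keyed by each proxy's node ID.
	snapshots := cachev3.NewSnapshotCache(false, cachev3.IDHash{}, nil)

	// xDS server that serves whatever is in the snapshot cache over ADS.
	xds := serverv3.NewServer(ctx, snapshots, nil)

	grpcServer := grpc.NewServer()
	discoveryv3.RegisterAggregatedDiscoveryServiceServer(grpcServer, xds)

	lis, err := net.Listen("tcp", ":18000") // port is an assumption
	if err != nil {
		log.Fatal(err)
	}
	log.Println("minimal xDS control plane listening on :18000")
	log.Fatal(grpcServer.Serve(lis))
}
```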
For example, if we have an old Kubernetes cluster that can't run the same version of Istio for a variety of reasons, and we're not going to upgrade it for six months, that's fine; we can use this. And we've implemented ExtAuth, like I said, so that if you have a service running in the mesh and you want to expose it outside the mesh, you can do it in the way that Constant Contact does its old services, the way it does API keys.
It just requires you to operationalize the delivery of certificates. So with that said, real quickly: 2022 is the year we're adopting Istio at Constant Contact, and we're going all in. It's been a resounding success and ultimately a fun journey. We still have a lot left to go, and with that I just want to say thank you. Thank you for listening to me. I'll be here to field any questions you have, and I'll just say goodbye. Thank you very much.