From YouTube: Istio Use Cases at Sagecore Technologies
Description
Istio is a key gatekeeper and routing engine for microservices running in Kubernetes. Nate Martin, CEO of Sagecore Technologies, explains their early implementation of Istio, with its successes and challenges. The talk was held in January 2020 by the New Mexico CI/CD Community Meetup, hosted by DeployHub (DeployHub.com) and BigByte (bigbyte.cc).
Future events of the New Mexico CI/CD Community Meetup can be found at https://www.meetup.com/nm-cdf-Area-Meetup/
A
And on that note, I'm going to introduce Nathan Martin. He is the CEO and co-founder of a company called Sagecore Technologies. Sagecore Technologies does custom CRM systems and inventory management systems, and I think they're working on some budgeting modules, for companies and organizations of all sizes. They originally came to DeployHub; they reached out to us because, since they're customizing this type of platform for organizations, they need to be able to define which of the microservices they're creating are created for all of their customers, and then what the differentiators are.
A
So they have a set of base microservices, and they're building custom ones that their unique end customers utilize. That was one of the first problem sets that Nate came to us about, and he has since really jumped into microservices and Kubernetes. He attended the KubeCon conference in San Diego, and we didn't see him at all. He was either hanging out with his uncle or he was at sessions; I think he was in a lot of sessions, and I think he spent a long time in the security sessions.
A
So Istio is sort of an advanced topic around containers and Kubernetes, but it's never too early to start understanding it, because it's going to be a critical piece as companies move forward, and Istio is going to be our main request-routing layer; it's going to be, as he points out there, the gatekeeper. So on that note I'm going to introduce Nate. And I didn't introduce myself: I am Tracy.
B
So again, my name is Nathan Martin, and obviously I'm the CEO of Sagecore Technologies. Amazing introduction; that is literally everything leading into what we're using Istio for, and really looking to the future and why we should use it. Really, it's about helping people to understand some of the core fundamentals of a service mesh, and some of these things that are ultimately a little bit ethereal, especially when you're coming into microservices for the first time.
B
We have some experienced people here, so I'm going to skip over some slides covering things you already know all about. But with that said, let's move on. So, Sagecore Technologies: we are in operational enterprise software. Like she said, we do a lot of ERP, CRM, sales, inventory, and mobile app work, things like that. Our product, TetraCore, allows business consultants, and people of the like in technology who don't quite understand the programming aspects, to work with data, resources, planning, microservices, all of that.
B
So what we aim to do is tie all of that together with an engine that allows people to jump into this world and utilize these things for their business, without having to go through the same steps that we had to take in our own evolution to get here. So, with that said: we use TetraCore in Kubernetes.
B
We're not quite running Istio in production yet. It adds some complexity to your clusters and your configurations. You really have to understand everything that Istio does, some of the concepts, some of the whys, the do's and don'ts, and really take the time to plan out every single microservice: how it will connect to the rest of your environment, your application, and so on down the line. And if anyone has questions about TetraCore after this, please let us know; I have some colleagues here too who can talk about that. So, Istio.
B
Obviously, you really can't talk about Istio without talking about microservices, so here is a little preface about what microservices are, for anyone that doesn't know. Back before microservices, everything would be housed in essentially a single environment, especially when you started out small. You might have a server, a VM, or a bare-metal server sitting somewhere, and all of your code and services are essentially housed there.
B
If that goes down, as I'm sure we've all experienced at some point in our careers, it sucks. You have to work on a physical server, a machine that you ultimately have to take down in order to fix. That takes time, it costs money, it costs your clients money; everyone loses out. Now, with microservices, we actually distribute our computing load across multiple machines; it's an abstract environment, but essentially multiple VMs across multiple servers. You can configure these things to go across different time zones, have services in other countries, and make sure that your services are closest to the people that are consuming them, the users. This is a very basic diagram, but I think it illustrates the point perfectly: all of your eggs are in one basket on the left, and all of your eggs are distributed on the right. Microservices are clearly the future of the whole Internet, the whole game, especially as we move into the Internet of Things; I'll throw that out there.
B
I really like this image because it kind of sums up the old thinking of monolithic architecture. Your incoming request is the ping-pong ball; that might be an API call or something like that, and it's coming through, and you're hitting it with your monolithic architecture. It's extremely overkill, and look how much room you have to hack, right? I mean, I love that it's Bill Gates, by the way.
C
B
True, true. So it's a little humorous, but it sheds some light on these things. So, why do we use microservices? Just to sum up: scalability. We can scale up the right resources at the right time and pay for only the scalability of those specific resources; that saves us a ton of money in the cloud space. Isolation of your computing: again, you want to hit it with specific, pull-queue kind of thinking.
B
This service needs to go to one specific spot and do that efficiently and securely. Easy deployment of these services, especially when one goes down: they typically spin up very quickly. Ours are up and running in about 50 seconds, so if anything happens, it's back up and the users are clueless; they're doing their day-to-day work just like anyone would expect in a stable environment. And it's way easier to configure zero-trust networking. We're not really talking too much about that today; however, there are a lot of options outside of Istio that can make that a little bit easier. Istio still has some very cool features in the package, so to speak, for these kinds of security policies, but personally we're moving in the direction of Calico for that, so I'm not going to talk too much about it in this presentation.
B
So what were the challenges, before we got into Istio? Istio is obviously solving some sort of problem for us, right? Otherwise, why does it exist, and why would we use it? We're dealing with very complex systems. You've got all these microservices out there; I really like the Death Star analogy, where you've just got all these things, and you really have to have kind of a mesh, a Google Maps, so to speak, of your services. That's just one of the things that Istio solved. Another: as you deploy more services, it really does make testing a lot more involved. You're not just SSHing into a single machine and starting to look at its logs and stuff like that; you really have to be very thoughtful and plan where these things are going, how you're consuming logs, and all these different things. We'll get into how we do that in just a bit. And you must implement inter-service communication.
B
There's a lot that Istio introduces to the environment, so I think the future of some of this stuff is that Kubernetes will probably have a native layer, like Istio, to do all these things; I think we'd agree on that. So how does Istio actually solve some of these problems? Let's look at a quick example before we get into it. We can see that we've got three different versions of our reviews microservice right now; the first one might have been deployed three months ago.
B
Let's say the second one was deployed two months ago, and the third one might have just been deployed now. In this scenario, Istio essentially sits in between these services. Where you see this sidecar in the display, that is what Istio essentially configures to get the traffic before the microservice does. It uses things like the Gateway resource and virtual services, all those things in Istio that we'll talk about here in a bit, to basically configure how these requests are handled.
B
I'll use my mouse right out here. You've got your product page, which is what's loading on the screen, and then your product page goes and requests, probably in this component, something that will return essentially HTML showing either blank, black, or red reviews. So it's a very simple variation, but the system is determining how these requests actually get routed, and there are so many ways you can do that: path-based routing, DNS-based routing; you can use cookies, you can use headers.
C
B
Does Istio do this automatically? It has an option for it, but it does not always: it's got a mode that will not enforce that encryption, so to speak. So keep that in mind as you're going down your Istio path: you don't want to have that off if you really want your traffic to be encrypted between your nodes. Now, we've got a lot of cool, advanced service-routing methods that are a lot more powerful than your current ingress, or whatever environment you're in.
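The enforcement mode mentioned here is what Istio calls permissive versus strict mutual TLS. As a minimal sketch, in current Istio releases a mesh-wide PeerAuthentication resource turns enforcement on (this resource name is from Istio versions newer than the one discussed in this talk, where the equivalent object was MeshPolicy):

```yaml
# Require mTLS for all workloads in the mesh; PERMISSIVE would
# accept both plaintext and mTLS, which is the mode to avoid
# if you want traffic between nodes encrypted.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT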
B
So one of the examples we love is traffic shifting. We want to implement this across the board in our deployment procedure, to basically start to phase new versions out in a controlled manner. So certain systems, or certain features of, say, our ERP TetraCore product, might have a new version that we want to test for a specific client, and literally test because they ordered it, or they wanted an upgrade or something, but they're really isolated. We spin up the microservice, so in this instance we can say:
B
Version 1 is what everyone else is using right now; version 3 is what I'm pushing for my customer, or my specific instance. Now, in this scenario, we're putting 50% of the traffic to one and 50% to the other. However, you can do many different things with this: say, only if there's a cookie, route this user to this service. Test it out, look at the logs, consume what you can, really perfect that, and then we can actually use this kind of traffic shifting.
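The 50/50 split described here is expressed in Istio as route weights on a VirtualService. A minimal sketch, assuming a `reviews` service whose `v1` and `v3` subsets are already defined in a DestinationRule:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 50        # everyone else
    - destination:
        host: reviews
        subset: v3
      weight: 50        # the version being phased in
```

Shifting more traffic to v3 is just a matter of editing the weights and re-applying the file.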
B
And keeping the old service around, I think, is really a huge fundamental with Istio: being able to keep those services, and being able to visualize that you have these old versions of a service. As you get to 500 different variations of a service, for example, it's going to take time to manage that kind of scale, but ultimately, if that's the direction you're moving with your clusters and your environment, Istio is going to be your best bet; that's what we're finding.
C
B
What we would do internally for this kind of a change, depending on your methods of testing, is basically inject either a path-based or a cookie-based rule to say: okay, me as the developer, I'm the only one that gets version 3 for now, until I fix this problem. And then we would deploy our YAML file for this virtual service to say: okay, let's redeploy 50% to ordinary users.
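A developer-only rule like that can be sketched as a cookie match in the VirtualService; `dev-canary` is a hypothetical cookie name, and requests without it fall through to the default route:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - match:              # only the developer carrying the cookie sees v3
    - headers:
        cookie:
          regex: "^(.*?;\\s*)?(dev-canary=enabled)(;.*)?$"
    route:
    - destination:
        host: reviews
        subset: v3
  - route:              # everyone else stays on v1
    - destination:
        host: reviews
        subset: v1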
B
To speak to your question: with GKE and the traditional ingress, there's kind of this lag period that I think Istio really solves very cleverly, where, if you do roll back, it's very instant, rather than having something like GKE's external ingress trying to catch up with your actual ingress. Right? I hope that answers it.
B
Okay, so to ultimately sum this all up: this has helped us to really try to solve this problem of deployment anxiety, as we kind of call it. We want to know that a particular service is working perfectly before we send it out to the rest of the world, and even so, if we're even the least bit unsure, or we want to have a beta group, we can say: let's send it out to 10% of our users. I think the granularity and the control that you get with this is pretty amazing. We'll go into some more advanced stuff at the end, I think, because I don't want to run out of time either, but there is some really cool stuff like timeouts, and you can do circuit breaking, and gosh, there's so much that you can actually configure, to say: if you ping this service and it takes two seconds, do something else, or wait a second and retry, and whatnot. So anyway, moving on: as we deploy more and more of these things, it becomes really prudent that you visualize.
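The timeout behavior described a moment ago ("if it takes two seconds, do something else") can be sketched on a VirtualService as well; the service name is hypothetical:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
    timeout: 2s         # give up if the call takes longer than two seconds
    retries:
      attempts: 2       # retry a couple of times before failing the request
      perTryTimeout: 1s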
B
You really want to visualize what you're doing in this world of microservices. I think the visualization tools with this are just amazing. Grafana is one of them. There is another one that actually does a lot better route-based visualization; it starts with a K... it's on the tip of my tongue. Someone?
B
What was it, Kibana? Yes, thank you. But anyway, all of these logs are being sent out, and you can configure them to go to essentially either of these, and what happens is you start to be able to visualize a lot more than what you could normally, without having to work some magic on your own time, essentially coding these things yourself. You really don't want to do that.
B
We are currently still using the GKE interface to help us do that, but I think, as we go down this road and we have a lot more services, it's going to be prudent to have something like this. We even personally really want to absorb these logs ourselves, which is a whole other conversation, but I think the potential for that is actually very big.
B
So when should you use Istio? I've been talking a lot about this and whatnot, but just to make it super clear: there are a lot of people on this journey of microservices, and we're finding that you start out small, typically. Like us: we started out with literally just a microservice version of our monolith. Then we moved along, and we had, say, our front end and our back end. Now we've got about six services that are serving the same whole, and we're having a huge need for this.
B
We've got like four versions of every service, and we're seeing, okay, what really makes the most sense in certain environments, or for certain clients, and all these things might have variations. So the best advice I would give is: if you have a couple of services, and you have the need to change those services without affecting your user base and your actual application, Istio is going to be crucial. Absolutely, I mean it is.
A
B
So the question was: when you have a developer that's looking to come in and expose essentially their traffic, and only their traffic, for the sake of testing, debugging, and whatnot, how are we looking to accomplish that, whether that's header-based, cookie-based, DNS-based, etc.? We're really looking more at the cookie-based approach, just because I can dynamically change that. Our ERP and our sales model is very DNS-intensive, to separate the identity of our users, and so that's kind of out of the question; we don't want to have a beta-user subdomain, beta.tetracore.com or the like.
B
What we want is basically: I can log into any of my users', my client systems, change a cookie dynamically, and be able to then see, in real time, in their production environment, how that feature or that service would change, without necessarily giving them access. What that really allows us to do, we're finding, is test those changes in production without the client getting them yet.
C
B
So, basically, to speak to the security of that: you don't want to just have a cookie that someone can go change in their browser and say, oh my gosh, I'm just going to change this to see what happens. It's a super simple concept, but make sure that your default routing is what happens with zero presence of a cookie; you don't want the cookie to be the primary basis of routing. And then the other thing is to really validate that that cookie was actually changed application-side. So what we do a lot is encrypt, let's just call it a token, for example, and use that as a cookie that's authenticating to that service, and only for that instance. So then, if we forget about this thing five months from now, you can't use that same cookie; it's over, it's done.
B
You set a timeline on it, and then it's in the past. That way we can really control it: I don't just have this cookie on my laptop and then, when I'm at the client's office, forget it's in there, and all of a sudden I'm showing them a feature that I didn't want to. That's another security bit that is just really crucial in cookie-based routing.
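The expiring, application-validated cookie described here can be sketched as an HMAC-signed token. Everything below (the key, the cookie format, the field layout) is illustrative, not Sagecore's actual implementation:

```python
import base64
import hashlib
import hmac
import time

# Hypothetical signing key; in practice it would live in a secret store and be rotated.
SECRET = b"rotate-me-per-release"

def issue_routing_token(version: str, ttl_seconds: int) -> str:
    """Issue an expiring, signed value suitable for a routing cookie."""
    expires = str(int(time.time()) + ttl_seconds)
    payload = f"{version}:{expires}".encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload + b":" + sig.encode()).decode()

def validate_routing_token(token: str):
    """Return the version if the token is authentic and unexpired, else None."""
    try:
        version, expires, sig = base64.urlsafe_b64decode(token).decode().rsplit(":", 2)
    except Exception:
        return None
    payload = f"{version}:{expires}".encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # cookie was edited in the browser
    if int(expires) < time.time():
        return None  # the forgotten cookie from five months ago
    return version

# The application, not Istio, performs this check before honoring the cookie.
token = issue_routing_token("v3", ttl_seconds=3600)
assert validate_routing_token(token) == "v3"
```

The service only routes on a cookie that survives this check, so a hand-edited or stale cookie falls back to the default version.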
A
B
I would say that, from my time there, the biggest thing is the native compatibility with Calico. Since we were spending a lot of time on security, we want to implement zero trust, not only between our services but, very similar to our cookies, we never want there to be a situation that exposes a certain service to someone that should not have it. On the other side, as you layer these things in, there's kind of this misconception that these things are just inherently secure, and they're not. The compatibility with Calico, that's really the main thing, I would say, and Google pushing it, because it really seems like it's just going to be part of their Kubernetes offering very soon; it already is in beta and whatnot. But that's my best answer. I will throw it out there that, when you're using GKE, you might want to consider installing Istio yourself and not using their installer; their installer is a little less controllable.
C
B
Essentially, and I'll back up a little bit here. As of right now, we have a service, and that's basically a Deployment in Kubernetes and all that. What we would essentially do, as we implement Istio, is define our Gateway, define our VirtualService, and define version one of that as being our existing service. But as we roll out a new version, what we're doing internally is essentially marking each version with why it exists: why are we updating this?
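Those first steps (a Gateway, a VirtualService, and a v1 subset pointing at the existing service) might look like the following minimal sketch; all names are hypothetical:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: tetracore-gateway
spec:
  selector:
    istio: ingressgateway   # use Istio's default ingress gateway pods
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: tetracore
spec:
  hosts:
  - "*"
  gateways:
  - tetracore-gateway
  http:
  - route:
    - destination:
        host: tetracore     # the existing Kubernetes service
        subset: v1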
B
Whether that's all for a specific client, and then that cookie is only available there. To be honest, all of our stuff is DNS-based, so that would then lock it down to be in the scope of their testing. Then, as we perfect it and go through our protocol to say, okay, this is now perfect, it's ready for production in this client's environment, you get the client on your side and let them test a bit before you roll it out.
C
B
No, we've actually been toying with two concepts; we're kind of torn at the moment, because right now the way that our application really works is that your entry-point service never really changes, and so there are some things that we feel might actually be better off left outside of this, yeah, for the sake of less complexity.
C
B
You know, I'm glad you asked that, because ultimately, when I've been researching all of this, everyone thinks it's all-inclusive. But with the use of namespaces, you're able to actually define which is which; not to mention that using namespaces the proper way is actually better for security anyway. I mean, secure your secrets, people; you don't want this namespace to access that one, and that's all inherent to Kubernetes anyway. But I think we're actually leaning towards the split, so that our back end, all the actual web components, API calls, all of that, is handled within Istio, which will make our upgrade into production a lot easier. The reason being is the SSL certificates; that's kind of my main linchpin. It's not that we're lazy and don't want to migrate SSLs and do all this stuff.
B
But, you know, the chances of downtime with this stuff... you just really don't want to take chances with that. Someone having a security error pop up on their screen is not what you want, ever; I get that call, and it sucks across the board, and I get 90% of them alone, you know what I mean. So I think what we're actually going to do, and we're still testing, but it seems like it's completely within the realm of possibility... I wish I had a diagram or a whiteboard.
B
Is there one back here? Well, we can essentially say traffic comes in and we route to our entry point. Our entry point is literally just an entry point; that's what's looking at DNS. And then the way that we're securing things, because we also have to integrate with a lot of external services, is kind of our older problem.
B
So we've almost got, like, two versions of our API, and it's the same exact API, but you have to have a more secured version for certain external users. So the SSLs come in first at the application entry points and encrypt that connection, and then we let Istio do its thing. I believe it's the TLS termination setting in Istio: you can actually allow it to not terminate your SSL connection when you're going through, at least at its initial ingress.
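That no-termination behavior corresponds to TLS mode PASSTHROUGH on an Istio Gateway. A minimal sketch with a hypothetical host; note that routing for passthrough traffic is then matched on SNI rather than on HTTP attributes:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: passthrough-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: tls
      protocol: TLS
    tls:
      mode: PASSTHROUGH   # do not terminate; the backend keeps the original certificate
    hosts:
    - "app.example.com"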
B
We are using it; it's a little bit different in our case, because, being that we're doing a lot of JavaScript-based calls from the browser, that entry point is essentially where the connection is encrypted. That's the Kubernetes ingress, essentially, and then from there we're connecting to an access point that's always at a certain DNS name.
C
B
You handle the variation between domains on the Istio side, rather than having all of your certificates there; so, like, our wildcard certificate for our entry points is how we're getting around the problem. Now, I know, and I might be wrong, but it seems to me that even so, when you pass the initial ingress and you get to Istio, if you have TLS termination, I believe you can actually continue that. So then it's encrypted between your pods and your services, but then upstream, the response is still encrypted with that original certificate. That's the best explanation I have.
B
Now, I'm going to say this is my particular entry point; we're using the Kubernetes ingress to get into this entry point. Our entry point may have different versions based on certain things, but that's super easy when you only have a tiny variation. Right now it's a single entry point. When we go through and we send our request out, the way ours works, it's actually going essentially back through and routing into Istio for that second request. Does that make sense with this diagram? So we've got our Istio load balancer, so...
C
B
It's essentially like... let's say we've got, just call this, our little deployment area. Obviously, in your YAML files you're basically defining the label of version 1, version 2 (this isn't exactly syntax-correct), and then you're labeling, you're defining exactly what happens when there's a cookie-based match, or... however.
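The version labels sketched on the slide map to routable subsets through a DestinationRule; here is a minimal, syntax-correct version of that idea (names are hypothetical):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1     # matches pods labeled version=v1 in the Deployment
  - name: v2
    labels:
      version: v2
```

A VirtualService can then reference `subset: v1` or `subset: v2` in its routes, including with cookie-based matches.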
C
B
So essentially, when you're sending that request back out the second time, what you can do, when it comes back in, is actually ping a database, essentially, and then route based on the result and the headers you have in that method, instead of using the YAML files to essentially distribute your features, which is... that is difficult.