From YouTube: Dual Stack Cluster Setup
Description
Presented by Josh Tischer at IstioCon 2022.
Dual Stack support is very limited in today's cloud ecosystem. Learn how to run/test Istio on a Dual Stack cluster in AWS on both OpenShift 4.8+ and kubeadm. OpenShift 4.7+ is one of the few options that officially supports Dual Stack mode for bare metal clusters and Azure. We are excited to share our experience and empower your team with another option for Dual Stack support.
Hi everybody, my name is Josh Tischer and I'm a lead DevOps engineer at Aspen Mesh, and I want to talk to you today about dual stack networking on Istio and AWS.

To get started, we really need to understand some of the prerequisites for dual stack networking. You need to be aware that dual stack networking wasn't enabled until Kubernetes 1.20.
That will include things like the bare metal network cards and the configuration of the Linux operating systems on your hosts.
The overlay networking refers to the virtualized networking, or the CNI layer, in the Kubernetes cluster itself. And then, of course, your applications have to respond to IPv6 as well as IPv4, so make sure you upgrade those.
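To make that application-level point concrete, here is a minimal sketch (not from the talk) of what "responding to both families" means for a server: a Python socket that listens dual stack by clearing `IPV6_V6ONLY`, so even an IPv4 client is accepted and shows up as an IPv4-mapped IPv6 address.

```python
import socket

# Listen on an IPv6 socket with IPV6_V6ONLY cleared, so the same socket
# also accepts IPv4 clients (they appear as ::ffff:a.b.c.d addresses).
srv = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
srv.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
srv.bind(("::", 0))                     # all interfaces, ephemeral port
srv.listen(1)
port = srv.getsockname()[1]

# Connect over plain IPv4; the dual-stack listener still accepts it.
cli = socket.create_connection(("127.0.0.1", port))
conn, addr = srv.accept()
peer = addr[0]                          # e.g. ::ffff:127.0.0.1
conn.close(); cli.close(); srv.close()
```

An app that binds only `AF_INET` (or only `AF_INET6` with `IPV6_V6ONLY` set) will silently drop one family, which is exactly the class of issue to look for when upgrading.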
To dig more into Istio and dual stack: Aspen Mesh has been working on these features for about a year, and we've also been working with the community to open source them.
That's what we really love infrastructure as code and automation for. To get started on OpenShift on AWS, there are a lot of different ways to do that: we're following the IPI, or installer-provisioned infrastructure, path, deployed from their installer and client tools, which require the AWS CLI and some other things. Please take a look at our open source repository.
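As a rough sketch of that IPI flow (the actual scripts live in the repository; the directory name here is illustrative):

```shell
# Generate an install-config.yaml, edit it for your AWS account, then
# let the installer provision the infrastructure for you.
openshift-install create install-config --dir=dual-stack-cluster
openshift-install create cluster --dir=dual-stack-cluster --log-level=info
```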
So we've prepared another script here that follows the OpenShift documentation on how that's possible, so go take a look at that. And then, of course, when you're done, make sure you clean up your resources so that you don't incur extra costs that you weren't aware of; because OpenShift clusters require five or six nodes, make sure that you clean those up.
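For an IPI-provisioned cluster, teardown can go through the same installer (again with an illustrative directory name, matching whatever you used at install time):

```shell
# Tears down the AWS resources the installer created for this cluster.
openshift-install destroy cluster --dir=dual-stack-cluster
```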
So, to dig more into the underlay, physical networking side of AWS: we've of course got VPC layers, load balancers, subnets, all the way down to the EC2 instances themselves. All of those infrastructure pieces need to be able to respond to IPv4 and IPv6.
Something that we have found as a part of this is that the load balancers themselves for AWS don't fully use IPv6 directly; they do some IPv4 network translation. So you will see that in your curl requests.
So once you've dealt with your underlay network and your physical layer, we move on to the overlay networking. You can see here we've got two samples of what it might look like to upgrade your CNI and your Kubernetes cluster configuration for dual stack.
You can see that on OpenShift it is a really simple patch applied to the network, versus kubeadm, which has much more complicated steps, probably including restarting the cluster. On OpenShift you can do it on the fly, and it will accept the change and reload the pods as necessary for any of the resources that it needs.
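The two samples themselves aren't reproduced in this transcript, but the general shape is documented upstream. On OpenShift, the patch adds an IPv6 pod and service CIDR to the cluster Network resource (the CIDR values here are the documented examples, not necessarily the talk's):

```shell
oc patch network.config.openshift.io cluster --type='json' --patch '[
  {"op": "add", "path": "/spec/clusterNetwork/-",
   "value": {"cidr": "fd01::/48", "hostPrefix": 64}},
  {"op": "add", "path": "/spec/serviceNetwork/-", "value": "fd02::/112"}
]'
```

On the kubeadm side, the dual-stack CIDRs are part of the ClusterConfiguration (and on Kubernetes 1.20/1.21 the IPv6DualStack feature gate also had to be enabled), which is why it typically means regenerating config and restarting components:

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
featureGates:
  IPv6DualStack: true
networking:
  podSubnet: 10.244.0.0/16,fd00:10:244::/56
  serviceSubnet: 10.96.0.0/16,fd00:10:96::/112
```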
Once you have your underlay network and your overlay network prepped, you can install Aspen Mesh. We've got our installation version, 1.11.8-am2, which has our dual stack network features enabled and on.
On the right here is a sample overrides file that will help address some of the configuration issues that you might hit getting it installed on OpenShift.
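The overrides file itself is in the repository; as a hypothetical sketch of the kind of settings involved, an Istio-style install on OpenShift typically starts from the openshift profile and enables the Istio CNI plugin:

```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: openshift   # OpenShift-specific defaults
  components:
    cni:
      enabled: true    # sidecar init containers can't get NET_ADMIN on OpenShift
```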
From here, once you have Aspen Mesh installed, you can validate your pod networking. Our test dual stack script here installs a sleep pod and an httpbin pod in a dual stack namespace, just so you can run some curl commands, and maybe some nslookups, to validate that it is indeed responding to that network traffic appropriately.
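The pod and namespace names below are illustrative of what such a script checks, running from inside the sleep pod against the httpbin service:

```shell
# From the sleep pod, hit httpbin and resolve its service name;
# with dual stack working, DNS should return both A and AAAA records.
kubectl exec deploy/sleep -n dual-stack -- curl -s http://httpbin:8000/get
kubectl exec deploy/sleep -n dual-stack -- nslookup httpbin
```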
Taking that a step further, in that same script there is a virtual gateway, an Istio ingress gateway, that sets up our load balancer so that we can validate public access as well, all the way to our httpbin pod, and you can see that this is accepting traffic on both IPv6 and IPv4.
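A sketch of that external check, assuming the default istio-ingressgateway service name:

```shell
# Grab the load balancer hostname from the ingress gateway service,
# then force curl over each address family in turn.
LB=$(kubectl get svc istio-ingressgateway -n istio-system \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
curl -4 -sS "http://$LB/get"
curl -6 -sS "http://$LB/get"
```

Given the AWS load balancer translation noted earlier, the IPv6 request may still arrive at the pod with an IPv4 source address, which is visible in the httpbin response body.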