From YouTube: SoloCon 2022 [Lightning Talk]: IPv6 and Service Mesh
Description
SoloCon 2022:
[Lightning Talk] IPv6 and Service Mesh
Speakers:
Kasun Talwatta
Field Engineer-APAC, Solo.io
Marino Wijay
Developer Advocacy and Relations, Solo.io
Abstract:
The rapid depletion of Public IPv4 addressing has led to increased usage and need of IPv6. In this lightning talk, we discuss how Service Meshes can support and extend application networking using IPv6.
Track:
Service Mesh and Application Networking
A: Hi there, and welcome to today's lightning talks. I'm Samantha Kim, and I'm part of the marketing team here at Solo.io. I'm excited to welcome our lightning talk speakers today, so please help me extend a warm welcome to Kasun and Marino, who will be sharing information about IPv6 and service mesh. Kasun, Marino, over to you.
B: Sorry for that. Welcome to the session titled Multi-Cluster Communications with IPv6. Hope you've all been enjoying SoloCon so far. My name is Kasun. I'm a field engineer at Solo in the APAC team, based out of New Zealand, and with me is my colleague Marino. Do you want to say a few words?
C: Sure. Hey everyone, my name is Marino Wijay. I'm a developer advocate at Solo up in North America, and I'm here to talk to you about IPv6 and service mesh. Looking forward to it, and welcome to SoloCon, everyone.
B: So before we talk about some of the challenges in today's internet, let's take a step back and look at how the Internet Protocol has evolved. I'm sure everyone has heard of the Internet Protocol: it's just the main set of rules that govern the exchange and transmission of data between devices on separate networks.
B: The first non-experimental version, Internet Protocol version 4, or IPv4 as we call it, has been the cornerstone of the internet since its inception. When the specification was first published, nobody really envisioned the increasing number of devices and services communicating over the internet. There are massive amounts of traffic over the internet today, especially due to technologies such as IoT and 5G, and there's even 6G coming out in the future.
B: There are other challenges too, you know, such as IP mobility, where devices moving between networks have to maintain a constant or permanent IP address; inefficiencies in address allocation, in other words inefficient IP distribution, where organizations ran out of their supply of IP addresses; and then there are security implications, such as, you know, addresses being easily discoverable. And there are many, many more challenges that we're facing today. Next slide, please. So, to meet the challenges and demands...
B: But, you know, if we talk about the IPv4 side of things: IPv4 has 32 bits. So, sorry, Marino, I think we may have gone up a slide there, sorry. So IPv4 has, you know, 32 bits of address space, which means that it only allows about 4.3 billion unique addresses, so there are obvious issues with that. So if you move to the next slide, thank you. So what's the catch, you ask, right? Why hasn't it kind of taken off?
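To put the 4.3 billion figure in perspective, a quick back-of-the-envelope calculation (a minimal sketch, not shown in the talk itself):

```python
# IPv4 addresses are 32 bits wide; IPv6 addresses are 128 bits wide.
ipv4_space = 2 ** 32
ipv6_space = 2 ** 128

print(ipv4_space)                 # 4294967296, roughly 4.3 billion
print(ipv6_space)                 # about 3.4 x 10**38 unique addresses
print(ipv6_space // ipv4_space)   # the IPv6 space is 2**96 times larger
```

The extra 96 bits are the entire point: the address space grows so large that exhaustion stops being a practical concern.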
B: You know, similar to IPv4? So if you look at these Google Trends graphs, IPv6 adoption has been quite slow, and that's largely due to, you know, people being reluctant to migrate across to IPv6.
B
So
if
you
take
a
ipv6
workload,
they
may
not
be
able
to
talk
to
ipv6
only
workload.
You
need
some
sort
of
netting
or
translation
in
between
to
be
able
to
do
that,
and
some
people
don't
even
understand
enough
about
ipv6.
B
You
know
it's
just
it's
just
that
complex
compared
to
a
pe4
and
then
having
to
remember
the
ipv6
edges.
Remember
it's!
It's!
128
bits
long
right,
so
it's
just
really
difficult
to
remember
as
well,
so
for
that
reason,
ipv4
and
ipv6
will
coexist
and
it
will
exist
for
some
time.
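As an aside on why the addresses are hard to remember: even with the standard compressed notation, the full form of a 128-bit address is long. A small sketch using Python's ipaddress module (the address below is from the RFC 3849 documentation range, not from the talk):

```python
import ipaddress

# A 128-bit IPv6 address from the documentation prefix 2001:db8::/32.
addr = ipaddress.IPv6Address("2001:db8::1")

# The compressed form elides runs of zero groups with "::" ...
print(addr.compressed)  # 2001:db8::1
# ... but the full form spells out all eight 16-bit groups.
print(addr.exploded)    # 2001:0db8:0000:0000:0000:0000:0000:0001
```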
B
It
has
been
released
as
a
single
stack
mode-
that's
been
out
since
118
in
in
beta,
and
it's
also
been
released
as
dual
stack
mode
and
it's
been
out
since
one
to
one
and
it
actually
has
gone
ga
quite
recently
in
one
two
three.
So
what
do
these
actually
mean?
So
single
stack
means
that
it's
it's
a
single
interface
single
ip
for
your
workloads,
whereas
in
dual
stack
it's
actually
multiple
different
ips,
ipv4,
ip
and
then
ipv6
ip.
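In Kubernetes terms, that single-stack versus dual-stack choice is expressed per Service. A minimal sketch, assuming a dual-stack-enabled cluster (names and ports are illustrative, not from the talk):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app                      # illustrative name
spec:
  ipFamilyPolicy: PreferDualStack   # ask for both an IPv4 and an IPv6 cluster IP if available
  ipFamilies:                       # optional: order expresses preference
    - IPv6
    - IPv4
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```

Setting `ipFamilyPolicy: SingleStack` with `ipFamilies: [IPv6]` would instead request the IPv6-only behavior described above.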
B: Now, you know, there are some factors that may or may not allow IPv6 workloads in the cluster. You need to make sure the cloud vendor support exists in the underlying infrastructure.
B
Aws
is,
is
one
of
the
first
cloud
vendors
to
to
introduce,
ipv6
and
and
a
set
of
apis
are
on
that,
and
also
in
kubernetes
there's,
what's
called
the
cloud,
the
container
network
interface
or
the
cni,
it's
actually
the
so
the
underlying
network
layer
in
kubernetes,
and
so
that
needs
to
support
ipv6
as
well,
regardless
of
whether
it's
single
or
dual
stack.
B: You know, there are popular CNIs, like Calico and Cilium, that support both v4 and v6. And so, you know, before I hand over to Marino, I just also want to reiterate that when you plan to run IPv6 workloads, you need to make sure the cloud vendor support is there, and also really think about whether you want single-stack or dual-stack compatibility, because in the case of single stack you may have some challenges talking to IPv4 workloads, either internally or externally.
C: Yeah, thanks Kasun. So, when considering or using a service mesh, the underlying Kubernetes cluster needs to support IPv6, and as Kasun already mentioned, as of Kubernetes 1.23, IPv4/IPv6 dual stack is a stable feature. So consider this when provisioning services for deployments in your cluster. If you aren't using a service mesh, then you would normally expose your services as usual, but if you're using something like dual stack, then you just need to specify this as a flag in your service's configuration.

C: Now, in the case of using a service mesh, the ingress gateway object will need to ensure its service IP is using the necessary stack, so whether we want v4 or v6, we just go ahead and specify that. But realize that a lot of these requirements actually happen at layer 3: we are simply telling Kubernetes to honor the v6 addresses we would consume at the service level. Furthermore, because pods are on an IPv6-enabled network, they have no issues communicating with that ingress gateway and anything beyond it.
C: The last remaining piece we need to consider is actually DNS, and the quad-A, or AAAA, records. Note that when we're actually creating a DNS record, the process doesn't change between IPv4 and IPv6: the operation for creating a AAAA record doesn't deviate from that of an A record. AAAA just maps a name to an IPv6 address.
C: When this is in place, it becomes quite straightforward to proceed to deploy a service mesh with IPv6. With a service mesh, you can go ahead and deploy the control plane, which, if you're using Istio, would be istiod, and proceed to deploy, let's say, an ingress gateway resource that handles incoming HTTP requests.
C: Nothing changes from an operational point of view, as the service mesh provides the necessary abstractions to allow for less concern around the underlying IP stack or the TCP stack. Now, let's take a look at Gloo Mesh and how this doesn't really change, even if we're using v6. So, Gloo Mesh aims to simplify the configuration of things like your security policies and traffic management, while also providing additional extensibility to the actual Envoy proxy, that sidecar that sits alongside the main containers in your pods.

C: The Gloo Mesh management plane itself sits in a Kubernetes cluster which actually doesn't participate in the service mesh; however, it does control all of the clusters that are running Istio and the Gloo Mesh agent. These are our workload clusters, where our services and our pods live. As long as the Gloo Mesh management plane itself has IPv6 reachability to the workload clusters, those ones running Kubernetes and Istio, it can control these workloads and push things like traffic management or even security policies.

C: When considering security and working to create, let's say, an authorization or RBAC policy, Gloo Mesh access policies can streamline these configurations.
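To make the kind of rule being discussed concrete, here is not the Gloo Mesh API itself but a plain Istio AuthorizationPolicy, the sort of lower-level resource such an access policy abstracts over (names, namespaces, and the service account are illustrative assumptions):

```yaml
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-frontend        # illustrative name
  namespace: backend
spec:
  action: ALLOW
  selector:
    matchLabels:
      app: backend            # applies to pods labeled app=backend
  rules:
    - from:
        - source:
            principals:
              - "cluster.local/ns/frontend/sa/frontend"  # caller's mTLS identity
```

The identity here is a SPIFFE-style service account principal, not an IP address, which is part of why the v4/v6 question doesn't surface at this layer.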
C: If we attempt to route to various services or endpoints inside of our clusters, virtual services, destination rules, and service entries would normally be deployed, except we can simplify this using a RouteTable resource in Gloo Mesh. If you think about it, nothing here has to do with IPv6 or even IPv4.
C: We are working on an abstraction on top of Istio to just streamline those configs, but IPv6 just works. That being said, inbound communications should continue to function, provided that you have DNS set up correctly. So, in a nutshell: IPv6 is there, Gloo Mesh is there, Istio is present, but at the end of the day, whether you had IPv4 or IPv6 as a stack doesn't really matter. So, to wrap up: IPv6, despite being sidelined by NAT technologies for so many years, has many use cases, with many cloud providers natively supporting IPv6 in their core networks.
C: There is even full, stable support for this inside of Kubernetes 1.23, and while Istio provides a powerful service mesh that gives us traffic management, observability, and even security capabilities, we can leverage Gloo Mesh to streamline these configurations and simplify how we go about running a service mesh with IPv6.