Description
Open-source kube-router supports only IPv4 addresses. Due to the scarcity of IPv4 addresses, the Kubernetes community and industry are moving towards dual-stack (IPv4 alongside IPv6). There is also a need for direct server return (DSR) with dual-stack: video streaming and online gaming are growing day by day, and streaming such applications calls for a direct-server-return method in the CNI for dual-stack. In this talk I will present how we can achieve dual-stack, what direct server return is, and why the DSR approach is needed in the present situation.
Today I'm going to talk about dual-stack Kubernetes with direct server return. First, a little background about me: I'm Kanan, working as a senior technical lead at Tata Communications Limited, with more than 13 years of experience in the tech industry. I've provided my Twitter and LinkedIn links on the slide. So, let's get into the actual talk.

The agenda is: what DSR is, how DSR data routing happens, what the business use case for DSR is, what a dual-stack Kubernetes setup with DSR looks like, then the demo, and then questions.
Basically, what is DSR? DSR is nothing but direct server return. Whenever you request any data from a server as a client, say from a mobile application, the request first reaches the load balancer in front of that server, and then it reaches the target server from which we need to fetch the data. For security purposes, by default the load balancer applies source NAT, which means the source IP is completely translated to a different IP address before the request is sent on to the application.

In the traditional way of routing, as a client on the internet we request some data from the server; first the request reaches the load balancer, and then the load balancer does the source NATing.
A
Here
the
problem
is
like,
whatever
the
whatever
I
mean
from
the
intern,
from
the
client
to
server
whatever
the
data
path,
it
is
sending
with
the
same
data
part
it
has
to
reach
the
back,
but
this
we
cannot
do
it
for
all
the
application
say,
for
example,
for
highly
data
streamed
applications
and
all
we
cannot.
We
cannot
send
the
same.
I
mean
I
mean
we
can
send
it
via
load
balancer.
A
A
First
of
all
the
data
I
mean
whatever
the
data
we
are
requesting,
so
that
request
will
reach
to
the
load,
balancer
and
then
load
balancer
will
send
it
to
the
corresponding
web
server
and
then
web
server
will
directly
reply
back
to
the
client
through
the
switch.
A
Basically,
here
we
are
avoiding
the
so
many
hops
you
can
see
like
from
the
client
it
has
to
reach
load
balancer
and
then
from
load
balancer
to
the
particular
node
correct
and
then
from
the
node
to
the
application.
But
here
in
this
case
the
application
will
directly
reply
back
to
the
client
which
were
originated
by
which,
which
originated
the
traffic
yeah.
So
I
mean
basically,
this
is
this:
is
the
dsr.
So what is the business use case for DSR? Number one: suppose you, as the operator, want to see how much of the server each client is utilizing, and you want to bill or filter based on the source IP. You can do that with the DSR method. Second, if on the server you want to allow only particular source IPs, you can also go with the DSR method, because we never masquerade the incoming source IP. Since we have full visibility of the source IP, we can secure our application based on trusted sources. And then there are the trending ones: video games, multimedia, and content delivery networks.
The reason is that the moment you request a video, say you ask YouTube to play a video, that request is only one or two IP packets, but the reply might be a very large amount of data. Whenever you make a request, it goes via the load balancer, because your request is simply "hey, I want this particular video."

That is the only request you will have, but the video content is such a huge amount of data that it should not come back through the same load balancer. If we allowed all that data to be transferred via the load balancer, the load balancer would get overloaded. To avoid exactly that kind of condition, we go with direct server return.

Basically, streaming services need to move a huge amount of video data from the server to the mobile client application, but whenever you make the request, that request is a very small number of packets, while the response is a huge number of packets. That is the main reason. Email is another of the best business use cases for direct server return: if you attach very large content to a mail, and all the mails go via the load balancer, the load balancer cannot cope with it. So that's the whole idea behind direct server return.
So here, in our case, yes, we have brought up dual-stack Kubernetes with direct server return. This is our whole architecture. First of all, what is required for dual-stack Kubernetes? The pod should support both IPv4 and IPv6; it should be allocated both IP addresses. That is the main criterion.
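As a minimal sketch of that requirement (the subnets here are placeholder values, not the ones from my cluster), a kubeadm ClusterConfiguration for a dual-stack cluster looks roughly like this:

```yaml
# Hypothetical kubeadm config: dual-stack pod and service CIDRs.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  # IPv4 range followed by an IPv6 range; each pod gets one IP from each family.
  podSubnet: 10.244.0.0/16,fd00:10:244::/56
  serviceSubnet: 10.96.0.0/16,fd00:10:96::/112
```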
So now, with the pod networking, we support both IPv4 and IPv6. In the CNI configuration we explicitly mention what the pod IPv4 subnet and the pod IPv6 subnet are. Based on that, your pod will be allocated both an IPv4 and an IPv6 pod IP. And then, yes, we need to have a router, a physical router.
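A CNI config with both address families might look roughly like the following (a sketch with placeholder names and subnets; the CNI used in the demo generates its own variant of this on each node):

```json
{
  "cniVersion": "0.3.1",
  "name": "mynet",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "kube-bridge",
      "isDefaultGateway": true,
      "ipam": {
        "type": "host-local",
        "ranges": [
          [{ "subnet": "10.244.1.0/24" }],
          [{ "subnet": "fd00:10:244:1::/64" }]
        ]
      }
    }
  ]
}
```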
There might be a case where a single deployment has multiple replicas, which might run across all the nodes. That is one of the use cases: here it is running on both nodes, and we can consider this as one deployment.

Now, whenever data is requested, first it reaches the internet, then it reaches the router, and now the router should send it to the corresponding Kubernetes cluster node. Prior to that, there is the service YAML.
A
There
is
a
parameter
called
external
ip.
So
the
moment
you
have
mentioned
the
ipv4
as
well
as
ipv6
external
ip,
then
your
kubernetes
cluster
node
should
advertise
those
external
ip
to
the
physical
router.
Okay.
Basically,
the
first
step
would
be
your
kubernetes
cluster
should
have
both
ipv4
and
ipv6
capability.
I
mean
your
power
should
have
allocated
with
both
ipv4
as
well
as
ipv6
ip
number
one
number
two
on
your
service
yaml
there
is
a
parameter
called
external
ip.
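As a sketch (the addresses are placeholders, not my real public IPs, and the DSR annotation is assumed to match upstream kube-router's), a dual-stack Service with external IPs looks something like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
  annotations:
    # Assumed kube-router annotation enabling tunnel-mode DSR for this service.
    kube-router.io/service.dsr: tunnel
spec:
  ipFamilyPolicy: RequireDualStack
  selector:
    app: web
  ports:
    - port: 80
  externalIPs:
    - 203.0.113.10    # public IPv4 (placeholder)
    - 2001:db8::10    # public IPv6 (placeholder)
```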
Okay, then your Kubernetes node will have ipvsadm. ipvsadm is a Linux utility which is basically used to route the data in an effective way. So now we have allocated some IPv4 and IPv6 external IPs for the pods, and the moment the traffic originates from the client, first it reaches the internet, then it reaches the router.
The router routes the incoming packet towards the Kubernetes cluster, forwarding the data along the shortest path to a cluster node. Okay, then, once the data has reached the Kubernetes cluster node, how does it get routed to the pod, and how does the reply go back? That's what we are going to see in depth in the further slides; how we achieved it is what I'm going to explain now.
There is the open-source kube-router, which supports IPv4 with DSR. Yes, actually, we took the open-source kube-router package that supports IPv4 with DSR, and then on top of it we enabled dual stack externally.
Okay, so for the same application, for both IPv4 and IPv6, it can send and receive traffic at the same time.

So how are we actually achieving this?
There is something called the mangle iptables table. Whenever the service is applied, first we create a mangle table rule. In the mangle table there is a parameter called the firewall mark: we set the firewall mark, and whenever an IP packet comes in whose destination IP is the external IP mentioned in the service, this mangle rule marks the incoming packet with a hexadecimal value.
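A rough sketch of those marking rules (the addresses and mark values are placeholders made up for illustration):

```shell
# Mark packets destined for the IPv4 external IP.
iptables  -t mangle -A PREROUTING -d 203.0.113.10 -p tcp --dport 80 \
          -j MARK --set-mark 0x4d8

# Same idea for the IPv6 external IP, using ip6tables.
ip6tables -t mangle -A PREROUTING -d 2001:db8::10 -p tcp --dport 80 \
          -j MARK --set-mark 0x4d9
```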
And then, once we have the firewall mark, we add the entry to the ipvsadm table. In the demo I will show you each of these: every ipvsadm entry contains its pod IP. The moment data comes in whose destination IP is the external IP, the mangle table marks the packet with the firewall mark, and then it is routed to the ipvsadm table.
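Expressed as ipvsadm commands (again with placeholder marks and pod IPs), a firewall-mark virtual service with a tunneled real server looks roughly like:

```shell
# Virtual service keyed by firewall mark rather than by VIP:port.
ipvsadm -A -f 0x4d8 -s rr

# Real server = the pod IP, forwarded with ipip tunneling (-i) so the
# original packet, including its source IP, is preserved inside the tunnel.
ipvsadm -a -f 0x4d8 -r 10.244.1.5 -i

# The IPv6 firewall mark gets its own table (-6) with the pod's IPv6 address.
ipvsadm -6 -A -f 0x4d9 -s rr
ipvsadm -6 -a -f 0x4d9 -r fd00:10:244:1::5 -i
```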
A
Once
it
is
routed
to
the
ipvs
adm
table
in
the
ipvs
stadium
table,
we
will
be
having
an
entry
for
this
particular
firewall
mark.
These
are
all
the
I
mean
I
mean
following
some
following
parts
would
be
supported.
I
mean
such
kind
of
entry
would
be
that
I
mean
I
am
showing
it
in
the
demo
like
that,
I
mean
for
in
a
deployment.
A
There
might
be
a
multiple
replica
correct,
so
there
might
be
a
chance
that,
like
a
single
single,
I
mean
for
the
entire
deployment
will
be
having
one
ipv4
and
one
ipv6
external
ip.
So,
in
that
case,
for
ipv4
firewall
mark
will
be
having
the
corresponding
ipv4
part
ip
and
then
ipv6
firewallbar
will
be
having
the
corresponding
ipv6
for
ip.
Now, since we were creating the ipvsadm entries in firewall-mark mode, the incoming data is never masqueraded. Rather, on top of the incoming packet, IPVS adds another 20 bytes of data: an outer IP header whose source IP is the node IP and whose destination IP is the pod IP, and then it simply routes the data.
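To make those "20 bytes" concrete: a minimal IPv4 header with no options is exactly 20 bytes, which is the per-packet overhead ipip encapsulation adds. A small sketch that packs such a header (illustrative field values, not kube-router's actual code):

```python
import struct

def ipv4_header(src: str, dst: str, payload_len: int) -> bytes:
    """Build a minimal (option-less) IPv4 header, as used by ipip encapsulation."""
    def ip_bytes(addr: str) -> bytes:
        return bytes(int(part) for part in addr.split("."))

    version_ihl = (4 << 4) | 5          # IPv4, IHL 5 -> 5 * 4 = 20-byte header
    total_len = 20 + payload_len        # outer header + encapsulated inner packet
    return struct.pack(
        "!BBHHHBBH4s4s",
        version_ihl, 0, total_len,      # version/IHL, TOS, total length
        0, 0,                           # identification, flags/fragment offset
        64, 4,                          # TTL, protocol 4 = IP-in-IP
        0,                              # checksum (left 0 in this sketch)
        ip_bytes(src), ip_bytes(dst),
    )

# Outer header: node IP -> pod IP, wrapping a 1460-byte inner packet.
outer = ipv4_header("192.0.2.11", "10.244.1.5", 1460)
print(len(outer))  # 20
```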
So on the pod we were creating the ipip tunneling, so that once the packet reaches the pod, it just removes the first 20 bytes of header. From the remaining data it is able to detect that the packet belongs to it, and then it sends it to the application. Once the application has processed it, it replies directly back to the client.
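Inside the pod's network namespace, the setup is conceptually something like the following (a sketch with placeholder device names and addresses; kube-router performs the equivalent steps itself):

```shell
# Create an ipip tunnel device in the pod netns to decapsulate the outer header.
ip tunnel add ipip0 mode ipip local 10.244.1.5
ip link set ipip0 up

# Assign the service's external (public) IP locally so the inner packet's
# destination IP is recognized as local and delivered to the application.
ip addr add 203.0.113.10/32 dev ipip0
# (For the IPv6 path, the analogous device would be an ip6tnl tunnel.)
```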
Now for the demo. In my simple Kubernetes cluster I am running just one pod, which is a simple web server. Okay, so you can see that inside that pod two IPs were created: one for IPv4 and another one for IPv6. We have enabled the dual-stack functionality for the entire Kubernetes cluster, and with respect to the CNI we have assigned both an IPv4 and an IPv6 subnet. In this way, for a single pod, we are allocating both an IPv4 and an IPv6 address.
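To check the dual-stack allocation on a pod (assuming a pod named web as in this demo):

```shell
# status.podIPs lists one IP per family on a dual-stack cluster.
kubectl get pod web -o jsonpath='{.status.podIPs}'
# e.g. [{"ip":"10.244.1.5"},{"ip":"fd00:10:244:1::5"}]
```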
So you can see that every deployment or every stateful set has its corresponding service. Likewise, yes, our application also has a service. Here you can see there are two IP addresses mentioned; in fact, these two IP addresses are public IPs. One is for IPv4 and another one is for IPv6.
So now we apply this. In fact, we have just seen it: no tunnels were created yet; for this particular pod we only have both the IPv4 and the IPv6 pod IPs. These are pingable, and you can see that they are pingable within the Kubernetes cluster.
Then we route this packet to ipvsadm. Yes, actually we are setting the MSS as well; in fact, this is just to avoid problems in case an MTU has been set. It auto-calculates the MTU, accounting for the tunnel overhead, and then adjusts accordingly.
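That MSS adjustment can be expressed as an iptables rule like this (a sketch; the clamp-to-PMTU form lets the kernel derive the right MSS from the tunnel-reduced MTU):

```shell
# Clamp TCP MSS on SYN packets so flows fit through the 20-byte-smaller
# tunnel MTU instead of relying on path MTU discovery.
iptables  -t mangle -A POSTROUTING -p tcp --tcp-flags SYN,RST SYN \
          -j TCPMSS --clamp-mss-to-pmtu
ip6tables -t mangle -A POSTROUTING -p tcp --tcp-flags SYN,RST SYN \
          -j TCPMSS --clamp-mss-to-pmtu
```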
A
I
mean
that
is
the
functional
divided
by
default
functionality
of
cube
router,
but
yeah
we
have
extended
with
ipv6
also
so,
if
you
have
seen
it
so
here
with
this,
we
are
setting
the
in
case.
If
this
destination
ip
comes.
We
are
just
said.
We
are
just
marking
that
particular
tcp
packet
with
this
id
and
when
you
see
this
ipvs
adm
command,
you
can
see
like
you
know,
there
are
two
firewall
marks
were
created.
The highlighted entries are the two firewall marks. For each firewall mark the corresponding iptables rule was created, so that when the packet comes in we mark it, and then it is routed to ipvsadm. Basically, ipvsadm is a Linux utility which forwards the packet to its destination, and in our case that destination is a pod IP.
So when data arrives with the external destination IP, we route it to the ipvsadm table by marking the packet, and for that same firewall mark we have the corresponding pod IP. We have an individual mark for each individual IP and port combination. So here I have one IPv4 and one IPv6 external IP.
These are public IPs, so the corresponding firewall marks have been calculated and the corresponding pod IPs assigned in the ipvsadm table. Now, because of this ipvsadm table, the incoming packets are routed directly to the pod. The pod might be placed on the same node or on a different node; if it is on a different node, the packet will be routed there, but before routing it, something happens first.
What happens is that it encapsulates the IP packet with the pod IP as the outer destination, and then it simply routes it. Basically, this is the pod IP, so the packet reaches the destined pod.
Since we have the tunnel under the pod's IPs, once the packet reaches here the pod just removes the first, outer IP header, which was addressed to the pod. Then, when it looks at the inner IP address, it sees that it belongs to the external, public IP, and that same public IP has been assigned inside our pod, so it allows the application to consume the data.
The application consumes the data with the source IP preserved. Actually, we preserved the source IP by putting another IP header on top of the original packet rather than rewriting it. Because we tunneled it, when the packet arrives we just remove the first 20 bytes and the rest of the packet is intact, so our application is able to consume it.
Then the application replies directly back: the source IP of the reply is the same public IP, and the destination is the client that originated the traffic. With this, the packets go straight back towards the source IP; in between there might be some default gateways to carry the packet to the user.
The reply never follows the same path as the inbound traffic. It does not have to come back through the firewall rules, then the actual physical router, then the load balancer, then the Kubernetes node.
We might also keep some ingress, through which inbound traffic would normally reach the pod. But with this dual-stack direct server return solution, in a single deployment we can have a number of public IPs assigned on both IPv4 and IPv6, and at the same time the application can serve both kinds of traffic.
Again, it will be reachable. In fact, this pod is reachable from outside my network as well, because we were advertising these IPs towards the router. So that's how the complete dual stack with DSR has been established.
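That advertisement towards the router is kube-router's routing function; enabling it looks roughly like this (a sketch of the relevant kube-router flags, assuming upstream flag names, with a placeholder peer address and ASN):

```shell
# Run kube-router with external-IP advertisement over BGP,
# peering with the physical router.
kube-router --run-router=true --run-service-proxy=true \
            --advertise-external-ip=true \
            --peer-router-ips=192.0.2.1 --peer-router-asns=64512
```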
So, thank you. Thank you so much.