From YouTube: Kubernetes networking and Cilium
Description
Cilium is a solution for providing, securing, and observing network connectivity between workloads. In this video, we'll start with the basics of Kubernetes networking and explain how the Container Network Interface (CNI) plugins, such as Cilium, implement the Kubernetes network model.
I had to close my LinkedIn because I was still getting the countdown. Anyway, welcome everyone. It's show number... I think it's number 50. Looking at the description — all right, let's say it's 50. How's everyone doing? Hey Shark, how are you doing? It's very cold in Seattle. It was very hot the past couple of days, but today is pretty cold. So where's everyone from, where are you all joining from? Hey Leon — ah, the Netherlands, it's getting late there, right.

We'll wait for a few more people to show up and then we'll start. We'll talk about Kubernetes networking and Cilium today, so it's going to be an interesting one. Yeah, it's 7 P.M. in India. Oh, France — all around the world, look at that. Perfect, all right. Well, let's get started. Yeah, 50, Shane, how about that. All right, let me switch.

Let's do this view — how about this one? Abu Dhabi, Nigeria — yeah, it's probably very hot there. Oh, awesome. Well, thanks everyone for joining; it's good to see all the comments. Feel free to ask questions — I'll try to glance over at StreamYard. I don't have a producer yet, so it's just me. But anyway, let's get started. As I said, today will be all about Kubernetes networking and Cilium.

Now, I have to tell you, when I started preparing all this I was thinking: well, let's just use Cilium, right? Let me investigate Cilium, explain what it is, go through the features, do the demos, etc. But then I created this slide, and if you've been to any of my talks, you've probably noticed that I like to start with a definition of the technology I'm going to talk about.

So I started doing the same thing here, but then I realized there are already a lot of talks, videos, live streams, blog posts, and so on that explain what the Cilium features are and go through it that way. So instead of doing that, I thought: let's take a step back and actually try to explain how networking in Kubernetes works, because that's what Cilium is helping with.

If we understand networking in Kubernetes, it's going to be much easier to see how and where Cilium fits in. So we'll start with the definition, then there's going to be a lot of Kubernetes networking explanation and a demo, and we'll get into Cilium towards the end. I'm trying to keep this episode to about an hour, but I do promise there will definitely be another episode that builds on top of this one.

So if you're here, great — you'll get all the basics, and then in the next one it will be much easier to dig deeper into Cilium, understand the different features, and actually understand how they work. All right, so what is Cilium? The definition on the website says that Cilium is a cloud-native solution for providing, securing, and observing network connectivity between workloads, and it's fueled by the revolutionary kernel technology eBPF.

The thing we'll start with is "network connectivity between workloads" and what that actually means. As I alluded to, to understand that we have to understand how networking in Kubernetes works. Kubernetes relies on the Container Network Interface (CNI) and CNI plugins to assign IP addresses to pods and to manage the routes and connectivity between all those pods.

Kubernetes doesn't ship with a default CNI plugin; however, managed Kubernetes offerings come with a pre-configured CNI. Later in the demo I'll use kind, and kind ships with its own pre-configured CNI. If you go to any of the cloud vendors, it's the same thing: they have this pre-configured for you.

Now, when talking about networking and communication between workloads in Kubernetes, at a high level there are four problems we're trying to solve. We're trying to figure out how to do container-to-container communication, how to do pod-to-pod and pod-to-service communication, and then how to do external communication: bringing traffic into the cluster, and how traffic exits the cluster to access external services.

The first one is container to container, and this is solved by pods through the use of network namespaces. The network namespaces feature in Linux allows us to have separate network interfaces and a separate routing table from the rest of the system. So the host has its own network namespace, and each pod can have its own network namespace. Every pod has its own network namespace, which means it gets its own IP address and a shared port space for the containers inside it.

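To make the namespace idea concrete, here's a minimal sketch you can run on any Linux box; the namespace name is made up for illustration:

```bash
# Create an isolated network namespace and look inside it.
sudo ip netns add demo-pod
sudo ip netns exec demo-pod ip addr    # only a loopback interface, separate from the host's
sudo ip netns exec demo-pod ip route   # its own (empty) routing table
```
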
The next one is pod to pod. The nodes in the cluster have their own IP address and a CIDR range from which they can assign IP addresses to pods. I'm pronouncing it "cider", but I've read that people also pronounce it "cedar", so I don't know — I'll go with "cider", and if anyone corrects me I'll correct my pronunciation. So there's a CIDR range, each node gets its own, and that's the range the node pulls from to assign IP addresses to its pods.

What that does is guarantee a unique IP address for every pod in the cluster. Now, pod-to-pod communication happens through these IP addresses. Virtual ethernet (veth) devices are used to connect the two sides across the different network namespaces. In this case we have one virtual interface connected to the network namespace in the pod — that's eth0 — and the other side of the veth pair connected to the root network namespace on the node.

So for each pod that runs on the node, there's a virtual ethernet pair that connects the pod's namespace to the node's network namespace. The traffic from one pod — I have it on the slide on the left, 10.244.1.1 — will go out through eth0 and show up on the veth interface on the node side. Then it goes through a virtual bridge and the routing rules to another virtual interface that's connected to the destination pod.

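This is roughly what a CNI plugin does for each pod. A hand-rolled sketch, with made-up names and an example pod IP, looks like this:

```bash
# One end of the veth pair goes into the "pod" namespace, the other stays in the root namespace.
sudo ip netns add demo-pod
sudo ip link add veth-host type veth peer name eth0-pod
sudo ip link set eth0-pod netns demo-pod
sudo ip netns exec demo-pod ip addr add 10.244.1.1/24 dev eth0-pod   # example pod IP
sudo ip netns exec demo-pod ip link set eth0-pod up
sudo ip link set veth-host up
# On a real node, veth-host would also be attached to a bridge (e.g. cni0) shared by all pods.
```
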
Now, one of the requirements of Kubernetes in terms of networking is that pods must be reachable by their IP addresses across nodes as well, not just on the same node. However, Kubernetes doesn't specify or dictate how this should be achieved.

Assuming each node knows how to route the packets within the node, the problem that needs to be solved is: how do we get traffic from eth0 on node one — that yellow thing at the top there — to eth0 on another node, on node two? This is where the Container Network Interface comes into play. Amongst other things, the CNI sets up this cross-node communication for pods.

The output here is from the kind cluster. kind uses a CNI called kindnet. The CNI runs as a DaemonSet, meaning there's one pod per node, and that pod is responsible for setting up the bridge between the nodes, which allows one pod to call another pod using an IP address.

If we look at the routing table on one of the nodes, we'll see that it specifies where the packets should be sent. For example, on the slide here I have an entry from the routing table on node one — this output is from the node 172.18.0.2 — that specifies, amongst other things, where packets should be sent for pods that are running on node two.

That highlighted line is saying that any packets destined to IPs in that CIDR range — the pod CIDR range on node two — should be sent to that IP address, which is the IP address of the other node, through eth0. And here's a more visual representation of that rule. If we're looking at a single node, it's the same as before, but once the traffic comes to the bridge and goes through the routing rules, those routing rules basically say: hey, if the IP falls under the 10.244.2.0/24 CIDR range, then we know it must go to that IP address — the IP address of node two in this case.

Once it's received on that node, the internal node routing rules know how to route it to that pod. Alessandra is saying that he always pronounces it as "C...".

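Here is an illustrative version of that routing-table entry, with addresses in the spirit of the slide rather than exact output from the cluster:

```bash
# On node 1: anything destined for node 2's pod CIDR is sent to node 2's node IP.
docker exec kind-worker ip route
# default via 172.18.0.1 dev eth0
# 10.244.2.0/24 via 172.18.0.3 dev eth0   # pod CIDR of node 2 -> node 2's node IP (example values)
# 10.244.1.2 dev veth...                  # local pods are reached via their veth interfaces
```
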
Oh no — let's see. Oh, this works. Perfect, all right. So the next one is pod-to-service communication, and service here refers to a Kubernetes Service. We know that pod IP addresses are unique, but we also know that pods by nature are ephemeral, which means we can't really rely on a pod's IP address: as soon as the pod is deleted and recreated, it's going to get a different IP address.

The way this is solved in Kubernetes is with the concept of a Kubernetes Service. When you create a Kubernetes Service, what it gives you is a virtual IP address, and that IP address is backed by one or more pods. What that means is that whenever you send traffic to that service IP, the traffic gets routed to one of the backing pod IPs.

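As a concrete sketch, a minimal ClusterIP Service for the httpbin pods used later in the demo might look like this; the label and ports are assumptions that match the demo, not copied from the video:

```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: httpbin
spec:
  selector:
    app: httpbin       # the backing pods are selected by label
  ports:
    - port: 8000       # the virtual IP listens here
      targetPort: 80   # traffic is forwarded to this port on a backing pod
EOF
kubectl get service httpbin   # shows the assigned virtual (cluster) IP
```
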
Now, for this to work there are a couple of questions we need to answer. The first one is: how do we assign an IP address to the service? This one is fairly straightforward, because we've already solved it for pods. We just follow the same approach: we assign an IP from a CIDR range that's typically reserved for services, and it's different from the CIDR range for pods.

So the first one — how do we assign the IP to the service — works in a similar way to what we did for pods. The second one is: how do we route the traffic that's sent to the service IP and rewrite it from that IP to the backing pods? And the third one is: how do we keep track of the pods that are backing the service? We know that, as I said, pods will come and go.

We might scale out, we might add multiple replicas — who or what updates those IP addresses? That's solved by a component called kube-proxy, and there are two approaches, more or less: iptables or IPVS (IP Virtual Server). I'll talk about this a little bit later and show it in the demo as well.

So what is kube-proxy, at a very high level? It's a controller that connects to the Kubernetes API server and watches for any changes made to services and pods. Whenever a Kubernetes Service or a pod gets updated, kube-proxy goes and updates the iptables rules so that traffic sent to the service IP gets correctly routed to one of the backing pods.

As pods get created and destroyed, kube-proxy updates the iptables rules to reflect those changes. So when we send packets or requests from a pod to the service IP — the destination is the service IP, let's say 10.96.1.1 — those packets flow through the iptables rules, and as part of those rules the destination IP, in this case the service IP, gets changed to one of the IPs of the backing pods.

The caller side always calls the service IP; however, the iptables rules make sure to switch that service IP to the IP of one of the backing pods.

A
It's
going
to
be
changed
back
to
the
service
IP,
so
the
original
pod
that
made
the
call
or
send
the
packets
things
that
it's
receiving
the
response
from
from
the
service
from
the
service
IP,
instead
of
one
of
the
backing
Bots.
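Conceptually, the rule kube-proxy programs is a DNAT. A heavily simplified, hand-written sketch of the idea — real kube-proxy rules live in generated KUBE-SERVICES/KUBE-SVC-*/KUBE-SEP-* chains, and the addresses here are just examples:

```bash
# "Anything sent to the service IP on port 8000 gets its destination rewritten to a pod IP."
iptables -t nat -A PREROUTING -d 10.96.1.1/32 -p tcp --dport 8000 \
  -j DNAT --to-destination 10.244.1.5:80
# Connection tracking reverses the translation on the reply,
# so the caller still sees the service IP as the source of the response.
```
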
Looking at questions — no questions, that's good. Hoping I'm explaining it well and that's why there are no questions, or everyone already knows everything; that's good as well. Arca, Niraj: yes, this will be recorded; this will be on YouTube forever. All right, and the last group of problems — the last thing we're trying to solve — is how we deal with external communication, meaning any packets that are exiting or entering the cluster.

For example, a pod making a call to a service that lives out on the internet is traffic exiting the cluster, and internet-to-service calls are traffic entering it. On the way out, the iptables rules ensure that the source IP of the pod gets changed to the internal IP address of the node. Typically, when we're running this in, let's say, a cloud-hosted cluster, the nodes have private IPs and they run inside a VPC, a virtual private cloud network.

Now, in order to allow those nodes access to the internet, we typically need to attach an internet gateway to that VPC network. What the internet gateway does is perform network address translation: it changes the internal node IP to the public node IP. This allows the response from the internet, when it's coming back, to be routed back to the node using the public node IP, and eventually it gets routed back to the original caller.

On the way back, as the packets are flowing back, similar translations happen in the reverse order: on the way out the source IP gets replaced — pod internal IP, then node external IP — and on the way back it's the reverse. Now, on the ingress side — bringing traffic inside the cluster — what we need is a public IP address, and the way we get one is by using a Kubernetes LoadBalancer service.

So what a LoadBalancer Service allows us to do is obtain an external IP address.

Behind the scenes there is a cloud provider — assuming you're running in a cloud — or MetalLB, for example, if you're running on kind: a specific controller that goes and creates an actual load balancer instance in the cloud. That load balancer has an external IP address, and we can use it to send traffic into the cluster. You can hook it up to your custom domain name and so on.

When the traffic arrives at the load balancer, it gets routed to one of the nodes in the cluster, and then the iptables rules on the chosen node kick in: they do the necessary network address translation and direct the packets to one of the pods that's part of the service. There's a question: so the more pods a cluster has, the less efficient the communication?

Yes, exactly. And there's another downside to using iptables: everything has to be processed sequentially, so the more rules you add, the slower it gets. As I was researching and reading about all this, I think someone did a talk on this.

I'll have to find it and we'll post it on YouTube later. I'm trying to remember what the specific numbers were, but with a lot of services and a lot of pods running in huge clusters, mere iptables rule updates were taking hours — not minutes or seconds, hours. So yes, iptables alone can be a bottleneck; it doesn't scale well.

So where were we? Oh, okay: the request coming back to the LoadBalancer service and then to one of the backing pods. Now, if you've done any Kubernetes, you know it's not very wise to have a LoadBalancer Kubernetes service for every application you want to expose, because you'd end up with ten, twenty, however many load balancers and different external IP addresses, and it's going to cost you a lot of money.

Ideally, what you want is a single external IP address and then the ability to serve multiple Kubernetes services or applications behind it. The way you do that is by using the Ingress resource and a backing Ingress controller. So what is an Ingress controller? It's just a pod that runs inside the cluster and watches for changes to Ingress resources. When a new Ingress resource gets created, the Ingress controller creates the necessary rules to route the traffic to the correct service.

Think of the Ingress controller in this case as a proxy server. When the traffic comes from the load balancer, it's not going to go straight to your service; it goes to that Ingress controller, and the Ingress controller then routes the traffic to the correct service.

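A minimal Ingress sketch for the httpbin Service — the hostname and ingress class are made-up placeholders:

```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: httpbin
spec:
  ingressClassName: nginx            # whichever ingress controller you run
  rules:
    - host: httpbin.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: httpbin        # route this host/path to the httpbin Service
                port:
                  number: 8000
EOF
```
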
All right, so kube-proxy — I mentioned it before. kube-proxy is a network proxy that runs on every node in the cluster. It's connected to the API server, and it's responsible for keeping the services and the backing endpoints — the pod IPs — up to date. Additionally, it also configures and manages the routing rules on the nodes.

When talking about kube-proxy, there are four different modes you can run it in: user space, kernel space, iptables, and IPVS. User space means running kube-proxy as a web server and routing all service IPs to that web server using iptables; the web server is then responsible for proxying to the pods.

A
This
is
deprecated,
I,
think
they're,
mentioning
that
it
should
not
be
used
anymore.
Maybe
it's
not
deprecated,
but
it
should
definitely
not
be
using
it.
Kernel space is a Windows-only mode, an alternative to the user space mode for Kubernetes on Windows. And then we have iptables and IPVS. The iptables mode of running kube-proxy relies solely on iptables, and this is the default mode.

Looking at my notes, I have those numbers I mentioned earlier. There was a test — I'll link to the source — where a 5,000-node cluster with 2,000 services and 10 pods each caused at least 20,000 iptables rules on each worker. That's a lot of rules, and it takes a lot of time to process them all.

Hence, iptables doesn't scale well. A more performant alternative is the IPVS, or IP Virtual Server, mode. IPVS supports multiple load balancing modes — we'll see in the demo shortly that the way iptables does load balancing is by routing to one of the backing pods based on probability, whereas IPVS supports round robin, least connection, destination and source hashing, and so on, which lets us spread the load more evenly and effectively. Note that IPVS mode still uses iptables.

All right, let's go to the demo. Let me read this question from Piyush: how does the load balancer get a public IP address — does it have a fixed range? In that case this is solved by the cloud provider: the cloud provider is the one that provisions the load balancer for you, in whatever way they provision load balancers.

However, if you're using, let's say, kind with MetalLB, then when you set up MetalLB you can say: here's a CIDR range for LoadBalancer-type services, and it will pull an IP address out of that range. All right, let's stop presenting and share the screen. Let's move this one down here — I hope it's visible. Let me try to adjust it.

Oh man, I know I'm spending so much time on this, but okay — I think it looks good now. Let me know if it doesn't. So, another question: can we have two services with thousands of pods, along with some management pod, versus multiple services with a few pods each? Yeah, you can have as many as you want, up to whatever the limitations are — I don't remember what the actual hard limits are.

You can scale up your Ingress controller, if that's what you're asking — you don't have to have a single Ingress controller pod; you can scale that up, and then your applications, you can scale those up as well. Hope that answers the question. All right, so let's go to the demo. What I'll do is set this up — oops, let me go this way — I'll be using a kind cluster.

Let me see if I can make it a little bit bigger. Yeah, so I'll just set up a kind cluster with a control plane and two worker nodes. Note that I'm not specifying anything else, because the default configuration for kind is to use the iptables mode. So let's do that: move this all the way up here, and then run kind create cluster, passing in the kind-iptables YAML file as the config. This might take a couple of minutes.

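The config file isn't shown on screen, but a kind config matching what's described — one control plane, two workers, default kube-proxy mode — plus the demo commands would look roughly like this; the httpbin image is an assumption:

```bash
cat <<'EOF' > kind-iptables.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker
EOF
kind create cluster --config kind-iptables.yaml

# Deploy httpbin, scale it to three replicas, and expose it on port 8000.
kubectl create deployment httpbin --image=kennethreitz/httpbin
kubectl scale deployment httpbin --replicas=3
kubectl expose deployment httpbin --port 8000 --target-port 80
kubectl get pods -o wide
```
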
Question: will this reduce the requirement to update the iptables rules, since it's the same pods? Well, the iptables rules will change every time you scale up or scale down. But even then, when you're sending a request — when packets are being sent — the iptables rules are traversed sequentially, so the more rules you have there, the longer it's going to take. I'll show the iptables rules shortly.

You can see how big those can get, and I'll only have a couple of replicas. All right, we have the cluster — let's check, the context is kind-kind — and we'll just get the nodes: there you go, a control plane and two workers. What I'll do is deploy httpbin and scale it up to, let's say, three replicas, and then we'll wait for it — it's still being created.

Let's do -o wide so we can see where things are running: two pods are running on kind-worker and one pod on kind-worker2. Because I'm running kind, all the Kubernetes nodes are basically Docker containers, so we can go inside that Docker container and execute commands. So what I'll do is take one of the nodes and exec into it.

This lists all the rules on that one node for, well, two workloads in this case, because we only have two of the workloads running on the same node — but this is how it looks. There aren't a lot of workloads, yet there's a lot of text here, so you can imagine what this looks like if you have a thousand services or five thousand pods or workloads, for example.

If we start at the PREROUTING chain, you'll notice that the traffic gets routed to the KUBE-SERVICES chain — the rule says: from anywhere, to anywhere (source and destination are anywhere), go to KUBE-SERVICES. If we go down to KUBE-SERVICES, we have a couple of entries, and what this chain contains is rules for each service that's running in the cluster; you can see that from the comments.

These comments were added by kube-proxy. There's kube-system/kube-dns, the kubernetes service, and then we also have our httpbin service there. The target is the randomly generated name of the chain where the traffic should get routed to — I'll highlight this one. What this is saying is that we don't care what the source of the packets is.

However, we do care that the destination IP is this 10.96… address, and if we look at the service, notice that its cluster IP matches — this is the IP address of the Kubernetes service, the httpbin service. The last portion here, tcp dpt:8000, specifies that this rule applies to TCP traffic destined for port 8000. Basically, this rule matches any traffic that goes to this IP and this port.

If the traffic is going there, it should go and execute this chain here. So now we're down here, and the first rule basically marks all the packets that are not from this source subnet — that is, packets that aren't originating from within the cluster. Any packet not from the cluster that's destined for that service goes to the masquerading chain.

What the masquerading chain does is just mark the packets with a specific mark that's then used later. And what is masquerading? It's a form of source network address translation, used when the source address of outbound packets should be changed to the address of the outgoing network interface. Remember that whole internal VM IP to external VM IP translation: instead of the internal IP, the external IP should be used.

Then these are just the backing pods for the service, and the way this works is it's basically saying: from anywhere to anywhere, go to this specific chain. If the packet matches, it gets marked, and then the rule does the destination network address translation — DNAT — and routes directly to, in this case, this pod down here, on port 80.

Once you understand how everything fits together, it's not that complex to read these, but it can be very daunting if you're reading it for the first time. So this was the pure iptables approach, or mode. Let me delete the cluster. Arrowhead: "we made it" — welcome! What I'll do next is set it up again, just quickly, because I have a couple of slides left.

More to the point, I actually want to start talking about Cilium a little bit. So this time it's IPVS, the IP Virtual Server mode, and notice that I'm setting the kube-proxy mode explicitly, because the default is iptables. Let's set this up, and then I'll do the same thing: deploy httpbin, scale it up, and then we'll quickly look at the rules again.

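The only change to the kind config for this run is the kube-proxy mode; a sketch of the kind-ipvs.yaml used here:

```bash
cat <<'EOF' > kind-ipvs.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  kubeProxyMode: ipvs    # the default is iptables
nodes:
  - role: control-plane
  - role: worker
  - role: worker
EOF
kind create cluster --config kind-ipvs.yaml
```
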
We were lucky, I guess, because all of the workloads are scheduled on kind-worker2 — okay, that's fine. So what I'll do is find worker two, the container starting with 923, docker exec into it, and dump the iptables rules into an iptables-ipvs.txt file.

Just looking at the line counts: 64 lines here when using IPVS, and around 130 lines when we were using iptables — well, maybe a little bit less because we moved things around, but almost double, let's say.

And it looks very similar: PREROUTING goes to KUBE-SERVICES, we have the masquerading there, and then we have this "match-set KUBE-CLUSTER-IP dst,dst" entry. Previously we had a chain that contained all the services in the cluster; this time we have this KUBE-CLUSTER-IP IP set. So what is an IP set? It's an extension to iptables that allows us to match multiple IP addresses at once, so we can match against the whole group.

This IP set contains all the Kubernetes service IP addresses, and we can look at it. Let's docker exec into the node — the container starting with 1893 — with bash. I think I need to run apt update, and then we have to install the ipset tools to take a look at the IP addresses there: apt install ipset.

…236, …236 — one of these corresponds to our httpbin service. In the iptables mode, these IPs are used directly in the iptables rules; this time, the services are stored in the IP set, and the iptables rules only reference that set. So if there are any changes to the services or backing pods, the iptables rules are not going to change, because they're already pointing at the set.

Let me install the ipvsadm tool so we can look at the IPs there. If we run the list command, you'll notice this is our service, and "rr" here stands for round robin. And notice these IP addresses — if you look at the pods, they match: these are the backing pods.

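The inspection steps on the node, roughly; the worker container name is whatever `docker ps` shows, and the package names are the Debian/Ubuntu ones used in kind node images:

```bash
docker exec -it kind-worker2 bash
apt-get update && apt-get install -y ipset ipvsadm
ipset list KUBE-CLUSTER-IP   # the IP set holding every service cluster IP, referenced from iptables
ipvsadm -L -n                # virtual servers (service IPs), scheduler (rr), and real servers (pod IPs)
```
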
The advantage here, as I mentioned, is that we can do proper load balancing, whereas the iptables rules just use probability. And whenever we scale the deployments up or down — if we do kubectl scale deploy to, let's say, five replicas — the only thing that changes is the IP addresses here; there are not going to be any changes in the iptables rules. Oh yeah — oh, we…

Window, this one, share — all right, so this was the last slide we stopped at. All right, let's dive into CNI a little bit, getting closer to that Cilium topic I mentioned at the beginning. So what is a CNI — a Container Network Interface? Think of it as just software: a binary plus some configuration that sits between the container runtime and the actual network implementation. The CNI project itself is a collection of specs and libraries for developing plugins to configure network interfaces in Linux containers.

Practically, the CNI plugins are called by the container runtime whenever a container is created or removed. The CNI plugin is then responsible for connecting the container's network namespace inside the pod to the host network namespace, assigning the IP address to the container — doing the IP address management — and setting up routes.

The container runtime takes in a configuration that specifies general network information like subnets, and then it sends the CNI commands, based on the spec, to the CNI plugin, which goes and configures the network according to that configuration. Some examples of well-known CNI plugins: Cilium — an L7- and HTTP-aware CNI that can enforce network policies from L3 to L7, using eBPF for that as well as Envoy for L7 — Flannel, Calico, and the AWS and Azure CNIs.

Basically, all the cloud-provider-specific CNIs that know how to assign IP addresses and set up networking and routes in those clouds. So what is the CNI spec? It's just a contract between the container runtime and the plugin. At a high level it specifies four commands that every plugin has to implement. The ADD command gets called when the container is created and needs to be added to the network.

This is where IP addresses are assigned, for example, and the routes get configured. The DEL command is for when the container is deleted: the CNI plugin is then responsible for releasing the resources, deleting any rules, and so on. CHECK just checks whether the network is set up properly, and the VERSION command returns the CNI versions supported by the plugin. The plugin itself is just a binary, and that binary runs on the host.

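For reference, this is the shape of such a configuration for the standard reference plugins — a hypothetical example of the format, not what kind or Cilium actually install:

```bash
ls /etc/cni/net.d/           # where the node's CNI config lives
cat <<'EOF'
{
  "cniVersion": "0.4.0",
  "name": "example-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.1.0/24",
        "routes": [ { "dst": "0.0.0.0/0" } ]
      }
    }
  ]
}
EOF
```
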
It uses a configuration, as I said, that configures the settings for IP address management and any other plugins that need to be invoked — CNI plugins can also be chained. One thing I want to bring up here: if we go into the service mesh world a little bit, there's also the Istio CNI plugin.

It is a CNI, but it's one that gets chained. All right, so we're almost at Cilium. Before we talk about that, I have to mention eBPF briefly. If you remember the initial definition I showed on the first slide, I mentioned that Cilium uses eBPF to provide, secure, and observe network connectivity between workloads.

Basically, what eBPF allows us to do is run sandboxed programs in the Linux kernel without going back and forth between kernel space and user space, which is what happens with iptables. The image on the slide shows the difference between user space and the kernel. The kernel is basically the layer between the hardware and your applications, and it handles things such as networking, memory and process management, files, and so on.

Your applications typically run in user space, in unprivileged mode, meaning they can't directly access or interact with the hardware. What they can do is access kernel functionality through system calls — syscalls.

One of the features of the kernel is that it allows us to write custom programs that can get dynamically loaded into the kernel and executed. Think of Wasm plugins, Office add-ins, or any add-in or plug-in model, more or less, at a high level: it allows you to install something, load it, and have it execute without changing the actual kernel itself. The feature that allows you to do that is called BPF, or eBPF.

Also, as part of this, the kernel supports so-called BPF hooks in the networking stack — we'll talk specifically about networking. Think of different event points, or hooks, where you can attach your programs; those programs will then be executed by the kernel when that event happens. So if we talk about network packets: instead of processing the packets through the chains and rules of iptables, we can write a program and attach it to a specific hook in the kernel.

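As a rough sketch of what "attach a program to a hook" means in practice — the file and section names here are placeholders, not anything from the talk:

```bash
# Compile an eBPF program and attach it to the tc ingress hook of eth0,
# so the kernel runs it for every incoming packet on that interface.
clang -O2 -g -target bpf -c packet_filter.c -o packet_filter.o
tc qdisc add dev eth0 clsact
tc filter add dev eth0 ingress bpf direct-action obj packet_filter.o sec tc
```
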
Finally, Cilium — we're almost done and we're finally at the topic. What are some of the features of Cilium? We talked about load balancing: when you deploy Cilium, it ensures connectivity is set up between workloads, and it can also set up load balancing between services and pods.

Another feature is the ability to enforce network policies. Kubernetes ships with a resource called NetworkPolicy; however, that resource doesn't come with an implementation. Cilium is one of the CNIs that provides the implementation of the NetworkPolicy resource: you apply the policies, and Cilium goes and enforces them. And, as you can imagine, since the packets flow through the network stack in the kernel with different programs attached to different hooks, Cilium also provides visibility into the network traffic flows. There's a UI component called Hubble that can be used to view these flows, and there's also the ability to do encryption using IPsec or WireGuard.

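As a taste of what those policies look like (we'll dig into them next time), here is a minimal CiliumNetworkPolicy sketch — the labels and names are assumptions for illustration:

```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-frontend-to-httpbin
spec:
  endpointSelector:
    matchLabels:
      app: httpbin          # the endpoints this policy protects
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: frontend   # only these endpoints may connect...
      toPorts:
        - ports:
            - port: "80"    # ...and only on TCP port 80
              protocol: TCP
EOF
```
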
So what are some of the building blocks when you install Cilium? The Cilium agents run as a DaemonSet, so there's one agent per node — think of it like kube-proxy, one per node. The Cilium agent talks to the API server and interacts with the Linux kernel: it loads the eBPF programs and updates the eBPF maps (maps are the way to store and exchange information). It also talks to the Cilium CNI plugin and gets notified whenever a new workload gets created or deleted.

Then, if we think about L7, the Cilium agent is also the one that starts or launches the Envoy proxy when needed: whenever there's an L7 policy that we create, the Cilium agent makes sure to launch an Envoy proxy to process the L7 requests. The CNI plugin gets installed onto the host, the node's CNI configuration is set to use the plugin, and the plugin talks to the Cilium agent over a socket. Hubble Relay provides cluster-wide observability by connecting to the gRPC service that each Cilium agent exposes.

And then there's the operator, which does cluster-wide, one-time operations like installing things. Now, the basic concepts of Cilium — we'll start with endpoints. An endpoint is a collection of containers or applications that share the same IP address; you've probably heard that before — that's basically a pod.

Each endpoint also gets an internal ID that's unique within the cluster node, and a set of labels that Cilium derives from the pod or from the container. The endpoint gets created whenever a new pod is started, and that's also what kicks off the creation of a Cilium identity. An identity is assigned to all endpoints, and this identity is used for enforcing the basic connectivity between endpoints.

The identity of an endpoint is derived from the labels assigned to the pod, so if the labels change on the pod, the identity of the endpoint gets updated as well. We know that pods can have a diverse set of labels, and not all of them are relevant or necessarily useful for identity — things like timestamps and hashes. So Cilium automatically excludes some of the labels on your pods that are already irrelevant, like the io.kubernetes and beta.kubernetes.io prefixes, the pod-template-hash, and so on.

But all of this is of course customizable: you can provide your own patterns for either excluding or including labels for identity purposes. On top of that, there are also some well-known identities, for things like kube-dns / CoreDNS, the Cilium operator, and so on — some of those have fixed numbers, like 102 for kube-dns and 105 for the Cilium operator. All right, so we came to the last quick demo, and I'll do this very quickly since we're already at time. Let me stop.

I would love to take more questions while I'm setting this up, if anyone has any. All right, so we're here. Let me close this, exit from here, do this, and we'll delete the clusters and create a new cluster for Cilium. So what I'll do is — I'm still using — whoops.

Let me close this. So I'm still going to be using kind; however, what I'll do is set the kube-proxy mode to none and disable the default CNI. As I mentioned at the beginning, kind uses a CNI called kindnet, but through configuration you can disable it. So we can say: let's disable this, I don't want any CNIs, because we will install Cilium to use as our CNI. So let's do that: kind create cluster with the config kind-no-cni.yaml.

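A sketch of the kind-no-cni.yaml described here — no kindnet, no kube-proxy, so Cilium can take over both jobs:

```bash
cat <<'EOF' > kind-no-cni.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  disableDefaultCNI: true   # don't install kindnet
  kubeProxyMode: none       # don't run kube-proxy at all
nodes:
  - role: control-plane
  - role: worker
  - role: worker
EOF
kind create cluster --config kind-no-cni.yaml
```
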
Let's do this. Now, in this example Cilium is going to completely replace the functionality of kube-proxy. However, I think there are two other modes that Cilium can run in where it's not completely replacing kube-proxy — a mixed mode, I guess, I don't know what to call it — but in this case we're completely replacing it.

What I'll do next is install Cilium. I'll use Helm — I've already done the helm repo add and helm repo update and all that. So: install cilium from the cilium chart, this version, into kube-system; we're setting kubeProxyReplacement to strict, meaning completely replace kube-proxy; setting the Kubernetes service host and port; and enabling Hubble and the relay — that's something we'll use next time, but I want to set it up here already.

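The Helm install described above, roughly — the chart version and the API server address are placeholders, since the exact values aren't readable here:

```bash
helm repo add cilium https://helm.cilium.io/
helm repo update
helm install cilium cilium/cilium --version <version> --namespace kube-system \
  --set kubeProxyReplacement=strict \
  --set k8sServiceHost=<control-plane-address> \
  --set k8sServicePort=6443 \
  --set hubble.relay.enabled=true \
  --set hubble.ui.enabled=true
cilium status --wait   # the cilium CLI waits until the agent and operator are ready
```
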
What will happen here is that this installs the Cilium agent, the Cilium operator, and other components. I also have a CLI called cilium, which you can use to look at the status. Notice there are still errors — some components are still unavailable, the pods are still coming up; they're pending. You can also look at the pods here. This might take a little while, but it shouldn't be too crazy.

One thing you might have noticed here: there's no kube-proxy, because we're completely replacing that functionality with Cilium. cilium status now shows that all is good, all is green — that's what we like to see. And then we'll do the same thing: I'll deploy httpbin.

You'll notice that these pods get IP addresses, but this time the IP addresses are doled out by Cilium, not kube-proxy. Wait for this one to come up. Another thing I want to show quickly: there's the CLI that I have installed locally — that one is mostly for enabling things, installing and uninstalling — but there's also a cilium CLI running inside each of the Cilium agent pods. So we can do kubectl exec in the kube-system namespace — oh, there you go, I've already run it.

Let me just update this to 8v6b5, for example, and run the command cilium endpoint list, which lists all the endpoints — and I picked an agent that doesn't have any of our workloads on it, so let's do this one instead. I'm just listing endpoints, running the cilium CLI that's installed in the Cilium agent pod, and if I run the endpoint list you'll notice there's an identity number — a unique ID — assigned to my pod right here.

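The command being run, roughly; the agent pod name suffix is whatever `kubectl get pods` shows on your cluster:

```bash
kubectl -n kube-system get pods -l k8s-app=cilium        # one cilium agent pod per node
kubectl -n kube-system exec -it cilium-8v6b5 -- cilium endpoint list
```
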
Look at all the labels that were derived, the IPv4 address, and then there's another pod here — notice it has the same identity. All right, we're at time, so I guess you can say I perfectly timed this to be an hour long. I hope this was helpful, and next time — I think in two weeks or so — we'll continue the conversation and actually go deeper into Cilium.

We'll look at the different features, see how to apply policies, and all that good stuff — but this was more of a prelude to what's coming. I hope this was useful. Please leave a comment if it was, and leave a comment if it wasn't either, so we can do better next time. Thank you for joining, and I hope I'll see you next time as well.
