From YouTube: Cloud Native Live: Gear up for performance - Leveraging eBPF on Openshift with Project Calico
A: Hello everyone, welcome to Cloud Native Live, where we dive into the code behind cloud native. I'm Itay Shakury, I'm Director of Open Source at Aqua Security, I'm also a Cloud Native Ambassador and your host for today's show. So this is Cloud Native Live: every Wednesday we bring a new set of presenters to showcase how to work with cloud native technologies.
A: They will build things and break things, and they will answer your questions. This week we have Chris Tompkins from Tigera with us to talk about how to leverage eBPF on OpenShift with Project Calico. He will introduce himself and the technology in a second; before that, just a quick reminder that this is an official live stream.

A: So Chris, how would you like to introduce yourself?
B: Yeah, well, the first thing is I'm just really happy to hear you say that I'll break things, because that covers me if I break things now. So, great. My name is Chris Tompkins, I'm a Developer Advocate at Tigera; Tigera develops Project Calico as part of the Project Calico community. I worked as a network engineer for many, many years and gradually got drawn towards automation, large-scale automation technologies, and when we started to do Kubernetes I was deploying Calico, and I really liked the product and believed in it.
A: Cool, so yeah, do tell us about Project Calico, I think, for a start.
B: Yeah, yeah, I think that's a great place to start. So, Calico is an open source networking solution providing networking and network security for containers, virtual machines and native host-based workloads.
B: So the idea is, it provides a consistent experience and set of capabilities for all of those kinds of workloads, in public cloud or on-prem, from a, you know, tiny cluster all the way up to a multi-thousand-node cluster. We can support environments like Kubernetes, OpenShift, Mirantis Kubernetes Engine and so on, and the idea is to present the same experience to developers and engineers.
B: We offer a standard, you know, best-practice security model and a high-performance data plane (data planes, actually) and incredible scalability, and it's a real-world, production-hardened product and open source setup that is, you know, already heavily deployed.
B: Yeah, so there is an enterprise product, but everything we do today will be in the open source product. The enterprise product adds on some features around observability; well, lots of features, but primarily around observability.
A: All right, that's cool. So the topic of today's show involves eBPF and OpenShift and Calico; do you want to set up the context for this?
B: Yeah, yeah, yeah, definitely. So I guess, think of it as: Calico allows you to implement your network policy and your networking across those environments that we discussed, and it uses some technology in the data plane of the nodes to actually do that work.
B: Now, I did a separate talk, which I could talk about for half an hour but I won't, about the, you know, control plane and data plane model for networking and so on. Suffice it to say that the data plane is how we implement the actual networking policy that enables the networking to happen. Now, Calico actually offers a choice of data planes.
B: The chat window has just disappeared on my browser, but hopefully it will come back. So yeah, Calico has a choice of data planes, and the idea behind that is that different people have different use cases and expectations for the data plane of their network environment.
B: So, in order to achieve high performance, you want to have the minimal amount of data plane code that allows you to implement the feature set that you actually need. Our main original data plane was implemented mainly with iptables, which is great; it's still in production for many users and the performance is good.
B: It's rock solid and battle hardened, but we wanted to offer another data plane which has some advantages over that. So, as well as the standard Linux iptables data plane and the Windows data plane that we offer, we have this third data plane, which is the Linux eBPF data plane, which is what we'll be focusing on today. And then, just to say as well:
B: There is actually a fourth data plane in tech preview, which is the VPP (Vector Packet Processing) data plane, but it's in tech preview. It's really interesting and I encourage people to look at it, but we won't talk about that any more today.
A: You go ahead. No, no, I just wanted to comment: certainly eBPF is really exciting technology, I think, and I'm very happy that we get a chance to talk about it today, because it's very relevant specifically now. I feel and see a lot of movement around it in the industry, especially from the networking perspective, of course. So yeah, just...
B: If you wanted to have some code that ran in the Linux kernel, then you would need to either actually submit code into the kernel repos and get it approved, and obviously that's a really long and very difficult process, for good reason, or the other way you could do that would be to write a kernel module. But what eBPF actually is, is a way to run...
B: You can think of it as a way to run virtual machines inside the kernel, and those virtual machines are extremely high performance. They're heavily secured, because they only have access to a very limited number of kernel functions, depending on where they are attached in the kernel. So, as well as being high performance and secure, they're small bits of code that live inside the kernel and operate inside the kernel under those restrictions.
B: So, in the case of how Calico uses eBPF, what we're essentially doing is replacing the functionality that we previously implemented in iptables with eBPF inside the kernel, and the actual performance increases, obviously, but there are also some side effects that come later on. I don't have many slides, but I do have one diagram I'd like to show, and when I show that diagram I can explain why that functionality also improves with eBPF.
A: Yeah, I once heard a description of eBPF that I really liked. Sorry, I don't remember who said it, so I apologize to the person who said it to me, but they described it as: eBPF is to the kernel what JavaScript is to the browser. Basically a mechanism, a vehicle for us to extend the kernel and to put in our own code, which otherwise would be very dangerous and difficult. So yeah.
B: Yeah, exactly. And, as you know, I'm actually not a developer; in the first part of my career I was not a developer, so I have to be careful not to overstep my knowledge. But as well as only being able to make certain function calls and those kinds of things...
B: eBPF programs are automatically limited in execution time and those kinds of things, so they can't get into tight loops and they can't chew up the CPU, that kind of thing. So it's great, because there's a lot of protection there to prevent your program from accidentally misbehaving, and yet you get excellent performance.
B: There's also this concept of a BPF map, which is basically just a key-value store that the BPF programs are able to access, which allows them to exchange data with each other, to, you know, record flows, that kind of thing, or whatever you might want to do. Now, BPF obviously has a lot of use cases.
A: All right, sounds good. And just the final piece there: we talked about Calico, we talked about eBPF, and then we are going to run all of this on top of OpenShift, right?
A: So, like, I'm assuming most of the people know about OpenShift, but just to set the record straight, could you just say a few words about OpenShift? Yeah.
B: Sure, and I'll be honest, OpenShift is the weakest part of my knowledge; you know, it's a big beast. But the way I like to think of it, well, you had a good analogy about eBPF, but the way I like to think of it is this: with a Kubernetes cluster...
B: You can extend it in so many interesting ways with different tools, you know, for observability and for storage and for CNI and all these things, and you could spend a lot of both time and money trying to figure out what a good Kubernetes deployment model is for you. I like to think of OpenShift as a container application platform based on Kubernetes.
B: It takes a good set of additional tools, so as well as being a container orchestrator, it's an enterprise Kubernetes distribution and it has a validated set of integrations.
B: It's still Kubernetes, it's still certified Kubernetes and it's still open source, but it allows you to build kind of a consistent, complete, enterprise-grade Kubernetes ecosystem, and then to run that wherever you choose to run it; you're not limited to running in one place. The demo I do today will be in AWS, but obviously you can run OpenShift in any environment.
A: I think that's a very good intro, so let's get to it. Let's get to it.
B: Yeah, yeah, great. So there's a couple of things we can come back to if we end up with spare time, but I think it's a good idea to flip over to just, literally, the couple of slides I have, and then aim to get to the demo.
B: Yeah, I think you can see my screen. Here we go, yeah. So this first slide, this is Red Hat's own slide, incidentally, which I've used.
B: The URL is at the bottom there, but really this just helps to understand that OpenShift is kind of a big suite of tools around Kubernetes, for, you know, observability and developer services and so on. I won't drill into that, because it would be challenging for me to talk about everything on that slide meaningfully.
B: But this slide is a really interesting one, and this is the one I said I would briefly mention, about where BPF hooks in. So this is the packet flow diagram, and you can see at the bottom of the screen that this is courtesy of Jan Engelhardt; this diagram is on Wikipedia.
B: If you want to see it yourself: this is the packet flow through any Linux node, and you can see that the green part there is the layer 3 network layer, and that's where iptables and all of those kinds of things happen. So you have the mangle table and the nat table, and you can see that the packets flow through all of that. The reason I mention all this is because that's also where kube-proxy is implemented.
B: So when you run services on Kubernetes, kube-proxy implements those services, and it exists in that green section. However, down here you can see, hopefully it's just about readable, the ingress and egress qdiscs, and these two points are where we actually attach our BPF code, and that means that the BPF code can entirely sidestep the main packet flow.
B: So that's why I wanted to highlight this diagram. As a result of that, the advantages of actually running eBPF in Calico are performance, but there's a really interesting secondary benefit, which I'll demonstrate in the demo, which is source IP preservation.
B: So if we have a look at this diagram, and we'll see this in a real live demo in a moment: you can see that with a traditional Kubernetes cluster with kube-proxy running, if an external client comes in to a service pod, the first thing they do is hit a service on the Kubernetes node, and then the kube-proxy that's serving that service...
B: ...does a destination NAT and a source NAT to replace the source IP with its own IP and the destination IP with the pod IP. Then the traffic gets forwarded on to the service pod, and then it comes back. But it's important to note that when you're running with kube-proxy, without the eBPF data plane, the traffic has to go back through the kube-proxy, and is therefore destination- and source-NATted, and the side effect of that is that the service pod down here...
B: ...never actually sees the real source IP of the traffic; it sees the source IP of the load balancer, excuse me, the kube-proxy. So one of the advantages is that once you turn on eBPF, you get a flow that looks like this.
B: When the external client comes in and talks to the service, the service is implemented as a BPF program, not in kube-proxy, and that means the BPF program can forward on the traffic so that the service pod on the destination node sees the real source IP of the external client. That might be good if you have an auditing use case, or if you need to restrict a particular IP block by region or by country or something like that. And then the final...
B: ...the final step here, shown as number five, is that if the network allows it, the packet can actually be returned directly, without going back via the ingress node. So those benefits are: yeah, you get better performance, lower latency, and you get source IP preservation. So yeah, it's pretty cool.
A: Cool. Is that thanks to, I mean, is it because you are doing the network routing work at a lower level in the network stack?
B: So anyone who wants a lot more depth on this, if you watch those sessions, you know, frankly, his understanding of the depth is much deeper than mine. But yes, that's exactly it: we're implementing at a lower level, and therefore, by replacing kube-proxy, we can introduce other examples, sorry, other benefits, excuse me. Yeah, so I think we should go ahead and show it, right? Yeah. Of course, let's do it. Cool, okay, so...
B: So, just to explain what I'm going to do: I wanted to create a demo that anyone who's watching can actually do themselves afterwards. In order to set up OpenShift we need a DNS name, a top-level DNS name, which obviously would cost money, but I was made aware by a colleague of this website, freenom.com.
B: I can't actually recommend it, you know, for production; I don't know what their production services are like. But if you want a quick top-level domain name, you can get one here totally for free. So if I come in here, I've registered this memorable domain. I mean, it's not going to replace Amazon anytime soon, but I have this domain and it's registered for a few months.
B: The reason I show this is because we need a top-level domain in order to be able to get OpenShift set up. So the very first step, which I did before today because it takes 24 hours to propagate: if you come in here, you need to specify where the domain's DNS should be directed to.
B: So we change it to these custom name servers. Let me show you where I got those from.
B: In AWS Route 53 you create a hosted zone, and then you specify the name that you want, which is the same name that we just saw, and AWS will come back and give you the top-level name servers that they want you to use. So all I did was take these four names here and put them across into that GUI.
B: So that's step one.
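The hosted-zone step above can also be sketched with the AWS CLI instead of the console (a hedged sketch; the domain name and zone ID are placeholders, and the demo itself used the Route 53 web GUI):

```shell
# Create a hosted zone for the free domain, then read back the name servers
# to paste into the registrar's "custom name servers" form.
aws route53 create-hosted-zone \
  --name "example-demo.tk" \
  --caller-reference "demo-$(date +%s)"

aws route53 get-hosted-zone \
  --id "<hosted-zone-id>" \
  --query "DelegationSet.NameServers"
```

The four name servers returned in `DelegationSet.NameServers` are what go back into the registrar.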
B: Once we've done that, the next thing we need to do is actually get the OpenShift installer tool. So I'll come back here; there are two OpenShift installer tools.
B: Yeah, good, okay. So the first thing to do is to download the OpenShift installer tools, and that is just a couple of wget commands here. You can see it's hitting mirror.openshift.com, and we're getting the stable OpenShift installer for Linux, and then the second one is the OpenShift client. This won't take long.
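The two downloads, and the unpacking and PATH setup that follow in the demo, look roughly like this (a sketch; the exact file names under the stable directory can change between releases):

```shell
# Fetch the installer and the client from the public OpenShift mirror.
wget https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable/openshift-install-linux.tar.gz
wget https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable/openshift-client-linux.tar.gz

# Unpack them; each tarball contains a README and the binaries.
tar xzf openshift-install-linux.tar.gz   # openshift-install
tar xzf openshift-client-linux.tar.gz    # oc and kubectl

# Put the current folder on the PATH so the tools can be run directly.
export PATH="$PWD:$PATH"
oc version --client
```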
B: Yeah, thank you, Tim. Yeah, okay, cool, so those are done. We just extract those two tar files, and you can see it just gives us a readme file and a couple of binaries.
B: So I'm going to put our current working folder on the path, so we should be able to type oc. Yeah, we can; I was just testing that my path was working. You can see the OpenShift client is there, so that's fine. I'm going to switch now, and you and I talked about this briefly before we came onto the call: I'm going to shift to a terminal recording just for a short time, and I'll explain why once it's actually playing.
A: Before you play it, if you could just resize your window so that it doesn't reach all the way down, because your name kind of... oh.
B: That's perfect. How's that? Great, okay, yeah. So, oh goodness...
B: I can deal with Kubernetes clusters, but I can't operate my browser, and it wouldn't let me; try again. If I go too near to the edge it tries to maximize. That's...
B: Yeah, so let's start that playing now, and this is now the recording. You can see that this recording was actually done back in June, but this is only the first part, so we will be doing a live...

A: ...a properly live demo.
B: It's just this first part, and the reason is because you can see that in the recording I'm downloading those files that we just downloaded. The problem is that the OpenShift install tool actually gives away quite a lot of information that we don't want to be sharing. You'll see in a moment that when I run it, it shows all of the DNS names registered in Route 53 for the AWS account that you're currently logged into; it also shows the secret keys, which again I don't want to share. So I've actually edited this recording so that the keys you see in a minute are not real keys, they're edited, and similarly the DNS.
B: So here we go; now we're getting to the point where we're moving on past that. You run this tool, openshift-install create install-config, and we're not actually creating a cluster yet; we're creating an install config, you know, the declarative definition of how we want our cluster to look. We have to specify what SSH public key we want to use, and we have to say where we want to run it, so we choose AWS.
B: Yeah, you can see it was a live demo, because I made a mistake. You choose which region you want to run it in, so I'll choose us-west-2, and this is the point where, if I hadn't edited this video, it would be listing all the domains here. That's why I had to edit this. But you can see that we go down the list and choose the domain that we're interested in, which is the test domain that you saw before.
B: For anyone following along, you can search for Red Hat OpenShift developer sandbox, and if you come to this website you can see that you can start a trial, a free 30-day trial of an OpenShift sandbox. When you sign up for that, you get this dashboard, and in this dashboard you get your pull secret; you basically just need to copy this pull secret, like so, and then paste it in.
A: So someone asked if you could just recap the last 10 commands that you ran. I guess maybe you can do even better and share the script later on.
A: I mean, this isn't a script that installs OpenShift; is it doing anything special?
B: No, it's not, not yet; it will do. We will go into more interesting stuff, but I think I'd rather recap it here than share the recording, just in case there's anything hiding in that recording file that I don't want to share. So that's fine. All we've done so far, to recap, is: we set up a free domain somewhere, and we put that into AWS Route 53.
B: Then we downloaded the two packages, the OpenShift client and the OpenShift installer, which you can Google for; we expanded those using tar, then we put our current working directory on the path, and at that point we actually ran the OpenShift install tool. By using openshift-install create install-config, it's going to create an install config file, and then these are all just...
B: This is all just an easy wizard, and you can see it's finished: it's created an install-config file. There is only one edit we need to make to that file at the moment.
B: It's this edit that I've made here. If anyone's not familiar with sed, what we're actually doing here is: the install config was created, and we're saying search for and replace any occurrences of OpenShiftSDN with Calico. So we're telling OpenShift that when it builds the cluster, it should build it with Calico networking, not with OpenShift networking. Just to be clear, though, this is not going to enable Calico eBPF yet; this is just Calico's traditional iptables data plane.
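The edit described above amounts to a one-line sed over the generated file. A minimal sketch, using a stand-in install-config.yaml (the real file is produced by openshift-install create install-config; OpenShiftSDN and Calico are the networkType values being swapped):

```shell
# Stand-in for the generated install-config.yaml (only the relevant field).
printf 'networking:\n  networkType: OpenShiftSDN\n' > install-config.yaml

# Swap the cluster network provider from OpenShiftSDN to Calico, in place.
sed -i 's/OpenShiftSDN/Calico/' install-config.yaml

cat install-config.yaml
```

After this edit, the next openshift-install step consumes the file and generates manifests for a Calico-networked cluster.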
B: Yeah, for some reason that recording seems to have stalled, but that's okay; I can pick it up from here. You can see I've pressed Ctrl-C, so we're now back to an actual live demo, and you can see that the next command you use is openshift-install...
B: ...create manifests. I won't press enter now, because it won't work: I did this already this morning, because I knew it would take too long for the live demo. What that actually does is consume the install config and spit out the actual manifests which are going to be deployed, into a folder called cluster.
B: If we go into this folder called cluster, in here there will be a lot of manifests. Then there's an extra step, which is documented on the docs.projectcalico.org website, which is to download some extra Project Calico manifests and drop them into this folder. I'll just do one example, but there are about 30 or so; it's a curl command like this, and we drop these manifests from projectcalico.org into there.
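Sketched end to end, the manifest generation plus the extra Calico step looks like this (the manifest URL is left as a placeholder; the full list of roughly 30 files is on the Project Calico OpenShift install docs):

```shell
# Turn the edited install-config into the manifests the installer will deploy.
openshift-install create manifests --dir cluster

# Drop the Calico manifests listed on the docs page into the same folder.
cd cluster/manifests
curl -fLO <manifest-url-from-the-calico-docs>   # repeat for each listed file
```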
B: And then it will take about 40 minutes, which you'll be pleased to hear we're not going to do now. What happens is it will then go and build the cluster, and once you've done that you will have a working Kubernetes cluster, which is what I have now. You can see that this cluster was built this morning in preparation for this. So this cluster, if we have a look now, you can see that...
B: You can see that we have a calico-node daemonset, so this cluster is now running Calico networking; it's ready to go. But calico-node is still running in iptables mode, not in eBPF mode, at this moment. I'm conscious of time, time always runs away, so I need to run on a bit. I'm going to demonstrate the lack of source IP preservation. So...
B: I have a manifest called echo-server.
B: And if I show you what's in there: this manifest makes a deployment called echo-server with just one replica, answering on port 8080, and it creates a service, and the service is of type LoadBalancer, set to NLB (network load balancer). So this is an AWS load balancer, set to NLB; it's important that we use NLB, because without NLB we can't do the source IP preservation.
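A minimal sketch of such a manifest (names and ports follow the demo's description; the container image and the exact annotation are assumptions; on AWS, the annotation below requests an NLB rather than the default classic ELB):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo-server
  template:
    metadata:
      labels:
        app: echo-server
    spec:
      containers:
        - name: echo-server
          image: <echo-server-image>   # hypothetical; any HTTP echo image
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: echo-server
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  type: LoadBalancer
  selector:
    app: echo-server
  ports:
    - port: 8385
      targetPort: 8080
```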
B: So it's the echo-server external service load balancer, and as well as having a cluster IP it has an external IP, and it's answering on port 8385 and redirecting to that pod. So if I grab that now, it won't work yet; it takes a moment before...
B: I tried to hit http and then the new load balancer IP and then port 8385; yeah, it's not working yet, we need to wait a moment. But in a moment we will see that we're able to hit that pod. When we look at the pod's logs, though, if you recall the diagram that we looked at, we won't actually be able to see the real IP, my public IP, essentially. So while we're waiting for that, let's have a look...
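The check being performed here can be sketched as follows (assumes a cluster with the echo-server deployment and service from earlier; the names are from the demo):

```shell
# Read the NLB hostname assigned to the service, then hit it from outside.
LB=$(kubectl get svc echo-server \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
curl "http://${LB}:8385/"

# Inspect the pod's logs to see which client IP the pod actually observed.
# Without the eBPF data plane this shows a NATted address, not the real
# client IP.
kubectl logs deploy/echo-server | tail
```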
A: So there is a question about support in the eBPF data plane for IPv6, if you know.
B: No, it doesn't; it's listed as a limitation, actually. Let me pull that up now. I'm glad I checked that, or I was going to give the wrong information, so...
B: Oh no, I know it's being considered, but I definitely don't know the ETA. If the listener wants to join our Slack (Calico uses Slack), that's the best place to ask that question, and then I can go away and find out exactly what I'm allowed to say officially, and I'll deliver the best answer I can.
A: Yeah, sounds good. Another question about the pros and cons of eBPF from a security perspective.
B: I don't think there's a great deal of difference from a security perspective, because at the end of the day the same policy is being implemented regardless of which mode you use. I suppose you could say that the iptables data plane has been around for longer, and therefore perhaps you could make a case that it was more secure. However, the eBPF data plane requires a newer kernel, and requiring a newer kernel arguably has security benefits.
B: So I don't think it's a big consideration, actually. I would say they're functionally equivalent, really.
A: That's a good point. Actually, a question that I also had: what are the requirements for installing the eBPF data plane, from a kernel or operating system perspective?
B: Yeah, so that's documented; let me give you the right information. You need a supported Linux distribution, which is either Ubuntu 20.04 or newer, or 18.04...
B: ...if you have an updated kernel; or Red Hat version 8.2, which has a kernel version of 4.18 or above; or another supported distribution which has a kernel of 5.3 and above. Actually, on that same URL that I showed you a second ago, if you just search for "project calico enabling ebpf", you'll find all the information, including the prerequisites around the kernel. There are a couple of other prerequisites as well, around mounting the BPF file system; all of that is on there.
B: Cool. I think we should push on, just to make sure I get through this in time, otherwise we may overrun. You can see that in the meantime the echo server is now responding on the port, so if we have a look at the logs...
B: There you go. We're looking at the logs of the actual echo-server pod, and you can see that it didn't see the real public IP of the user. Keep that in mind, because we haven't turned eBPF on yet; once we have, it should change. Now, the other part I was going to do at this point was to actually demo the performance, but to be honest we're getting low on time.
B: So I'll point out that on the Tigera blog there is a detailed blog post which includes the performance graphs, and I suggest people go and have a look at that. Suffice it to say that the eBPF data plane performance is significantly better, and if anyone wants to test it themselves, they can follow along and run their own tests.
B: So let's actually turn on the eBPF data plane now. The first thing we need to do is switch the encapsulation for Calico from IP-in-IP, which is the default, to VXLAN. We do that with calicoctl.
B: You can see that I ran two calicoctl commands here. There's an IPPool custom resource, called default-ipv4-ippool, and what we did is we turned off IP-in-IP encapsulation and we turned on VXLAN encapsulation.
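The resulting pool looks roughly like this (a sketch; the demo made the change with two calicoctl commands, and the CIDR here is only an example):

```yaml
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: default-ipv4-ippool
spec:
  cidr: 10.128.0.0/14    # example pool CIDR; use your cluster's
  ipipMode: Never        # IP-in-IP turned off
  vxlanMode: Always      # VXLAN turned on
  natOutgoing: true
```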
B: Well, I've created a bit of YAML, a ConfigMap called kubernetes-services-endpoint in the tigera-operator namespace, and it has the host name, which should be the same, yep, that's the same, and it has the service port. We haven't applied this YAML yet, but let's do that now, and while it's applying we can talk about what it actually does.
B: Okay, so it's created this kubernetes-services-endpoint ConfigMap, which has this configuration. I'm just going to put in a sleep: if we wait one minute now and let Kubernetes pick up that new ConfigMap, and then we restart the Tigera operator, it will actually tell Calico to talk directly to the API server, rather than going through the Kubernetes service via kube-proxy.
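That ConfigMap follows the shape documented for enabling Calico's eBPF mode; a sketch (the host value is a placeholder for your cluster's API server address):

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: kubernetes-services-endpoint
  namespace: tigera-operator
data:
  KUBERNETES_SERVICE_HOST: "<api-server-host>"  # e.g. the cluster API DNS name
  KUBERNETES_SERVICE_PORT: "6443"
```

With this in place, Calico components reach the API server directly by address, which is what lets kube-proxy be removed in the next step.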
A: While we wait, yeah, sure, I'm just going to read this one: is preserving source IP just an example of uniform, software-defined, general-purpose networking rules, or is source IP preservation the primary value of Calico?
B: Well, there's two parts to that. The first part is, it's not an advantage of Calico generally, it's specifically of the eBPF data plane, and actually the way I see it, it's more of a side effect of the benefit of eBPF.
B: I wasn't in the room when they originally decided to make an eBPF data plane, but I think the primary reason for making one is the performance advantages. But then, I showed that large diagram of the packet flow through a Linux node, and because kube-proxy is implemented in the layer 3, green part of the diagram, and the eBPF hooks are at the start and the end of the flow, in order to...
B: ...implement policy in eBPF, you kind of also have to replace kube-proxy; that's my understanding. And if you're replacing kube-proxy anyway, then you can improve it, and I think that was the side effect. But source IP preservation is actually a really nice side effect. It's something that, surprisingly, was not there by default: being able to see the source IP of your user. Especially if you consider a compliance-type environment, you know, you need to be able to accurately record your logs.
A: Just a follow-up from the same person: the incumbent or alternative to the eBPF data plane in Calico is based on iptables, is that correct?

B: That's correct, yeah.
B: Exactly. So if the viewer has time to watch it: there was a Kubernetes security and observability summit earlier in the year, and I did a short talk, I think it's about 20 minutes, about why we offer multiple data planes and what the advantages are, contrasting those, rather than talking about one specifically.
B: I said, here are the data planes that we have, here are the advantages of them all, and I really believe that no single data plane is the perfect solution for all users, because they all have pros and cons, and that includes eBPF. It has high performance, but it has, for example, the kernel requirement, which for some environments immediately means it's not possible. So yeah, all right.
B: So now that we've waited, we restart the Tigera operator: we're deleting the pod in the tigera-operator namespace, and it will immediately be recreated. What we've done there is actually told...
B: Oh yeah, I remember now: there's an OpenShift operator, and in here we can tell it that we want to turn off kube-proxy. So right now, if we...
B: If we have a look, you'll see that there should be five, no, six, excuse me, kube-proxies running. Now we just patch the OpenShift operator to tell it that we want to turn off kube-proxy, and as soon as we do that, we'll see that they're terminating already.
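On OpenShift this is done through the cluster network operator; the documented shape of the change is a merge patch along these lines (a sketch of the step shown in the demo):

```shell
# Tell the OpenShift cluster network operator to stop deploying kube-proxy;
# Calico's eBPF data plane takes over service handling.
kubectl patch networks.operator.openshift.io cluster --type merge \
  -p '{"spec": {"deployKubeProxy": false}}'
```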
A
B
We're
still
running
the
calico
iptables
data
plane,
but
we've
turned
off,
we
turned
off
coop
proxy
and
we've
got
the
we've,
got
it
talking
directly
to
the
to
the
api.
So, finally, now we can actually turn on eBPF, and we do that, again, with an operator command: we're merging this config, and we're saying that we want the Linux data plane to be BPF.
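The merge in question should look something like the following; this is the documented operator setting as I understand it, so verify the field name against the Calico docs for your version:

```shell
# Switch the operator-managed Installation resource to the eBPF data plane.
kubectl patch installation.operator.tigera.io default --type merge \
  -p '{"spec":{"calicoNetwork":{"linuxDataplane":"BPF"}}}'
```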
B
And the cool thing is that enabling eBPF mode shouldn't disrupt any existing connections. So if you have live connections, they will continue to use the standard Linux data path until they time out, and when they time out, they'll re-establish using the eBPF data plane, which is pretty cool.
B
So if we look now, we're looking at all the pods in the calico-system namespace, and you can see that the nodes are starting to restart: you can see that one restarted 14 seconds ago, one 28 seconds ago. So it takes a moment; there's still one that hasn't restarted, still three that haven't restarted, actually, so we just need to wait a moment again.
B
On the wrong one, of course. Of course it will, of course.
B
And it's back, initializing. Cool, okay, so they're all running now, so we're running the BPF data plane. Now, there are lots of different ways that we can show that we're running the BPF data plane, but one of the quickest ways to prove that we are is to look at the logs for the calico-node.
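One possible version of that log check; the namespace and label are assumptions based on an operator-managed install:

```shell
# Look for BPF-related startup messages from the calico-node pods.
kubectl logs -n calico-system -l k8s-app=calico-node | grep -i bpf
```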
B
Nothing's changed: this is still the same service, still running, uptime 16 minutes now. I'm never sure whether or not I need to recreate the service, so we're going to find out. So if we curl it again, you can see that we're proving, essentially, that the eBPF data plane is working. And remember, the service wouldn't be working without kube-proxy if we weren't using the eBPF data plane. So the fact that I got a response from my echo server proves that the eBPF data plane is both enabled and working.
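The check itself is just a curl against the echo service. Sketching it with placeholder values, and noting that with the eBPF data plane the echo server's reply should also report your real client source IP, since kube-proxy's SNAT is no longer in the path:

```shell
# With kube-proxy removed, a reply here means the eBPF data plane is
# doing the service load balancing. Address and port are placeholders.
curl -s http://<service-ip>:<service-port>/
```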
B
Yeah, it's good, isn't it? And obviously that means that, you know, your Apache logs or whatever are going to reflect that. And maybe another use case, actually, might be understanding where your users are coming from, your GeoIP, maybe, as well. So there are quite a lot of use cases for that. Cool. So I think this is good timing, actually, because, since we skipped the performance part, we can actually do two things.
B
The first one is for me to show you the performance graphs and, like I say...
B
Oh, never mind, I thought I had it: the blog post. I'll try to find that in a minute.
B
Yeah, perfect, yeah, that's fine! Actually, I think I can grab it. Let me see if I can grab it right now.
B
So this is a presentation on a similar topic. Here we go. So there is this caveat, and this is an important caveat, which is that traffic between two instances, which is a single flow in AWS, can only do a maximum of five gigabits, and this is nothing to do with Calico; this is an AWS limitation.
B
This screen grab here is actually from the AWS documentation, just to show that this is, you know, a real thing. The reason I say that is because we're testing with a single flow. So this is OpenShift with iptables; this is the throughput, and you can see that iptables is the blue, and more is better. You can see that for TCP it makes very little difference, but if you look at the UDP performance, it's nearly twice as much.
B
I don't really like spending a long time on graphs anyway, so, you know, people can come back and validate this themselves, and you can also see that the CPU utilization is lower. But I think more interesting than that: let's go back to the CLI stuff, because people can look up graphs anytime they like. So let me show you one more cool thing that we can see, and then I think that should leave us with a couple of minutes for any more questions.
B
So one nice trick we can do is this: we know my IP now.
B
So I created a variable called ebpf_interesting_ip, and it's my IP, my public IP, and the other one is the interesting port, the port number we care about. We can run this for loop, and what this is actually going to do... it looks pretty funky when you first look at it, but it's not too complicated, so just to break it down:
B
Then it's going to iterate over those; it's going to print out the name of the calico-node, and then it's going to kubectl exec onto the calico-node, and it's going to dump the BPF connection tracking table, and then it's going to grep for my IP and the port we care about. So what this should do is... there we go.
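The loop described above might look roughly like this; the variable names come from the demo, but the pod selector and the conntrack subcommand are my best reconstruction, so verify them against your Calico version:

```shell
EBPF_INTERESTING_IP="<your-public-ip>"
EBPF_INTERESTING_PORT="<service-port>"

# For each calico-node pod, dump the eBPF conntrack table and filter
# for flows matching our source IP and service port.
for pod in $(kubectl get pods -n calico-system -l k8s-app=calico-node -o name); do
  echo "== ${pod} =="
  kubectl exec -n calico-system "${pod}" -- calico-node -bpf conntrack dump |
    grep "${EBPF_INTERESTING_IP}" | grep "${EBPF_INTERESTING_PORT}"
done
```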
B
You can see the flow, but what we should have seen there is the flow coming in on the ingress node and then reaching the workload, and we can actually see that it's not NATed through the flow. I don't know why that isn't working, but I don't have time to troubleshoot it now; we're almost up on time already. Are there any more questions?
A
Yeah, so, viewers, please ask any questions if you have any. And while you're thinking about your questions, maybe, Chris, I can ask you: where can people go next if they want to ask more questions and find out more about what we've learned today?
B
Yeah, yeah, great. So the best place to go is... well, in terms of documentation:
B
The best place to go is docs.projectcalico.org, which is the one I showed here, and you can see that on here there are quite a lot of high-quality resources. And if you search here for eBPF, you'll find that there's quite a lot of information about what eBPF is, and actually quite a lot of detail about how it works, and also there are deployment guides, not just for OpenShift but for deploying eBPF on clusters on other platforms as well.
A
You mentioned you have a Slack, so maybe...
B
There's a Project Calico community page, and is it that one?
B
No, that's taking me back to the same place. Yeah, if you search for Calico Users Slack, you will find... oh, here we are, yeah. So it's this tigera.io Calico community page, and you'll find that there's quite a lot of information here about how you can get involved. There are these certifications, which are free, which are brilliant; these are for the open source product. There are community meetings, and you can find our Calico Users Slack here as well.
B
I'm on there all the time, and there are people on there as well, Sean and other people, who are deeply, deeply knowledgeable. So any question you have, we should be able to answer.
A
Wonderful. All right, so I think we're out of time, and we don't have any further questions. Perfect. So, yeah, thank you, Chris. One of our viewers said this was thought-provoking; I can agree. So thank you for that, and thank you to all of our viewers.
A
Yeah, it was a pleasure, and yeah, see you, everyone, next Wednesday, every Wednesday on Cloud Native Live. Thanks again, Chris.
B
Thank you. Bye-bye.