From YouTube: eBPF Data Plane Deep Dive - Chris Tomkins, Tigera
Description
Are you always curious? Then let’s take the lid off a cluster running the Calico eBPF data plane and see what’s going on in there.
You will learn:
* The theory of a packet walk through a cluster running the Calico eBPF data plane
* How to see the real thing on a cluster running Calico eBPF
* How to use available tools for diagnostics or to gain visibility of Calico’s eBPF data plane
Yeah, thanks for coming along, and thanks, Kimbo, for hosting. Just to introduce myself quickly first: I'm the developer advocate at Project Calico / Tigera. Formerly I was a Calico user in a previous role, and I just really liked the project, so I kind of fished to be involved in the team, and here I am. I like to change this obsession every time I post this slide.
So today's obsession is Christmas mince pies, which I really like, and which strangely don't contain any meat. I'm always looking to learn, share, and connect, so get in touch; I'll share my contact details at the end of the presentation. So yeah, let's chat.

Hopefully you know what Calico is, but if not: the Project Calico community and Tigera develop and maintain Calico, which is an open source networking and network security solution for containers, virtual machines, and host-based workloads.
Sorry, all three of my monitors just informed me that they're going to turn off in a minute to save power, which I really don't want them to do, so let me just fix that.

Here are some details: we have more than 6,000 people in our Slack, more than 150 contributors, and more than a million nodes are powered by Calico every day.
There are lots of ways to get involved and contact us, but the Calico Users Slack is a particularly good one; I'm very active on there, as are many other users, so get involved.

What we're going to dive into today: first, a quick word on background knowledge and prerequisites.
From a background knowledge point of view, in such a short session I wasn't able to go into eBPF itself, or to teach you how to build a Calico eBPF cluster. The reason I haven't done either of those things is that there's a lot of really great documentation out there about both.

So if you're unclear what eBPF does, then come back to this session after you've learned about that, though I think you'll pick up quite a lot here as well. Our documentation does a really good job of giving a basic example of how to build an eBPF cluster; it's surprisingly straightforward to convert a vanilla cluster to our eBPF data plane.
If you want to try this on your own cluster, you can just follow along and see what I do today. But if you want to build your own eBPF cluster, you can convert any cluster to an eBPF cluster; just go to docs.projectcalico.org and the three links on this slide. I think we'll probably be sharing a recording of the session later; those three links will take you to the resources that tell you how to do that.
I won't dwell on this too long, but why dive deep at all? A cluster usually just works, right, so why take the time? Well, it enables you to diagnose your own problems and to get a deeper understanding of how it works, and it improves support as well, commercial or open source. In an open source scenario, if you're able to frame the question well and give us the information we need, you have a much better chance of getting the kind of reply that will help you out. So I think it's beneficial. Let's just dive straight in.
The first thing I thought I'd focus on is checking the basics; I wanted to show you the mistakes that commonly cause problems. I'm just reading a chat question: would these demos work on a kind cluster? Jerome, I'll come to that point shortly; there's a moment at which it's best to discuss that, actually very soon. I'm not going to show you every setting needed to enable an eBPF cluster, because the documentation does that. What I am going to show you is how there are a few settings which result in a sub-optimal cluster rather than a non-functional cluster.
Now, this blue mount point here is really important, because if you don't have that mount point set, the BPF cluster will continue to work, but when the data plane restarts you will get a longer-than-necessary service interruption. I've been discussing internally with the team whether we may change this to become a hard requirement: if this file system is not mounted, then maybe we should not spin up the eBPF data plane at all.
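As a quick sanity check, a sketch of verifying that mount (run on each node itself, not inside a pod; he demonstrates the same check later in the session):

```shell
# Check that the BPF file system is mounted at /sys/fs/bpf.
# Zero lines of output means the mount is missing.
mount | grep "/sys/fs/bpf"

# Equivalent check; findmnt exits non-zero if the mount is absent.
findmnt /sys/fs/bpf
```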
The second part of this is that you need a supported Linux distribution: Ubuntu 20.04, Red Hat 8.2, or another supported distro. If you go to our documentation you can see the full list, and that's to do with both the kernel version and the headers.

So the short answer to your question, Jerome, is: it depends, but you will need to have met the requirements defined here. As long as you have that file system mounted, you're running a new enough kernel version, and the headers are built into your distribution, then it will probably work. If you want to be 100% sure it will work, it needs to be one of these distros.
Some other basics quickly, before we dive deeper. You need to have kube-proxy either fully disabled, or, in the event that you can't disable kube-proxy, you need to use the configuration flag `bpfKubeProxyIptablesCleanupEnabled` set to false. Of those two settings, the preferred one is to disable kube-proxy. The reason you can't do that in all scenarios is that in some, like k3s, kube-proxy is actually built into the main binary and so can't be disabled; in that scenario you use the `bpfKubeProxyIptablesCleanupEnabled` argument.
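For that k3s-style case, setting the Felix flag is a sketch along these lines (assuming a FelixConfiguration resource named `default`, as is typical):

```shell
# Tell Felix NOT to clean up kube-proxy's iptables rules, for clusters
# (e.g. k3s) where kube-proxy cannot be disabled.
kubectl patch felixconfiguration default --type merge -p \
  '{"spec":{"bpfKubeProxyIptablesCleanupEnabled": false}}'
```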
A
The
final
basic
that
I
wanted
to
highlight
and
I'll
show
you
these
on
a
real
cluster
later,
is
that
your
encapsulation
should
be.
It
should
be
vxlan
or
no
encapsulation
at
all,
because
ipnip
encapsulation
is
not
highly
performant
with
with
our
ebpf
data
planning.
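With the Tigera operator, that encapsulation choice lives in the Installation resource; a sketch (the pod CIDR here is illustrative):

```yaml
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    ipPools:
      - cidr: 192.168.0.0/16    # example pod CIDR
        encapsulation: VXLAN    # VXLAN or None; avoid IPIP with eBPF
```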
Now, into more detail: a theoretical packet flow. What we're seeing here is not our eBPF data plane yet; it's the packet flow in netfilter through a single Linux host, and that includes a Kubernetes node. This diagram is courtesy of Jan Engelhardt. It's an amazing diagram, but what you'll notice is that there's a large section in the middle, with the input path, forward path, and output path, and the majority of the netfilter processing happens there.
However, BPF programs are attached in different places: here at the XDP eBPF hook, here at the ingress qdisc, and here at the egress qdisc. So we're able to attach eBPF programs in the Linux kernel at the start of the flow and at the end of the flow, but not in the middle. All of the BPF processing happens either at the start or the end of the flow, and you can see eBPF can sidestep the whole netfilter flow.
Now, eBPF programs are attached at these points, which are called qdiscs; they're attachment points. There's one qdisc in particular called the clsact qdisc. Is it showing on this diagram? No, it is not, and that is an attachment point for attaching an eBPF program to do arbitrary work. The clsact qdisc is what's called a no-op qdisc, which means it doesn't perform any action at all; it's simply an attachment point if you wish to attach your programs. The programs themselves are compiled at build time, and the per-node agent, Felix, the Calico node agent, attaches the policy programs at runtime.
So at ingress you can see that the packet is received, and then, if we go back, essentially we're seeing this part of the diagram zoomed in. Now, some BPF programs can be attached at the XDP hook here, but the majority of the eBPF policy happens here, attached at the tc ingress tree of qdiscs and the clsact qdisc. Now, this leads back to a previous point I made: you might recall that I said kube-proxy needs to be disabled for our eBPF data plane, and the reason for that is that kube-proxy is implemented inside this green box here.
Our eBPF programs have to work alongside the kube-proxy functionality, and it's difficult to interleave the eBPF program functionality and the kube-proxy functionality. As a result, it was necessary to replace the kube-proxy functionality in Kubernetes with eBPF functionality, essentially to rewrite that functionality in eBPF. Fortunately, rewriting the kube-proxy functionality in eBPF actually allowed our dev team to make some improvements, so we have some features that kube-proxy does not; I think I'll go into that later in this presentation.
At egress we have a similar scenario: netfilter postrouting happens, and then we attach our eBPF programs here. Now, those eBPF hooks that I showed, if we dive back to this diagram: essentially these hooks at ingress and egress can happen on any of these red lines.
And the traffic is still routed normally through iptables and the FIB. Just checking my notes to see if there's anything else I should point out: the ingress and egress programs are attached at the cali, data, and tunnel interfaces; other interface types are handled as exceptions. I should make that point: yes, IP-in-IP tunnel interfaces and WireGuard interfaces are handled as exceptions.
This diagram shows the same thing, essentially; it's just a different diagram. I felt like it showed it in a different way, so I thought I'd include both. So, to restate: the first packet for each flow will be processed as usual.
I'm just keeping an eye on the time; we have quite limited time, so I'm going to show you a live demo and we'll see all of these tools in that demo. For now I just want to give you the key points.
These are tools for examining the eBPF data plane, I should say. First of all, we have a tool, `calico-node -bpf`, to examine Calico's eBPF maps. If we just jump back quickly: we talked about how eBPF stores data like IP sets in maps, which we consider to be data storage.

The tool runs within the calico-node pod, so we run it using kubectl exec in the calico-system namespace, with the name of the calico-node pod that we're interested in, and then the command; I'll give you live examples of this. As with so many Kubernetes commands, it doesn't really roll off the tongue, but when you see it live, hopefully it will stick.
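Put together, the invocation is a sketch like this (the pod name below is illustrative; list your own calico-node pods first):

```shell
# Find the calico-node pods (one per node).
kubectl get pods -n calico-system -l k8s-app=calico-node

# Dump the eBPF IP-set maps from one node's calico-node pod
# (replace calico-node-abcde with a real pod name).
kubectl exec -n calico-system calico-node-abcde -- \
  calico-node -bpf ipsets dump
```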
Next, tc. tc is a tool for showing and manipulating traffic control settings; it allows you to view the qdiscs, and to view and manipulate traffic control settings generally. It can be used to see if an eBPF program is dropping packets, so if we have time we'll do that live in a moment.
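On a node, viewing the qdiscs for an interface is a sketch like this (the interface name is illustrative):

```shell
# Show the qdiscs attached to an interface, with statistics.
# On a Calico eBPF node you'd expect to see a clsact qdisc, and
# "-s" includes counters such as dropped packets.
tc -s qdisc show dev eth0
```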
Okay, the next tool I wanted to highlight is our eBPF program debug logs. These are the logs generated by Calico, and that warning is really important: this has a significant impact on performance, so you shouldn't turn it on on a production cluster.
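Turning that logging on and off again is a sketch along these lines, via the FelixConfiguration `bpfLogLevel` field; remember the performance warning above:

```shell
# Enable verbose per-packet eBPF debug logs (NOT for production).
kubectl patch felixconfiguration default --type merge -p \
  '{"spec":{"bpfLogLevel": "Debug"}}'

# The log lines appear in the kernel trace pipe on each node, e.g.:
#   cat /sys/kernel/debug/tracing/trace_pipe

# Turn it back off afterwards.
kubectl patch felixconfiguration default --type merge -p \
  '{"spec":{"bpfLogLevel": ""}}'
```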
Okay, I flew through that because I wanted plenty of time to demonstrate this on a live cluster, and as a result we've actually ended up with enough time: I think I need about 20 minutes, and we have just over that. So if anyone wants me to go back over any of that content in more detail, or has any questions, now would be a good time.
All right, let me show you what I've built; this is what we're going to examine. We have a five-node cluster.
Obviously my home ISP changes its IP from time to time. I'm making a TCP connection to a GCP load balancer; again, that IP is different from the slide because I've torn down this environment and rebuilt it, but the port I connect to is the same, and we'll actually see the load balancer forwarding me to a NodePort on one of the nodes.
That program will make a forwarding decision, referring to the eBPF maps; then I'll be VXLAN-tunnelled across to the node that's serving the workload, which is answering on port 8080. The ingress and egress qdisc processing that we showed will happen here on the physical interface of the node, as well as on the pod's veth interface pair, and then we'll see a direct server return to the client. Notice that the return traffic doesn't go back through the ingress node, as it conventionally would with kube-proxy.

So let me keep that diagram handy.
Okay, first things first: if we look at our nodes, we can see the cluster has been up for a day and a half and we're running up-to-date Kubernetes. We have a master and three workers.
The next thing we need to check, as I showed earlier, is that we have the /sys/fs/bpf file system mounted. Wow, if we just run mount, you can see we have a ton of mount points, so let's run something a little more targeted.
We don't need to check these permissions particularly. It seems misleading that this says none, but that's fine, that's correct. I used to know how to read this format, but I can't quite recall any more; the key point is that this is fine, this is what you're expecting to see. The bad outcome would be that you have zero lines of output.
The next thing we need to check, and the next most common misconfiguration, is that we still have kube-proxy running, as I described. If you still have kube-proxy running on your cluster alongside the eBPF data plane, things will continue to work, but you'll see very high CPU utilisation, and that's because kube-proxy and the BPF data plane are fighting over the iptables rules. So we need to make sure kube-proxy is disabled. The first way we can check that is to simply look at the output, and we should see that we have no kube-proxy running; then I'll show you how we disable it.
In Kubernetes you'll be familiar with the concept of a DaemonSet, which specifies that you want a workload to run on every node, and that's how kube-proxy conventionally runs: in the kube-system namespace there's a DaemonSet called kube-proxy. You can see that it's not running, which is what we want, and that's the case because we've applied this non-calico=true node selector.
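Applying that selector is a sketch like this: since no node carries a `non-calico: "true"` label, the DaemonSet schedules zero pods:

```shell
# Disable kube-proxy by giving its DaemonSet a nodeSelector that
# matches no nodes (no node is labelled non-calico=true).
kubectl patch ds -n kube-system kube-proxy -p \
  '{"spec":{"template":{"spec":{"nodeSelector":{"non-calico": "true"}}}}}'
```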
As I mentioned before, if you had a cluster type, for example k3s, where you're not able to disable it, then you just edit your Felix configuration. If I scroll up and show you that more carefully: you would add the flag I shared earlier, the one with a really long name which I never remember. What that does is tell Felix not to tidy up the iptables rules created by kube-proxy, but simply to ignore them.
Okay, the last thing I wanted to show here before we move on to the next part is that you'll notice vxlanEnabled: true, and this is to make sure that we have VXLAN encapsulation. Again, if you had IP-in-IP encapsulation things would work, but the performance would be poor, and that's because, you might recall, I mentioned how IP-in-IP interfaces are handled as an edge case in the BPF data plane.
Kubernetes presents a bunch of services by default, and one of those services is called kubernetes. It's in the default namespace, it's a ClusterIP, and it's actually the Kubernetes API. But of course we just said that we need to disable kube-proxy, so we can't disable kube-proxy and then have Felix rely on kube-proxy to talk to the Kubernetes API. The crux of it is: we need to change the configuration of Calico to tell it to talk directly to the real workload endpoint of the API, rather than talking to this ClusterIP.
But this is the real endpoint, the real workload endpoint: 2.73 on port 6443. And if we look again at the pods: 2.73, 6443.
Now, the Tigera operator's job is to deploy Calico's per-node agent and to bring it into conformance with a target configuration. So what happens is: we create this kubernetes-services-endpoint configuration and specify the real Kubernetes service host and port, and as soon as Felix, the per-node Calico agent, sees that configuration, it will restart the Calico daemon and speak directly to the Kubernetes API, at which point we can stop the kube-proxy service.
The first thing I created is a NetworkSet called ip-allow-set. It applies a label, also called ip-allow-set, and that label is set to true. It has an IP address, and that IP address is my current public IP address; we'll check it in a moment. Then we basically say: if you're in that list, you're allowed to access anything you like in the default namespace, and anyone else is denied.
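As a sketch (the name, label, and address are illustrative), the NetworkSet side of that looks like:

```yaml
apiVersion: projectcalico.org/v3
kind: NetworkSet
metadata:
  name: ip-allow-set
  namespace: default
  labels:
    ip-allow-set: "true"
spec:
  nets:
    - 203.0.113.10/32   # my current public IP (illustrative)
```

A network policy can then select `ip-allow-set == "true"` as its allowed source and deny everything else.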
This command looks pretty hairy when you first see it, but I'll explain it bit by bit. What it actually does is: it's a for loop, and it grabs the names of all of the calico-node pods, then manipulates the output to get rid of some unnecessary noise. So it looks like a funky command, but that's all it does.
So that quick command lets us see, for example, the IP sets, but it will also let us see all kinds of other eBPF maps, and that lets us see what's happening on the node. Let's see some other cool examples: this next command looks really crazy, but again it's actually a variation on the same command.
Our flow actually came in on the same node that happened to have the echo-server pod; let's run it one more time. This is the workload pod that we're interested in, and this is its veth pair.
So all that remains is to have a quick look using tc, like this. I mentioned tc earlier: show the qdiscs for that device, and we can see the clsact qdisc, which is the no-op qdisc I mentioned. We can see that it's got five drops, which means that, as well as the legitimate traffic that I've been sending, there have been five attempts to connect to that port.
Now, those were actually me, yesterday. So I'm going to quickly grab my phone, which I've taken off the Wi-Fi, and we should find that if I try that URL again... go on... there's a chance this may not work; I've noticed Google Chrome does some really interesting things on my phone where it doesn't connect out properly, but let's give it a go. So my phone is trying to connect to that URL and failing, because my phone is on 4G and it's not coming from the same source address.
And that is the logging. Now, this is the logging that I mentioned we shouldn't turn on on a production cluster, but turning it on is straightforward, and there we get the debug logs. This looks crazy, and the reason there's so much going on is because, if I Ctrl-C that, you can see that the destination port is port 22, SSH: all this traffic is actually itself being evaluated; it's evaluating the SSH session that I'm using to connect to the host. That being said, these logs are incredibly verbose, which is why you shouldn't turn this on on a production cluster.
But you can see this is a connection-tracking lookup hitting an existing rule, and we can see the hexadecimal source and destination IP addresses and so on. Let me just turn off that debugging while I remember.
Cool. Oh, Mario's got a question; I'll answer that in one second. Mario, just before I answer, let me come back to this final slide. I wanted to give you these contact details so you can make a note of them: my personal Twitter, LinkedIn, Slack and so on. I'm very happy to chat if any of you would like to. So, you ask what's actually stored in the BPF maps.
Let me go back to my slide, or my notes, I should say, which list all of the components that are in there; I think it's back here. Mario, to answer your question, the eBPF maps store the IP sets, but there's a blog post I wrote that answers this question really well. I'm avoiding answering from memory, because I know I'll miss one or two if I don't check.
Yeah, here we are: you can also see the connect-time load balancing programs, IP sets, NAT tables, and routes. So I think that's everything that's stored in there.
Jerome, you asked a really great question. I'm going to publish a blog post, hopefully in the next few days, about building a BGP, Kubernetes, and Calico iptables data plane cluster in minikube on my laptop; I had quite a lot of success with that. I tried adding eBPF to that cluster and it didn't work, because the minikube ISO doesn't contain the necessary BPF headers and so on, even though it is running a late enough kernel.
Failing that, if you look at the eBPF course I've released in the last few days, if you search for CCOL2 eBPF, which is the Calico eBPF course, you'll find instructions in there about how to enable the eBPF data plane on a vanilla Kubernetes cluster on VMs. In the course I describe doing that in GCP, but there's no reason you couldn't do it on your local host.
Oh, fantastic: Francis has just posted a link about potentially getting a BPF-ready minikube. I did look into that briefly; thank you. And Jerome, here is the course ID that you asked for.
I put that in the chat, but actually, I don't know why I don't just show it on my screen. If you have a look here, you'll see.
We have this new certification: if you go to the Tigera website, you'll see it here. It's an eBPF certification; it teaches you what eBPF is, how to enable it, and some of the content that we've just covered. So yeah, I think it should be helpful for you. No worries.
Mario, I see your question about how IP routes coexist with the BPF maps. To be honest, I don't think I understand it in enough detail to give a definite answer now; perhaps we can talk on the Calico Users Slack, because I don't want to give the wrong information, and I'm not quite sure how that relationship works. So maybe we can take that one offline. Francis, let's have a look.
Yes, that's right, Francis, correct. Francis is making the point that an eBPF map can store any kind of data; it's just that the Calico data plane chooses to store these particular pieces of information. But this also relates to Mario's point: if it's storing routes in the eBPF maps, how does that relate to the normal routing table? And the answer is, to be honest, I'm not totally sure.
So I think we need to take that one offline. Cool, we've already used up all of my time, so unless there are any really quick questions, I think we should hand over, because I'm interested to see the next session.