A
They will build things, they will break things - I hope so - and they will answer your questions now. As every Wednesday at 11am ET, and this week we have the pleasure of receiving our friend Chris Tompkins from Tigera. He'll talk about Calico, talk about eBPF, and how to supercharge AKS networking with both of these amazing products. I should also remember to say: this is an official live stream from the CNCF, and it is subject to the CNCF code of conduct. Please do not add anything to the chat or the questions that would be in violation of that code of conduct. Basically, please be respectful of all of your fellow viewers, friends and presenters. With that, I hand over to my friend Chris, to show us how to supercharge AKS networking with eBPF and Calico. Hi Chris, how are you doing? It's nice to have you.
B
Very good, I'm very good. I'm here in London and it's unbelievably hot here today, so I have a fan on behind me - I hope you can't hear it too loudly. But yeah, I'm good, I'm good. So, my name's Chris Tompkins, I work for Tigera and Project Calico. I'm a developer advocate, so it's my job to get out into the community, understand the community's requirements, and help the community understand our products and our open source tools.
B
So, should I start by telling you a bit about Project Calico, just to make sure people know about it?
A
Yeah, Chris - we've heard a lot about Calico and a lot about eBPF, so it's amazing to learn about these projects. Please show us a little bit about them. Great.
B
Thank you. First of all, I just want to say: don't worry, there won't be too many slides. I don't like sitting through too many slides, so we only have, I think, six slides in total, including the one you're looking at. So, Project Calico is an open source networking and network security solution.
B
It's a way to connect together your containers, your virtual machines and your host-based workloads, and it implements best practices for Kubernetes security with excellent performance. It's running on over a million nodes in the cloud today, so it's battle-tested, production-hardened code, with full support for Kubernetes network policy, interoperability with non-Kubernetes workloads, and a really large, active contributor community. So it's a really successful project and product.
B
But we're not just talking about Calico today - we're talking specifically about eBPF and how eBPF relates to Calico. So, do you want me to talk a little bit about eBPF first?
B
Cool. I wonder if we should jump back down to our video feeds and take away the slides for a bit. I made some notes to share about what eBPF actually is, first of all. So, forgetting about Calico for a moment: suppose we want to get exceptional networking performance on a Linux node.
B
One way to do that is to implement the code inside the Linux kernel, because, obviously, if you put the code in the kernel, you can get really great performance.
B
But that brings challenges with it. If you want to put code into the kernel, maybe you have to write a kernel module, and you have to get that approved and get your PRs accepted, and contributing can be quite challenging. So, the Linux kernel, back in the 90s...
B
...had the Berkeley Packet Filter, BPF, added. It's not eBPF, but the original Berkeley Packet Filter, and really it's a way to implement a safe, secure, lightweight virtual machine inside the Linux kernel, running bytecode that can take advantage of a subset of kernel features. More recently than the original BPF, we got eBPF, the extended Berkeley Packet Filter. It's much more recent, it depends on a Linux 4.x kernel, and it's entirely restricted to Linux.
B
For the time being - we'll talk more about that later, I think. So you need a fairly recent kernel, but once you have that, you can run a safe, secure, lightweight virtual machine inside the kernel without changing any kernel source code.
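As a concrete (hedged) sketch of that kernel requirement: the version floor below is an assumption drawn from Calico's published eBPF requirements, not a number stated in this talk, so check the current docs for the real minimum.

```shell
# Rough check that a node's kernel is new enough for an eBPF data plane.
# The 5.x floor is an assumption (the talk only mentions 4.x as when
# eBPF itself appeared); verify against Calico's documented minimum.
required_major=5
kernel=$(uname -r)            # e.g. "5.15.0-1051-azure"
major=${kernel%%.*}           # everything before the first dot
if [ "$major" -ge "$required_major" ]; then
  echo "kernel $kernel looks new enough"
else
  echo "kernel $kernel is likely too old"
fi
```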
A
Yeah, sure. Let me ask you: today we're going to talk about AKS, and how you leverage eBPF on AKS. Is this just for AKS - does only AKS have this capability?
B
Yeah, I'm really glad you asked that question, because no, it's not just for AKS. Later on in this session we'll do a demo, and we wanted to focus that demo on a particular platform so the demo is clear. There are subtle implementation differences between using Calico eBPF on the different platforms, but you can use it on Azure, you can use it on AKS, on EKS, and in many other ways.
B
So once you have this code that you can run in the kernel, it obviously makes sense for certain use cases, because there's a limit to what you can achieve: you are only given certain helper calls by the kernel. In order to keep you secure and make sure you don't do anything malicious, the kernel will only allow you to call certain helpers, and those helpers are aimed towards networking, logging, firewalling, debugging - those kinds of things.
B
So this is a perfect fit for Calico, because - if we pivot over to talking about Calico rather than eBPF for a second - we're essentially a networking component for your Kubernetes clusters, and, like most networking implementations, we're implemented as a control plane and a data plane, in the same way that, if you go back to a router or a switch, you will have a control plane and a data plane.
B
We did a Kubernetes Security and Observability Summit, and at that summit I gave a talk called "The Importance of Modularity in Data Planes". If you want to find out more about the data plane support in Calico, that talk would be useful. But for today, what you really need to know is that Calico uses a control plane and data plane architecture, separating out the control plane and the data plane, and we have several options for the data plane now.
B
So
the
control
plane's
job
is
to
manage
the
high
level
view
of
the
network,
for
example,
maybe
to
run
to
run
bgp
daemons
and
to
run
the
the
routing
protocols
that
that
have
the
whole
holistic
view
of
the
network
and
the
data
plane's
job
is
to
forward
the
user
traffic
and
to
to
do
so
quickly.
B
So for that reason, Calico was designed with modularity in mind from day one. I mean, I haven't been with Project Calico since day one, but I've spoken to people who have, and they knew from day one that they wanted this clear separation between the control plane and the data plane components. And because it was designed with a modular separation and a clear interface between the layers, it was very easy for Calico to implement multiple data planes.
B
So the original data plane that Calico supported was the Linux iptables data plane. We still support it, it's quite high performance, and it's battle-tested. We also support a Windows host networking data plane, and we support the Linux eBPF data plane.
B
That's because we wanted to take the advantages of Linux eBPF and apply them to our product without needing to throw away any of the hard implementation work that was done making the control plane stable, reusable and so on.
B
So we have those three data planes - Linux iptables, Windows host networking and Linux eBPF - but I should also mention we have a fourth data plane, Vector Packet Processing (VPP), which is amazing especially for high encryption performance. But we won't talk about those other data planes today; I just wanted you to be aware that they exist, and that that's the background for why we have this Linux eBPF data plane.
B
The Linux eBPF data plane is really fast and it uses less CPU, but there is also another big advantage, and this is where I'm going to have to use some slides. I promise I only have four slides, so I hope that's okay. I'll jump across to here - so this is the first one.
B
It's the data plane benchmark. Now, this benchmark was not done on AKS; the reason it wasn't is that we wanted to test 40-gig networking.
B
You can see that the performance is dramatically higher, right? I don't want to do too many slides, so let's move on - I don't want to dwell on that for too long. The other one is this one, and it's just the CPU usage; this slide is specifically for AKS on Azure.
B
You can see that for TCP pod-to-pod and pod-to-service, the CPU utilization doesn't change very much, but the UDP pod-to-pod and UDP pod-to-service CPU utilization is dramatically lower, which is great. So that's it for the benchmark slides. But there is another advantage to this data plane, which is pretty cool, and I'll demo it in a moment: in a Kubernetes cluster, as I'm sure you know, services are usually implemented by kube-proxy.
B
So if you have services in your cluster, it's kube-proxy that implements them. But if you replace kube-proxy with an eBPF data plane, you don't actually need to run kube-proxy anymore - the service functionality that is usually offered by kube-proxy can be offered instead by the data plane itself.
B
I'll demonstrate that in a minute. So this is how it looks without eBPF: your external client comes into a Kubernetes node and talks to a service, and you can see that the Kubernetes node running kube-proxy has to destination-NAT and source-NAT the traffic. It does that so that, in step two, it can forward the traffic on to the other node; then, in step three, the pod responds.
B
The pod never gets to see the original external client's source IP, and the return traffic has to go back through the other node. So this is a problem.
B
With eBPF, it does a destination NAT but not a source NAT, and that means that when the service pod - actually, I don't like the terminology "service pod" here - when the pod that is serving the content sees the traffic, it sees the real source IP of the user.
B
So, just to recap - and then that's the end of the slides - the reasons you would want the Linux eBPF data plane: you get great performance, lower CPU utilization, and you get this benefit of being able to see the source IP. So let's jump across and I'll do the demo, right.
B
Oh - is that readable, or should I go bigger?
B
There's potentially one small problem with my - oh, there we are; I need to learn to use my computer properly. Okay, that's better, right? Yeah, it's better. We might have a small problem with that, though, because of the tool I used. Are you familiar with asciinema?
B
Yeah, I use this tool called asciinema to record the demo so that, rather than having to wait while the slow parts run, we get to see it running quicker. But it can be a little bit fiddly with the terminal size, so we'll see how we go. Let's go.
B
So the first thing I'm doing is using the Azure CLI, and I'm turning on this feature flag in the container service namespace called EnableAKSWindowsCalico. This is just a feature flag to tell AKS that we want to use Windows Calico, and I believe it also changes the deployment model - it's just a step that needs to happen for this to work. Now, because I have already done this before, it immediately says state "Registered".
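The feature-flag step just described can be sketched with the Azure CLI roughly as follows. The flag and namespace names are as heard in the talk; preview flags change over time, so verify them against the current AKS documentation before relying on this.

```shell
# Register the preview feature flag for Calico on AKS, check its state,
# and re-register the resource provider so the flag takes effect.
az feature register \
  --namespace Microsoft.ContainerService \
  --name EnableAKSWindowsCalico

# Poll until this reports "Registered".
az feature list -o table --query \
  "[?contains(name, 'EnableAKSWindowsCalico')].{Name:name, State:properties.state}"

az provider register --namespace Microsoft.ContainerService
```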
B
You can run this other command, az feature list, and this will tell us when the registration has finished.
B
There we go - so you can see, this is just me confirming that it's registered.
B
And then the last part is that we just need to re-register the features - we need to re-register them with the resource provider.
B
So the first thing I do is use the Azure CLI again, and I create a service principal, which is essentially an identity for the service to run as.
B
So once I've done that, I've stored the service principal in a variable called sp, and the reason I've done that is because the output contains credentials.
B
So I don't want to share those credentials with the internet right now.
B
So we take the output and we grab the service principal id...
B
...and the password. All we've really done is take those two values out of that variable.
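The service-principal steps can be sketched like this. `az ad sp create-for-rbac` prints the credentials as JSON, which is why the output is captured in a variable rather than echoed; the sed extraction below is a dependency-free stand-in for jq and an illustration, not the exact command from the recording.

```shell
# Create a service principal and keep the JSON (it contains credentials)
# in a shell variable instead of printing it to the terminal.
sp=$(az ad sp create-for-rbac --skip-assignment -o json)

# Extract the two fields that cluster creation will need from the JSON.
sp_id=$(printf '%s' "$sp" | sed -n 's/.*"appId": *"\([^"]*\)".*/\1/p')
sp_password=$(printf '%s' "$sp" | sed -n 's/.*"password": *"\([^"]*\)".*/\1/p')
```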
B
So you can see that we created a resource group called live-demo-rg, and we put that in Canada East.
B
Now we get to the interesting stuff: we're actually creating the cluster, with az aks create. And because I was working from some old notes when I first did this - this is proof that it's a real demo - you can see that I specified Kubernetes 1.20.2, and you can see that it's saying 1.20.2 is no longer supported.
B
So that's no problem - I just run az aks get-versions.
B
It'll take a moment for this command to run. You can see what we're doing: we're specifying that we're creating a cluster, we want to put it in this resource group, we give the cluster a name, and we say we want two nodes in the cluster.
B
We specify the Kubernetes version - now, something funny happened there when it got rendered; I think it's because of the terminal size issue I mentioned, but I actually put 1.20.7, and you can see that the 7 appeared down here for some reason - and then we specify the service principal id, the client secret and the load balancer SKU.
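Pulled together, the cluster-creation step looks roughly like this. The resource names, region, node count and version come from the talk; the `--network-plugin`/`--network-policy` pair is the documented way to get Calico on AKS but isn't read out in the recording, so treat those flags as an assumption to verify.

```shell
# Create the resource group and then the two-node AKS cluster with
# Calico network policy, as described in the demo.
az group create --name live-demo-rg --location canadaeast

az aks create \
  --resource-group live-demo-rg \
  --name live-demo \
  --node-count 2 \
  --kubernetes-version 1.20.7 \
  --network-plugin azure \
  --network-policy calico \
  --service-principal "$sp_id" \
  --client-secret "$sp_password" \
  --load-balancer-sku standard
```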
B
Readable? Okay, let's go with that, then - hopefully we'll get fewer problems with the wrapping. Now, in real life, when you do this it will take maybe around seven or eight minutes, but because I'm running this through a recording tool it will be a bit quicker, so we don't need to wait five minutes. It will take about one minute, 90 seconds, something like that.
B
So there is something I should point out here, which is that, at this point, you can see that we're not specifying that the cluster is an eBPF cluster. That's because at this point we are not creating an eBPF cluster - we're creating a normal Calico Linux iptables cluster. What we'll do is, in a minute, we'll run a benchmark on it, and then, when we finish benchmarking it, we will convert the cluster to an eBPF cluster.
B
Yeah - and not only can you do this, but any existing flows should not be disrupted.
B
When you do this, if you have an existing TCP flow, it should remain on the old data plane until that flow terminates, and when that TCP flow terminates, any new TCP flow will go onto the new data plane.
B
However, even though that's possible - of course, we've all supported networks in production; whether I would do that in production, maybe not, but yes, in theory it's possible. So this is great, because it means that we can give you the right data plane today, but if, in three years' time, there's a new technology that is more suitable, you can switch to a new data plane.
B
So, while we finished talking about that, you can see that the command completed.
B
So it's given us the JSON for our cluster, so we're running the cluster now. And, like I said before, I've changed all of these ids so that they are not private, and this is a public key, so we don't need to worry. So we're now running a new Kubernetes cluster, and this cluster is running Calico, but it's not running eBPF yet. So I back up my kubeconfig.
B
For the cluster - I'm just realizing, if I move my window up a tiny bit, then we can get rid of that tiny banner that's blocking you from being able to see the bottom of the screen.
B
Okay, so if we look back at what we've done here - I think you probably missed this command, didn't you? We copied my kubeconfig away, just to back it up. Then we asked Azure for the credentials for this cluster, and now we can use kubectl get nodes and see our new nodes. You can see they've both been up for a short amount of time, on the version we requested.
B
So at this point we have a cluster running the Calico Linux iptables data plane. So let's run a quick benchmark.
B
So we're using this great tool called k8s-bench-suite, and you can see that we just specify the client node and the server node.
A
Chris, just to help our audience - maybe someone doesn't know what knb is. Could you give a tip about it?
B
Yeah, this is a great tool - this is a really great tool. You can find it if you search GitHub for k8s-bench-suite. This phrase here, knb - I actually assume it stands for Kubernetes network benchmark; it must do. All you need to do is give it a client node and a server node, and it will deploy a pod on the server and a pod on the client, and then it will run iperf.
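A knb run of the kind described might look like this. The node names are placeholders for the two AKS nodes, and the option names should be checked against the k8s-bench-suite README.

```shell
# Find the two node names, then point knb at one as client and one as
# server; it deploys iperf pods on each and prints a benchmark report.
kubectl get nodes -o name

./knb --verbose \
  --client-node aks-nodepool1-11111111-vmss000000 \
  --server-node aks-nodepool1-11111111-vmss000001
```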
B
So you'll see it detects the CPU, detects the kernel, and so on, and then it runs the tests, and then it gives you a really cool bit of benchmarking like this: pod-to-pod, pod-to-service. Now, because in this demo we're running on servers that have gigabit NICs, that's why we're not seeing 10 gig or anything here - we're not running on big instances. Cool.
B
There we go, right. So that's how we deploy. Just to wrap up where we are now: that was the process for deploying a Linux iptables Calico Kubernetes cluster. So if we jump on and move straight on, I'm going to show you the next part of it now. Do you remember, in that diagram, I showed the audience that with kube-proxy we lose the IP address of the external client?
B
I just want to demonstrate that quickly before we move on. So I deploy this useful tool called yaobank, which is just a simulated microservices deployment. It's very simple, and you can see that we're running a pretend database, a pretend customer service and two pretend summary services - it's like a three-tier microservices thing. So we just wait until that's running.
A
Chris, if someone wants to follow something similar to what you did, is there some place - your git, or a tutorial - that we can follow to try it?
B
Yes, absolutely. Specifically for the Azure case, there will be a blog post quite soon on the Project Calico blog, which will cover pretty much the same steps we're doing here. I'll just pause that for a second - hold on. And also on the blog you'll see there are lots of similar posts which tell you how to do this on AWS and how to do this on other clouds. Yeah, cool. So you can see that it's running now, so we create a load balancer service.
B
And here we go - we've got an external IP. So now I can curl it.
B
And you can see - this is the important thing, right - because we're not running the eBPF data plane yet, the Azure load balancer has done a source NAT, and the IP address that we're seeing here is an internal RFC 1918 NAT IP address.
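The source-IP check being described can be sketched as below. The service and deployment names are hypothetical stand-ins for the yaobank demo objects.

```shell
# Grab the service's external IP, hit it once, then look at what client
# address the serving pod logged. Without eBPF this shows a SNAT'd
# internal (RFC 1918) address rather than the real client IP.
external_ip=$(kubectl get svc customer \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl "http://${external_ip}/"
kubectl logs deploy/customer | tail
```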
B
So the first thing we do - I think this step will be going away very soon, or may have gone away already, but it's not doing any harm - is, essentially, we're patching the Installation resource and telling it what we want on each node.
B
So we can't have Calico talking to the Kubernetes API via a kube-proxy service, because kube-proxy won't be here anymore. That's why we have to do this next change. First of all, we find out, with kubectl, from a ConfigMap in kube-system, the address of the Kubernetes API.
B
Then we tell Calico that we want to talk directly to the Kubernetes API rather than to kube-proxy. I'm just wondering if something's gone wrong with the tool, or if I paused it by accident.
B
Never mind. So what we're going to have to do now - you get to see me do something live - is use this asciinema timing tool.
B
Okay, good. So, if you recall, I said that we wanted to get Calico to talk directly to the API, so we're applying a ConfigMap in the tigera-operator namespace, and this ConfigMap is telling the Tigera operator that deploys Calico the endpoint it wants to use. It's basically just saying: don't use the kube-proxy service, use the API directly.
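That ConfigMap step follows the pattern in Calico's eBPF install docs; a sketch is below, with the API server host as a placeholder you would substitute from your own cluster.

```shell
# Tell the Tigera operator (and hence calico-node) to reach the
# Kubernetes API server directly, not via the kube-proxy-backed
# "kubernetes" service. Replace the host with your API server's FQDN.
kubectl create configmap -n tigera-operator kubernetes-services-endpoint \
  --from-literal=KUBERNETES_SERVICE_HOST="<your-apiserver-fqdn>" \
  --from-literal=KUBERNETES_SERVICE_PORT="443"
```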
B
Now, I have a feeling as well that this has recently changed too - I have a feeling that the pod restart may not be necessary anymore, but I did it because I know it works. So I think it may be that, after we apply this YAML here, there's no restart needed.
B
But of course, all the nodes are Calico nodes, so essentially we're telling kube-proxy that it shouldn't run on any nodes.
B
And the last step is that we patch the Felix configuration.
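The two remaining steps - parking kube-proxy and flipping Felix into BPF mode - follow the Calico eBPF how-to and can be sketched as:

```shell
# Stop kube-proxy from scheduling anywhere by giving its DaemonSet a
# node selector that no node matches (the label used in Calico's docs).
kubectl patch ds -n kube-system kube-proxy --patch \
  '{"spec":{"template":{"spec":{"nodeSelector":{"non-calico":"true"}}}}}'

# Turn on the eBPF data plane in Felix, Calico's per-node agent.
kubectl patch felixconfiguration default --type merge \
  --patch '{"spec":{"bpfEnabled":true}}'
```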
B
Now, in a couple of weeks' time, I'll be doing a webinar where we do a deep dive into the eBPF packet flow, but today we're not doing that. So I just want to show you that eBPF is running, and I thought the easiest way to show that is: if we look at the logs for one of the calico-node pods and we grep for bpf, then we get loads of BPF output.
B
On the Tigera website there's an events section; it will be published on there soon. Actually, I don't think it's published yet - I believe the event is mid-July - but it will be published quite soon, and we have loads of great free events on there. So yeah, you know, I encourage you to go and take a look.
A
And recently at Tigera there is a certification, right?
B
Actually, we have - yeah, there are two certifications now. We have the level one Certified Calico Operator, which is a really great course; I actually did that course myself before I joined Tigera, and I really enjoyed it. And we also have a new AWS certification, a Calico AWS expert.
B
Those are both useful. So that's it: we've shown now that we've turned on eBPF, and now we can run the same benchmark that we ran before.
B
To be honest, the only reason this is happening quicker is because I changed the speed of the recording, but the point is that we will compare these results in a minute to the other results that we took. And then the last part that we want to show, before you look at the results, is if we go back to the logs.
B
You can see - this is quite funny, actually - when I recorded this demo this morning, I came back to look at the logs before hitting the site again myself, and you can see two bad guys on the internet have already been trying it.
B
Then I hit the website again, and now, if I look at the logs, you can see that you get my real public IP - although I actually edited this public IP, because I don't want to share it - but the point is, you get the public IP. So that's it. Just to recap, before we look at the benchmarks: we showed how the performance is better...
B
...we showed how to convert one data plane to the other, and we showed how you get source IP preservation.
A
Great, great, great presentation and amazing information. Calico and eBPF are open source projects, right?
A
I know that we have many different opportunities to contribute - like documentation, or maybe some kind of training, and so on - but how can you contribute to both projects?
B
A really good question. If you go to projectcalico.org, you will find a link there to the community website, and, as with most open source projects, you can get involved at whatever level you're comfortable with. If you just want to submit a docs PR and help us improve our documentation, that would be amazing and appreciated - we have some good open source documentation, but obviously we would love some contributions there.
B
We have community meetings and our Slack channel. Part of my role is to talk to users and understand where they're encountering challenges with our product, because no product is perfect, right? So even if you just want to contribute by coming to Slack and sharing your experience, that would be useful.
B
And then, if you have deep technical expertise in networking and Go, or you want to contribute to eBPF itself - Calico and eBPF are both public open source projects, so yeah, it's all there. I thought we should actually compare those benchmarks quickly before we run out of time.
B
We never actually did that. Let me just show it: if we pull up the eBPF results now - let's pull these up.
A
A good point. We can see better performance from the network using both. When you talk about a small implementation - very simple, a few microservices, low traffic - maybe you'll be comfortable with the default implementation. But when you go to a very large implementation, with many microservices and many transactions, you can feel a huge advantage in adopting this kind of change, using Calico and eBPF.
A
So, Chris, from your experience, what are the biggest implementations, what were the challenges, and were the results good enough to justify using Calico and eBPF?
B
So let me find out the answer to that one offline for you, because I don't want to give you the wrong answer, and I don't know. But what I can tell you is that one of the advantages you get when you replace kube-proxy - if we switch back to that slide again - is actually a latency reduction. And you can see this caveat here: it says "most noticeable with many short-lived, latency-sensitive applications".
B
So what's cool about this is that the latency reduction becomes more and more impactful the bigger your cluster is and the more short-lived connections you have. So although you get these immediate benefits from taking kube-proxy away, if you have a cluster with loads of short-lived sessions, you will see loads of benefit - the benefit becomes even bigger with a larger cluster. I was going to show you these as well, so you can see...
B
This is the standard data plane, and you can see that this is a gigabit NIC: it's doing 900 meg TCP and 800 meg UDP. And you can see that the eBPF data plane - because I changed tabs up here - is doing 900 meg TCP, just the same, and it's now doing nearly 900 meg UDP as well.
B
So the improvement here doesn't look as big as on the slide, but that's simply because this is a gigabit NIC, so you don't see the improvement as much. I could probably have done a demo on 40-gig bare metal, but I don't want to pay for it. So here we are. And the other benefit you can see is in terms of CPU utilization - you can see that... where is it?
A
Really amazing, really amazing - that's good, that's good! Oh Chris, it's amazing: a very strong presentation and demo, many things to learn and improve. I hope you can participate in the next deep dive, because it's really amazing. Thank you so much. We don't have more questions - well, we have one question - and I want to really thank you so much for this presentation today, live today.
B
Yeah, no worries at all, that's fine. I'll go and join the Slack channel - your CNCF Slack channel, cloud-native-live - and people are also welcome to join us in the Calico Users Slack channel, or just look at my Twitter or anything else. So thank you so much for taking the time.
A
Thank you, Chris, thank you. Thanks, everyone, for joining us today in this latest episode of Cloud Native Live. It was great to have you with us, Chris, talking about Calico - this amazing project - with eBPF: really amazing numbers, half the use of CPU, it's really amazing, and so much more performance, of course. We also really love the interaction - we don't have many questions, but... oh, yeah.
A
Well, I have one question here now: where can you find more about performance studies of different setups?
B
Probably the best place to look for that is the Project Calico blog. We are testing different scenarios on lots of different platforms, and we continue to publish performance stats for different clouds and platforms there.
B
So that's the best place to watch. And not only that: even if there are some particular results that you can't see there right now, we're publishing new ones every so often. And also, if there's a particular thing you'd like to see, come and tell us about it in Slack, and maybe we can prioritize that one.