From YouTube: OpenShift Coffee Break | Cloud Native Networking & Observability for OpenShift with eBPF

Description

Welcome back for another year of OpenShift.TV flavored coffee breaks!

The Cilium project disrupts traditional connectivity, security and observability in the hybrid/multi-cloud with the power of eBPF. Please join us to meet Raymond de Jong, Field CTO at Isovalent, and hear the story first hand and what this means for OpenShift operators and application owners.
A: Hey, welcome back to the OpenShift.TV coffee break show. Today we restart the season of the year with our great guest; I'm going to introduce him in a moment. First, let me introduce what this show is about. This show is about talking about cloud native architecture and OpenShift with our users, partners and customers, and the real reason for the show is to have coffee in the morning. So good morning, everyone, and welcome to Raymond de Jong, who is Field CTO at Isovalent. How are you, Raymond?
A: Morning, good morning. We're really excited to have you on the show, Raymond, because we're going to talk about some very hot technology that is really, really popular these days in the Kubernetes landscape, which is Cilium and eBPF. I'm sure we'll have the chance to go deeper into that, and also to understand how Cilium connects to OpenShift and what the use cases are. But I'll give you the chance to introduce yourself while I get my shot of coffee.
B: Cool, so yeah, good morning, everyone. My name is Raymond de Jong, I'm based in the Netherlands, and I am the Field CTO for the EMEA region at Isovalent. That means I'm meeting a lot of customers and potential new customers, and we are very interested in hearing their use cases to build our product and to make our product, based on eBPF, better.
A: That's great. Raymond, I have a question, just out of curiosity: what does a Field CTO do in the daily job?
B: Yeah, so what I'm doing a lot is customer introduction sessions, so also presenting about Cilium: how it works, and what difference and value we can bring for a given environment. Also understanding current customers and improving our product, and presenting a lot at events such as KubeCon or regional Kubernetes Community Days, for example, spreading the word about what Cilium is, how it works, what the new features are, and getting feedback from the field and bringing that feedback back into engineering to make our product better.
A: That sounds great, sounds great, and we'll have the chance today to understand better how Cilium works and what it is about. I see some people greeting in the chat; say hello to Sebastien Blanc and to Ibra, good morning everyone. Hey, if you have any question, please send it in the chat; I'll make sure to bring those questions to Raymond during the show. But please join us in the chat, let's make this as interactive as we can, and let's also share your experience with Cilium.
B: Yes, I have prepared a presentation to introduce Cilium and eBPF and talk about Cilium and Cilium Enterprise. So if you want, we can dive in there, and if we have time I'm happy to show a small demo of how Cilium runs on OpenShift, for example, and also show our Hubble Enterprise view for monitoring namespaces and traffic. I will pause in between to take any questions people may have on a specific section, so I'll let you know as well. So if you're ready, I'm happy to dive in and start with the slides.
A: We are ready, the slides are on. Thank you.
B: If we talk about Cilium, we always start with the open source, really, so Cilium, but we also have an Enterprise release of Cilium, and I will talk a bit about the difference today as well; and this is all powered by eBPF. I would like to start with our company story. The company was founded by Thomas Graf and Dan Wendlandt. Thomas is based in Switzerland; he worked at Red Hat, and actually contributed to the OpenShift environment.
B: He also worked at Cisco on the ACI platform, and Dan Wendlandt worked at Nicira, which actually became NSX at VMware. So they both have extensive experience in what software-defined networking is, and with that experience they are building the new technology for cloud native workloads. I also want to highlight Liz Rice; she's very well known as our Chief Open Source Officer. You will see her a lot at events as well, and she's writing an amazing book about eBPF and how it works, so keep an eye on that news as well.
B: These are our reference customers we can talk about, and some of these run in particular OpenShift environments, such as the fintech and banking ones. They typically run OpenShift environments with Cilium Enterprise on top to meet their requirements, which we are going to talk about today. We are funded and invested in by companies like Google, Cisco, and recently also Microsoft and Grafana.
B: We are working extensively in partnership with Grafana and Microsoft as well, to be the de facto CNI in the clouds, but also expanding on the capabilities we have in terms of metrics to feed Grafana, making sure that Grafana users get the best observability when using Cilium. Before I introduce the features of Cilium, I would like to briefly touch on eBPF: what it is and why it's different. eBPF is basically giving superpowers to the kernel.
B: What I like to say is: what JavaScript did to the browser, making your browser extensible, eBPF does to the kernel. What that means is that we extend the kernel and make it extensible without changing the kernel. Based on kernel events — syscall events, for example — we can attach eBPF programs and do things with them. Now, Cilium is very networking oriented.
B: That means that, for example, if you send a packet on the wire, that's a particular event we can attach an eBPF program to — for example, to extract metrics and report those metrics out to tools such as Grafana. And because eBPF runs in Linux kernel space, we're also very effective, very efficient and very powerful. So typically, when you run Cilium on OpenShift environments, or any Linux environment, the overhead is very, very minimal; if you only use the networking features, it's barely noticeable.
B: We are also maintainers of the eBPF Foundation; we also chair the foundation. So we have huge influence and knowledge on where this technology should move forward and what we need to do next to make it even better, and we're in the eBPF Foundation together with companies like Facebook, Google, Netflix and Microsoft.
B: So this is the high level of what we are capable of at Isovalent, with Cilium, using eBPF. There are four main topics. First of all, we have a sidecar-less service mesh solution based on eBPF. We provide observability in order to give you the right tools to secure and observe the traffic running on your clusters.
B: We provide powerful networking features, especially for enterprises on-prem and in hybrid or public clouds, and we also released a runtime security solution which not only observes the runtime but is also able to enforce security based on kernel events, live and synchronously. We are also supported as the default CNI in several places: for example, Google Cloud's Dataplane V2 is basically Cilium under the hood, Azure is moving their AKS clusters to Cilium by default, and we're partnering with Microsoft to make that work.
B: You can obviously run Cilium on-prem, on Red Hat Linux or on VMs, or run it on OpenShift or bare metal on any Linux distribution. And because of the partnership with Microsoft, we are also going to support Windows: Microsoft has made the effort to get eBPF supported in Windows kernels as well.
B: I also want to talk a bit about the customer journey and where we typically come in. Most of the time, typical customers start with Kubernetes environments and clusters: they need to provide the environment for application teams to onboard and develop their applications in a cloud native way. So this is an initial stage, where we have basic connectivity of pods and services, and at some stage you would most likely already have some container image scanning tool to make sure that every image you deploy on such an environment is secure.
B: We typically come in at stage two, where at some point the applications are getting onboarded and getting ready to go into production. Most of the time we're talking about on-prem or hybrid cloud environments, where we need to consider security and compliance. That's where Cilium is very powerful: we can provide and support zero trust networking, provide SIEM integration, export flows for forensics and compliance, and do transparent encryption for traffic in transit. Stage three is when we are moving to multi-cloud.
B: Maybe we're extending the Kubernetes data plane to external VMs; Cilium is also capable of extending a cluster running Kubernetes with, let's say, a VM or bare metal implementation where a database is running, by installing the agent there, and then you get this fully secure mesh of identity-aware security.
B: Typically we then need features like static egress IPs, for predictable IP routing into the fabric or into the clouds. Layer 7 observability and traffic steering become important, using either service mesh or layer 7 network policies; multi-cluster and multi-cloud, which we are going to talk about today as well; and connectivity to legacy infrastructure. Stage four is when we are also considering how to connect from the internet, using, for example, service mesh: how we should do canary rollouts, and how we should provide connectivity at layer 7.
A: Great, thanks. There's an opportunity to say it again: if you have any question, please send those questions in the chat; we have the chance to have Raymond today to answer them, and we will check later. But Raymond, everything so far has been super, super interesting, you know, with all those features, to be honest. Yes.
B: Let's dive into the features and see why they make a difference using Cilium. So typically, Kubernetes networking relies on kube-proxy and iptables with standard CNIs like Flannel or Calico, and that means that at scale this can become an issue, because the tables need to be maintained and updated, and at scale you have a lot of pods and a lot of churn: you're creating services, you're scaling applications out or down. All these operations mean that the iptables rules need to be updated, and beyond 40, 50 nodes...
B: ...this really can become an issue — and it's not a single change either, right: the whole table needs to be reloaded. How Cilium is different here is that we use a kube-proxy replacement based on eBPF. Our Cilium agent runs as a DaemonSet on each node, and when a node gets installed and initialized, we prepare that node to mount the right BPF programs. Each node also has BPF maps of, you know, the current services, the endpoints, the security enforcement, and each change as such is an atomic change.
B: So we have a per-CPU hash table, and we can dynamically update each change of each endpoint we add, without having to reload the whole table again, and at scale this is super powerful. We've seen customers go beyond thousands of nodes, with specific examples like Datadog reaching almost ten thousand nodes in a given cluster running Cilium. Also, typically with a CNI there's no load balancer included, right? So that means you need to have some kind of Ingress controller or external load balancer solution.
B: You may want to install MetalLB, or NGINX or HAProxy, or any other kind of solution — it can be a service mesh based solution — but Cilium out of the box provides load balancing capabilities based on eBPF, so we're extending on the services side there. We also support Maglev and direct server return, and Cilium can run not only distributed on a Kubernetes cluster but also standalone: we have some customers who use it as a high performance standalone load balancer running on bare metal, achieving line-rate performance using Cilium.
B: Another powerful network feature, especially in enterprise environments, is egress gateway; we have an open source egress gateway solution and an Enterprise solution. What it solves, I would say, is that typically, when you're running clusters in enterprise environments, you want to have predictable egress IP connectivity towards destinations such as mission-critical databases.
B: You want to ensure that only a specific namespace, with specific workloads or a specific label like a pod, is allowed to use a specific IP address to egress out of the cluster, through a perimeter firewall, into, let's say, a database server to query data out there. Egress gateway gives this ability: based on label selectors, such as namespace and pods, it provides and configures predictable IPs on the gateway nodes, and also provides HA using a gateway group, meaning that if any of the gateway nodes is failing...
B: ...the other gateway nodes can take over and provide highly available egress connectivity with predictable IP addressing towards that database. That helps firewall admins in enterprises or in the cloud to only allow a specific small set of IPs into their mission-critical databases and such. I'll pause here for a second: any questions so far on the networking features?
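For readers who want to see what this looks like in practice, here is a minimal sketch of such an egress gateway policy. It assumes the open source CiliumEgressGatewayPolicy CRD (Cilium 1.12+); the labels, CIDR and IP are hypothetical placeholders, and the Enterprise HA gateway-group feature uses its own resource.

```yaml
# Hypothetical sketch: route egress traffic from selected pods through a
# gateway node with a predictable source IP.
apiVersion: cilium.io/v2
kind: CiliumEgressGatewayPolicy
metadata:
  name: billing-to-database
spec:
  selectors:
    # Only pods with this label in the "prod" namespace use the gateway
    - podSelector:
        matchLabels:
          app: billing
          io.kubernetes.pod.namespace: prod
  # Traffic towards the database subnet behind the perimeter firewall
  destinationCIDRs:
    - "192.168.60.0/24"
  egressGateway:
    # Which node(s) act as the gateway
    nodeSelector:
      matchLabels:
        egress-gateway: "true"
    # The predictable source IP the firewall admin can allow-list
    egressIP: "10.20.30.40"
```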
A
No,
no
I'm
sure
I
see
the
The
Voice
security
I'm
sure
this
will
open
many
many
points.
Yeah.
B: Say a front end is trying to reach a back end. When it sends traffic, using eBPF and Cilium we are able to inspect whether that traffic is even allowed to leave the pod on that node, and at the destination we again use the identity to inspect that traffic and make a decision whether we allow it into the destination — and this is obviously driven through network policies, or Cilium network policies.
B: Beyond layer 3 and layer 4, Cilium is also able to inspect at layer 7. We call that API-aware authorization. That means we're not only allowing, let's say, a destination port, but we're also able to inspect traffic at layer 7 and, for example, only allow HTTP GET traffic on /public. This is super powerful, and it supports not only HTTP but also common applications such as gRPC, memcached, Kafka, Cassandra and others.
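As a concrete illustration of the HTTP GET /public example, a CiliumNetworkPolicy of roughly this shape would do it — the app labels and port here are hypothetical placeholders:

```yaml
# Hypothetical sketch: L7 API-aware rule allowing only GET /public
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-get-public
spec:
  endpointSelector:
    matchLabels:
      app: backend          # placeholder label for the protected service
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: frontend   # placeholder label for the allowed caller
      toPorts:
        - ports:
            - port: "8080"
              protocol: TCP
          rules:
            http:
              - method: "GET"
                path: "/public"   # everything else on this port is denied
```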
B: So this is really powerful in enterprise environments, to only allow a specific API call at layer 7. How this works — and this is a Cassandra example — is that you would create a Cilium network policy. A Cilium network policy is an extension CRD that we own and designed; it goes beyond the default Kubernetes network policies, which we also support, but it gives these layer 7 filtering capabilities.
B: So here we match the labels on the Cassandra pod, and we are not only allowing access to port 9042 on TCP, but we're also specifying that we only want to allow the layer 7 protocol Cassandra, and only allow a specific select action on the table "my table". So this is, I guess, super powerful for securing access, and this also works both egress and ingress.
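The Cassandra rule described here corresponds, roughly, to Cilium's (beta) generic L7 protocol parser; a sketch along the lines of the upstream Cassandra guide, with placeholder labels and table name:

```yaml
# Hypothetical sketch: allow only SELECT on one table over the Cassandra protocol
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: secure-cassandra
spec:
  endpointSelector:
    matchLabels:
      app: cassandra        # placeholder label on the Cassandra pods
  ingress:
    - toPorts:
        - ports:
            - port: "9042"  # CQL native transport port
              protocol: TCP
          rules:
            l7proto: cassandra
            l7:
              - query_action: "select"
                query_table: "mykeyspace.mytable"   # placeholder table
```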
B: Another very powerful security feature is that we have DNS-aware Cilium network policies. Again using eBPF, we are able to inspect traffic when pods do, for example, DNS resolutions for specific destinations, and you can then create Cilium network policies matching labels, but also only allowing access to a specific fully qualified domain name — in this case a wildcard, *.mydomain.io, on a specific port and protocol. This is super useful in environments where you have mission-critical data, for example in the cloud on object storage.
B: We all know that these kinds of IPs in cloud environments change all the time; you're unable to secure that access based on IPs. But using our DNS proxy solution, we're able to provide fully qualified domain name policies to only allow traffic to specific S3 buckets, for example. The capability makes sure that we maintain this FQDN cache, and it also means in enterprise environments that you can safely do a rollout restart, for example, of the Cilium agents, while maintaining access to this well-known object storage.
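A minimal sketch of the FQDN policy being described — the pod label is a placeholder, and the first rule lets the Cilium DNS proxy observe lookups via kube-dns so the second rule can match the resolved IPs:

```yaml
# Hypothetical sketch: egress allowed only to *.mydomain.io
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-mydomain-only
spec:
  endpointSelector:
    matchLabels:
      app: exporter                 # placeholder label
  egress:
    # Allow DNS lookups (inspected by the DNS proxy) for the allowed pattern
    - toEndpoints:
        - matchLabels:
            k8s:io.kubernetes.pod.namespace: kube-system
            k8s-app: kube-dns
      toPorts:
        - ports:
            - port: "53"
              protocol: ANY
          rules:
            dns:
              - matchPattern: "*.mydomain.io"
    # Allow traffic only to IPs that resolved from the allowed names
    - toFQDNs:
        - matchPattern: "*.mydomain.io"
      toPorts:
        - ports:
            - port: "443"
              protocol: TCP
```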
B: Any questions on security? We'll actually revisit security a bit with runtime security and how that works, but any questions so far?
A: Raymond, I have a question. I remember OpenShift had a custom egress network policy that was also filtering FQDNs, so you know, you could filter the egress traffic and say: oh, this hostname cannot be reached. But I remember there were also some CIDR blocks. Is it the same thing with the Cilium egress network policy? You can...
A
Okay,
okay,
cider
block
and
those
psyllium
Network
policies.
They
come
out
of
the
box
when
you
install
the
operator.
How
does
it
work?
Yes,.
B: Yeah, that's a good point, because you would like to secure an environment, but you need to know what to secure and what your flows are — and that's another benefit of using Cilium. So let's dive in there and have a look at how that looks.
B: The goal in enterprise environments, obviously, is most of the time to have zero trust enforcement, segmentation, multi-tenancy. But that means you need to be aware, and have a certain tool to observe that traffic and make informed decisions. So that's why we introduced Hubble. Hubble is our observability tool included with Cilium, and it comes with the Hubble UI, which gives you service dependency maps, shows you service-to-service communication, and allows you to view, for example, network policies in the Hubble Enterprise edition.
B: We also have a CLI, which is more for advanced troubleshooting — think of platform teams needing to troubleshoot connectivity for a given namespace in much detail. It has extensive filtering capabilities and also allows you to output to JSON, for example for exporting flows to SIEM platforms and such. And the Hubble metrics component is there to export metrics to tools like Prometheus and Grafana, but it can also be things like Elasticsearch and such.
B: This gives, you know, data to operations, being able to identify golden signals for your applications and making sure your platform is running as expected. How does Hubble operate with Cilium? It uses eBPF for all these metrics and observability. So when you open the UI and you are inspecting, let's say, at a namespace level — and I will show how that works in a demo — the Hubble UI component is querying the agents where the workloads are running, live, and using eBPF we're giving a live flow view of service-to-service communication.
B: So we have made Cilium identity-aware — remember that identity-based security I talked about before; this is also used in Hubble to give you this end-to-end, service-to-service visibility. That means we're not only looking at IP addresses, right? You may use traditional networking tools on your OpenShift clusters, or use AWS flow logs to inspect traffic, but you only see IPs — and IPs mean nothing.
B: So for SecOps teams we have additional features to be able to export all this identity-aware network and runtime visibility into analytics. You can use the Hubble UI to view it, but we can also export it, for compliance purposes, to SIEM platforms. You can inspect TLS versions, for example, to make sure you're compliant and using the right versions of TLS, or use the runtime security syscall enforcement to get alerted.
B: So this is a better picture of what I'm talking about. Say we need to do some forensics, inspect something, or make sure that you're compliant: we can actually show a process tree of the specific pods, with their specific containers, and even get this tree view of, let's say, a specific crawler process and see what it's connecting to. Here you see an example where this Node.js server.js is connecting to api.twitter.com, to the Elasticsearch service, and also connecting to the outside world for some reason.
B: We may not know why; that may be something you want to investigate. But even better: if there's a process, a reverse shell for example, attempting to be opened using netcat, we can also identify that process, and you can see the destination it tries to connect to, and you can obviously secure that access as well — this is also the runtime enforcement we'll talk about later. For application teams, we provide role-based access with Hubble Enterprise.
B: Golden signals with HTTP are available for Hubble Enterprise users, and also a built-in network policy editor, which is very powerful, because you don't necessarily need to learn how to write network policy YAML. This tool actually shows and helps you, using a visual UI, to design ingress and egress connectivity for specific workloads, and it creates the YAML template for you. You can just copy it and put it in your CI/CD pipeline to be applied, for example. It's also able to inspect current network policies.
B: So let's say you want to inspect the frontend pod to see what kind of network policies are enforced on that specific pod or service, and you may need to update it because you have an application change. You can see the current network policy and you can just append, using the tool, let's say a port, and then it creates this edited code for you; again you can download it and apply it in your environments. And finally, obviously, the Cilium Grafana integration using eBPF — we're partnering with Grafana.
B: We are extending the dashboards we already have, and this is not only looking at, you know, the cluster level — how we are operating at the node level for performance, how the operator is doing, how the kube API connectivity is going, how the Hubble UI is performing — but we're also extending to export metrics to Grafana, being able to export logs to Loki and traces to Tempo.
B: So you can use the full Grafana LGTM stack to monitor and operate your clusters, and this is very powerful for application teams, because they now can also see and inspect how the application is behaving, and use the golden signals to see how the service-to-service communication is and whether there are any latencies on, for example, specific pods on a specific node that they need to take a look at. Any questions there before...? Yes.
A: Yes, Raymond — we have two questions from people in the chat, and they are both related to service mesh. John is asking: in the Cilium service mesh, how is L7 traffic managed?
B: Yeah, so I'll talk a bit about service mesh later, but in short: the service mesh is eBPF powered where we can, and if we cannot use eBPF for a given purpose, we use a node-level proxy — that means we leverage Envoy technology. So in terms of layer 7: maybe you have some path-based rules, either through an Ingress resource or a Gateway API resource, and we're able to route that traffic on the node, without using sidecars, to the destination endpoints and services.
A: Okay, thanks. And we have another question from Le gigilon: is there any integration between Istio and Cilium, so we can configure security using eBPF capabilities?
B: So for service mesh we're not reinventing the wheel; we're not building a new control plane. We're considering whether we are going to support Istio-based resources to enforce security using Cilium, but that's not there yet. You can, however, use Istio as a control plane for service mesh — that makes sense.
B: There is this need to expand clusters, to connect them together, to create high availability across multiple zones or multiple data centers, either on-prem or in the cloud, and to create this mesh where we provide connectivity and security across workloads. Using Cluster Mesh, you can connect two or more clusters together to create a single data plane, with service discovery and load balancing across clusters, using identity-aware security and network policies across clusters. What I mean with that is that you can actually base decisions on the source cluster.
B: You can allow traffic — so beyond layer 7, we're also able to specify a source or destination cluster in network policies and secure traffic across clusters. In terms of observability, this provides multi-cluster observability: seeing, for example, pods connecting from cluster A to a pod in cluster B, as well as encryption in transit using either IPsec or WireGuard, and obviously routing and overlay networking across clusters. And it's independent of whether it's AWS, Google Cloud, Azure AKS, Alibaba, Red Hat OpenShift or VMware; it can also be on-prem.
B: It doesn't matter: as long as you meet the requirements, we can connect the clusters. You obviously need to set up some kind of connectivity from, let's say, on-prem to AWS, using a VPN or something else, but once you have that layer 3 connectivity, you're able to create this mesh of clusters.
B: What it means is that, using simple annotations on plain Service objects, you can specify that a service should be a global service, and then Cilium makes sure to configure that service in each cluster, in that same namespace, to have its endpoints advertised to remote clusters. So we have these highly available endpoints across clusters, and what it means is that when backends fail in a given cluster, we're able to fail over to backends in the remote cluster.
B: Another use case is shared services. You may have multi-tenant environments, or edge clusters, or environments where you don't necessarily want to spin up, let's say, a number of services again and again in each cluster. Instead, you want to have a shared services cluster, and using Cluster Mesh you can connect other clusters and expose shared services — such as a Vault service, in this example — to remote tenant clusters, meaning that you only need to create a service in your shared services cluster. You can have a stateless data store service connecting to the stateful cluster, where you have the actual data endpoints actually storing the data, and this is very powerful, because you can then recycle and upgrade all those stateless edge clusters very easily without having to create stateful environments again and again. Okay — we also introduced local service affinity. What this means, obviously, is that you want to avoid traffic across VPNs; you want to avoid that latency penalty.
B: You may have clusters across multiple clouds or hybrid cloud environments, so you may want to have this local service affinity, where you prefer local endpoints over remote endpoints unless they fail. So what this means is it will always load balance to local endpoints, and in case of failure it will fall back to remote endpoints.
B: How it looks is super simple: it's just on the Service. You just annotate it to be a global service, and in this case we also annotated it with service affinity local, and this will enforce local service affinity. The other way around is remote service affinity, by annotating remote.
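The annotations in question are regular Kubernetes Service annotations; a sketch using the Vault shared-service example from earlier — the selector and port are placeholders:

```yaml
# Hypothetical sketch: a Cluster Mesh global service preferring local endpoints
apiVersion: v1
kind: Service
metadata:
  name: vault
  annotations:
    service.cilium.io/global: "true"      # merge endpoints across meshed clusters
    service.cilium.io/affinity: "local"   # prefer local endpoints, fail over to remote
spec:
  selector:
    app: vault          # placeholder selector
  ports:
    - port: 8200        # placeholder port
      protocol: TCP
```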
B: Well, you could actually still use kube-proxy, but I would recommend using the eBPF kube-proxy replacement powered by Cilium, because it's so superior in terms of performance. So when you install Cilium, you can set up Cilium with, for example, the kube-proxy replacement setting on strict, which means that we basically replace the iptables implementation with the full eBPF-powered Cilium kube-proxy replacement.
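In Helm terms, the setting Raymond mentions looked roughly like this around the Cilium 1.12/1.13 era (newer releases renamed strict to true); the API-server host and port are placeholders you must point at your own control plane when kube-proxy is absent:

```yaml
# Hypothetical sketch: Helm values enabling the full eBPF kube-proxy replacement
kubeProxyReplacement: strict           # replace kube-proxy/iptables entirely
k8sServiceHost: api.example.internal   # placeholder API-server endpoint
k8sServicePort: 6443
```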
B: It's the eBPF-powered kube-proxy replacement, which means that on the nodes we maintain BPF maps. So let me explain a bit: when you create a service, each node will create a BPF map for that service and its related endpoints. So each node knows what the endpoints for that given service are, and this is maintained using eBPF maps. So when, let's say, on a node we receive traffic destined for this destination service and its endpoints...
A: Great — and this should avoid, you know, the overhead. That's understood.
A: When there are a lot of rules — lots of chains, let's say, in the iptables table — then the traffic is slower, right? So in this way there's an optimization. Do you have any number, let's say: hey, it's faster by 20 or fifty percent? I don't know.
B
I,
don't
have
a
percentage
like
number.
What
I
do
have
is
like
I,
think
I
said
it
before.
We
typically
see
customers
running
clusters
with
about
40
nodes.
Obviously
this
is
a
ballpark.
Normally
depends
on
how
much
surfaces
you
have
how
dense,
how
density,
how
much
density
you
have
on
a
per
node
level
for
end
points
and
surfaces,
but
at
that
level
approximately
they
will
see.
B: You're welcome. Now we're going to the service mesh topic. This is obviously super interesting, also in terms of Ingress connectivity and service connectivity for your applications. With Cilium we provide an eBPF supercharged, sidecar-free service mesh. So let's have a look at what that means.
B: We introduced Cilium Service Mesh for all these use cases, and to summarize: we want to support all the control planes — we're not creating a new control plane. We support the default Ingress and Gateway APIs, we support Envoy CRDs, and we're working on SPIFFE integration and Istio integration. In terms of observability, we leverage what we already support, right? We already have these rich metrics.
B: You already have this service connectivity, and the capability of, for example, using Cluster Mesh as well, in combination with service mesh, to route traffic to a service — and that service can be a Cluster Mesh enabled global service routing traffic across clusters. We leverage the metrics and observability capabilities we have with Hubble. You can also use OpenTelemetry to inspect traces and spans and see where latency is, and use those same platforms to inspect your compliance, inspect your alerts, and make sure your environment is secure.
B: So when you start with Cilium Service Mesh, you basically have two options. Either you start new — you don't have a service mesh implementation yet — and then you have the power of starting fresh with an eBPF-native, sidecar-free service mesh. Or you may already have an Envoy-based implementation, an existing service mesh implementation, perhaps Istio or something else, and in a way you already benefit if you use Cilium, because in the sidecar implementation, the connectivity between the sidecar and the destination pod is actually going through...
B: When you run Cilium, we already shortcut that connectivity at layer 4, on the socket layer, directly to the destination pod, so you are not able to inspect that virtual interface on the wire to eavesdrop on the traffic, if you like. So that's already a huge benefit if you have existing implementations with Cilium. We also wanted to solve the cost of sidecar injection, and this is twofold.
B: First of all, obviously, the latency piece: every time you need to send traffic from, to and through a sidecar proxy, you basically go three times through the TCP/IP stack, right? An application sends a packet to a destination: it goes through its own TCP/IP stack, it's received through the sidecar proxy's TCP/IP stack, which does its magic, and then the packet is sent again through the TCP/IP stack onto the actual physical network towards the destination — and the same process is obviously done again on the destination side.
B: This also means that when you create or change your service mesh, you need to wait for the sidecars to be created, and that induces latency, meaning the connectivity cannot be served immediately. With our sidecar-less implementation using eBPF, when you make changes, the service mesh is already there at the node level.
B: So it's just a matter of understanding the service endpoints using BPF, and we are immediately able to provide that connectivity. And there's obviously also the overhead problem, where each pod has a sidecar, and at scale you have a lot of sidecars: they all have to maintain TCP connections, which costs processing time, and they all take memory and CPU. Bringing this back into the kernel, back onto the nodes, removes a lot of overhead in your clusters by not having the sidecars.
B: So what we've done here is integrate Envoy on the node, in the kernel stack, so we already provide separation of namespaces. We can leverage security through cgroups: each namespace will have a separate listener and has separation in terms of cgroups at the node level. And for the service mesh using eBPF, we are able to increase the performance, reduce that latency, and export the metrics — being able to provide golden metrics for your environment — and there's no need to stop and start Envoy on the node when you update any service in your service mesh implementation.
B: What that means is: wherever possible, we use the eBPF implementation. Today we do all the layer 3 / layer 4 canary and topology-aware routing and multi-cluster traffic management using eBPF, based on the services technology. We enforce security with network policies using eBPF, mTLS, and obviously all the observability capabilities, such as tracing, OpenTelemetry, metrics, and all the inspection of HTTP, TLS, DNS, TCP and UDP. Where eBPF cannot do it today...
B: ...we fall back to the layer 7 node-level proxy, and there we can do the advanced layer 7 traffic management: path-based routing, canary rollouts — sorry, percentage-based routing — Gateway API and advanced Cilium Envoy configs, as well as retries, layer 7 rate limiting, and TLS termination and/or origination.
B: Gateway API is obviously coming in Cilium 1.13. This is super powerful for extending the capabilities we want to use. So Ingress is relatively simple to use, but it also lacks a few capabilities; the goal is to use the Gateway API capabilities for all the advanced layer 7 path-based routing and such. And today, if you want to do advanced use cases, you can also use the Envoy config resources, but these are really meant for power users who need specific resources and configurations that are not possible in Gateway API today.
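For the canary/percentage-based routing use case mentioned above, Gateway API expresses it with weighted backends; a sketch with placeholder names, assuming a Gateway backed by Cilium's gateway class:

```yaml
# Hypothetical sketch: 90/10 canary split via Gateway API
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: canary-rollout
spec:
  parentRefs:
    - name: my-gateway        # placeholder Gateway using the cilium gatewayClass
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /api
      backendRefs:
        - name: backend-stable
          port: 8080
          weight: 90          # 90% of traffic stays on stable
        - name: backend-canary
          port: 8080
          weight: 10          # 10% goes to the canary
```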
A: We don't have any particular question from the chat, Raymond, but this was super interesting, and lots of screenshots — because, you know, this Envoy integration inside, this is something really particular, really an optimization. Do you already see — do you already have data that shows evidence that this is more performant, that this is good, this is better?
B
Yes,
yes,
absolutely
I,
I
didn't
include
those
slides,
but
what
I
can
say
it's
in
terms
of
latency
and
throughput
we're
seeing
a
performance
of
three
to
four
times
compared
to
sidecar
based
implementation,
wow.
B: We have an Enterprise and an open source version here as well, and what it does is: it is not only able to observe security based on syscall events, it's also able to enforce security, preventing runtime events or things happening in the cluster that you don't want to happen. And this can be different things, right?
B: This can be function calls, code execution, process execution, file access — for example, trying to open and edit a file — namespace escapes, privilege escalations, data access to storage, and obviously the networking part we already know, for HTTP, DNS and TLS. This extends the capabilities we already brought in Cilium — you know, identifying a process in a pod, where it connects to, and whether it's allowed or not — and Tetragon brings a lot more capabilities there.
B: It runs as an agent, and it's also able to export all that data — all those events, alerts and security compliance information — to platforms such as a SIEM as well, and you can use metrics with Grafana and inspect logs there as well. So it complements the networking security with runtime security and observability.
B: What is different here as well is that typical implementations, such as Falco, inspect the kernel using either an eBPF-powered probe or a kernel probe, and what happens there is asynchronous. What that means is that when, for example, someone in a specific pod with running code wants to do an exploit, a malicious attempt, that will be an event where the kernel probe is triggered, and then an asynchronous notification goes out so something can be done with it.
B: Tetragon is different here, because — since we're using eBPF in the kernel — when we see such syscall events and we have enforced that security using tracing policies, we can immediately do that SIGKILL synchronously, while they try that exploit or malicious attempt.
B: So this is an example: someone trying to open a shadow file and write to a shadow file. In this case we are also monitoring, for example, the authorized keys file.
B: This is a small snippet of a tracing policy to enforce the security, and what you can see here in the logs is that someone tries to write that file and we immediately kill that process. We're also able to, for example, see namespace escalations, so we're also not dependent on whether a process is running as root. So maybe someone has got that escalation of privileges and tries to open the password file, for example, and tries to update it — we are also able to kill that instantly as well. Any questions on Tetragon?
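The "small snippet" of a tracing policy is not reproduced in the transcript; the following is a rough sketch in the spirit of Tetragon's documented file-access examples — the hook name, argument indices and access-mask value are assumptions to verify against the Tetragon docs for your kernel:

```yaml
# Hypothetical sketch: SIGKILL any process that tries to write /etc/shadow
apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: kill-shadow-writers
spec:
  kprobes:
    - call: "security_file_permission"   # LSM hook covering file access
      syscall: false
      args:
        - index: 0
          type: "file"                   # the file being accessed
        - index: 1
          type: "int"                    # requested access mask
      selectors:
        - matchArgs:
            - index: 0
              operator: "Equal"
              values:
                - "/etc/shadow"
            - index: 1
              operator: "Equal"
              values:
                - "2"                    # MAY_WRITE (assumed value)
          matchActions:
            - action: Sigkill            # kill synchronously, before the write lands
```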
A: No question — just Anders is saying: looking forward to 1.13, it will be great to start implementing the service mesh. Yes.
B: Yes, and we're happy to help and support you where we can as well. On that topic: we are actually actively working with some of our customers who are already starting with service mesh, to replace, for example, their F5 external load balancing solutions — to move all those capabilities closer to the Kubernetes workloads and also be more agile, using all the Gateway API and Ingress resources with a CI/CD pipeline, GitOps kind of way of working, to make it better suited for their workloads and applications running on Cilium, I should say.
B: Okay, so I have prepared a little demo to show, for example, this observability live. I assume you can still see my screen? Can you see my screen, Michelle? So I've opened K9s. What I wanted to explain here is how Cilium runs on OpenShift. You can see I've installed this OpenShift cluster, and we have an operator — in this case we have Cilium Enterprise installed, so we have an Enterprise operator — and what this does is constantly reconcile the environment based on the CiliumConfig.
B: Yes, this is the one. So this is basically the sort of Helm values file I've applied on install, and this Cilium config is constantly read by the operator, which reconciles the cluster where needed. So if I do, for example, an update to my Cilium configuration — let's say I want to add some metrics or delete some metrics here — once I've done that, the operator reads that file, reconciles the cluster and, if needed, restarts the Cilium agents on the nodes to get that running configuration.
B: You can read all about how this is configured in the documentation. You also see the kube-proxy replacement is strict: this means we install Cilium with the full eBPF-powered kube-proxy replacement on OpenShift. We have metrics enabled, and this is running Cilium Enterprise, which means that we also have Hubble Enterprise enabled and the Hubble UI enabled. The Hubble UI flag makes sure that we have access to a Hubble UI, so we can inspect service-to-service connectivity.
B: The Hubble Enterprise piece gives you the process visibility on a given cluster. So let's have a look here: this is the Hubble UI. This is a demo environment, and I've logged in with my Isovalent account — we use Okta in this case — and I only have access to the namespaces I own. I have two demo apps installed here, so I can now inspect, for example, the microservices application. What we can see here is that, obviously, there's world-to-frontend connectivity, but also the front end is connecting to the checkout service, the checkout service is connected to the cart service, the cart service to the Redis cart, etc. So this gives me the service-to-service connectivity map of my application. I also see outbound connectivity; here we see that there is some external connectivity — this is obviously to kube-system.
B: So this is very powerful for application teams and platform teams to understand the connectivity of the components in your applications. Generally, we recommend starting with namespace security: you want to secure access to a namespace (ingress) and access from a namespace (egress), and then create further network policies to allow specific services to only connect to other services on a specific port and protocol.
B: What I also want to highlight here is this live flow view: we can see POST and GET calls, layer 7 information, so we can see what API calls are being made. You can see the verdict — whether it's allowed, forwarded or blocked. I don't have a blocked example right now, but if there is one, it will be highlighted. So this gives you a live view of what's going on right now. We also have a solution called Timescape.
B: This allows you to disable the live view and look at, let's say, yesterday — what happened between 2 and 3 A.M. This can help you do forensics if needed. Thank you. And let's have a look at the network policy side: this is actually the currently applied network policy. I'm looking at the ad service, and on the left I can see all the currently applied network policies.
B: So, for example, I can double-click on the HTTP ingress visibility policy, and this gives me the currently applied Cilium network policy, and I can inspect it. I can see that it has this ingress rule to ports — a list of ports on HTTP — so it allows all HTTP traffic on all these destination ports. And let's say I want to change that, right?
B
I
want
to
not
not
I,
don't
need
this
50
50
051,
Port
I
just
delete
it,
I
save
it,
and
you
can
see
that
this
network
policy
is
being
changed
life
for
me,
and
this
allowed
me
to
without
understanding
how
to
write
Network
policies.
I
just
can
download
this
file.
Save
it
push
that
code
through
my
cicd
pipeline
to
have
it
applied
in
my
environment.
So
this
is
a
very
powerful
tool
for
application
and
security
and
and
platform
teams
to
secure
their
namespaces.
A
Hey
Raymond,
that
was
impressive,
I
really
like
this
level
of
observability
or
did
a
really
granular
detailing
and
I
think
this
is
really
a
great
integration
again
talking
about
our
partnership,
there's
a
certified
operator
from
operator
Hub
inside
openshift.
So
if
you
want
to
add
psyllium,
you
can
add
by
via
the
operator-
and
this
is
the
supported
one
right.
B: Yes, this is the supported one. You can start with the open source, but typically at some point enterprises reach out to us to have the Enterprise operator as well, with all the added features such as I showed just now in the demo — that's Hubble Enterprise. And what that means is that Isovalent also gives support for Cilium: we are able to help you with onboarding Cilium, to get the right values and configure it right.
A
Sounds
great
sounds
great,
we're
going
at
the
end
of
the
show,
so
those
are
the
kind
of
follow-up
right.
A
If
you,
the
people
would
like
to
get
more
about
psyllium,
they
can
go
on
celio.io,
which
is
the
official
website
and
to
discover
the
project
and
also,
if
you
want
to
know
more
about
ebpf,
you
can
go
to
ebpf.io
I'm,
putting
in
the
chat
also
a
really
nice
blog
post
from
from
you
folks,
Isa
violent
and
on
how
to
use
Stadium
on
openshift
yeah,
and
there
was
also
a
very
nice
interview
in
our
technically
speaking
web
web
show,
with
our
CTO
Chris,
Wright
and
and
Liz
rice.
A
So,
if
you're
interested
on
knowing
also
from
there,
let
me
let
me
share
quickly
in
the
chat
the
link
to
the
this
show
and
hey
Raymond.
That
was
really
great.
I.
Think
everyone
enjoyed
this
really
deep
dive
on
psyllium,
eppf
and
office
ability
security.
It's
really
great.
Do
you
have
any
final,
ending
closing
words.
B
Yes,
yes,
just
one
I
would
like
to
highlight.
If
you
want
to
try
out
psyllium,
we
have
a
number
of
Labs
on
isovator.com
forward,
slash
Labs,
you
can
try
out
and
they
are
built
with
different
for
different
use
cases.
Maybe
you
want
to
explore
cluster
mesh,
we
have
a
cluster
mesh
lab
or
maybe
you
want
to
explore
the
network
policies
part.
B
We
have
a
network
policy
slab
for
that
and
on
a
number
of
labs
you
can
also
earn
batches
when
you
complete
the
lab-
and
these
are
great
because
the
these
are
based
on
instruct
technology,
meaning
that
you're
able
to
get
a
live
environment.
You
go
through
a
guided
lab
and
you
can
also
see
how
it
works
under
the
hoods.
So
I
really
recommend
that
if
you
want
to
know
more.
A
That's
great,
let
me
link
in
the
chat
and
Raymond.
We
also
have
instruct
Labs
on
for
openshift.
Maybe
we
can
create
an
openshift
Labs
with
psyllium.
A: Folks, a quick reminder: the show is coming back bi-weekly, but we won't see each other February 1st; we'll see each other February 8th, with an update on quantum computing with OpenShift, together with our friends at IBM Research. Thank you, everyone, for being with us today. Thank you, Raymond — and go try Cilium now, please.