From YouTube: TGI Kubernetes 103: Cilium: A Second Look

Description:
Cilium has done some really cool stuff since @krisnova covered the project in 2018. Join me this week as I pick up the torch and play with things like CRD integration, CNI chaining, kube-proxy replacement, and Hubble! Maybe we even try the 1.7-rc!?

00:00:00 - Welcome to TGIK
00:04:45 - Week in Review
00:11:21 - Agenda Overview and Topic Selection
00:15:08 - Deploying without kube-proxy (and with CRD backend)
01:00:56 - Deploying and using Hubble
01:36:53 - CNI-Chaining
Hey, what's going on everybody, happy Friday! Welcome to TGIK episode 103. I am so psyched to be back talking to you all today. Welcome, welcome Martin from the Netherlands, welcome in Madi. Welcome back Rory, welcome back as well. Hopefully, if I say anyone's name wrong, feel free to correct me. Suresh from Hamburg, welcome, welcome. Peter from Sierra Nevada, Spain; cool, the Sierra Nevadas in Spain, kind of like the Sierra Nevada mountains in the States, a mountainous, mountainous place. Mike from Michigan, welcome. Welcome, welcome, Muzaffar from Dubai. Welcome, Waleed from Saudi, welcome. Welcome, Larry from Johannesburg, awesome. Welcome, Larry, glad to have you. Oliver, hey, how you doing, from France. Steve from good old VMware, glad to see you, buddy. Tim, how's it going, and Keith from (kind of) Ireland. Welcome! Welcome, Keith, all right! Taylor, hey Taylor, nice to see you. Taylor and I go back, along with Steve, to our Heptio days. And hey, welcome back, Joy from Richmond. What's going on, everybody!
So if it's your first time joining us at TGIK, you are welcome to hop in the chat and say hey, and where you're signing in from. Obviously it's optional, but feel free to say hello, whether you're joining us for the first time or recurring. Let's get right into a little bit about the topic of this episode, so I will switch over here to that window.
All right, and you don't need to see me; there we go, you want to see that, all right, cool. All right, everyone, so in episode 103 we're gonna be taking a second look at Cilium, the container networking plugin for Kubernetes. So really, really psyched to look at this with you all. I've been, I guess you could say, following the Cilium project to some degree. You know, I'll admit the last time I really looked closely at it and played with it was probably when Kris did an episode.
All right, who else we got joining us? Samir, welcome. Waleed, okay, cool, cool. Yeah, and Patrick, welcome from Prague, awesome. Arnaud from France, welcome. Glad to have you all, glad to have you all. All right, and I was also told some folks from the Cilium group, people working on Cilium, might be joining us as well. So if we do have anyone from Cilium in the chat, feel free to call yourself out; that would be really cool. All right, so let's get into a little bit about the Week in Review.
So again, for those of you who might be a little bit newer: the first thing we kick these episodes off with is a bit of a review of what's going on in Kubernetes, so we'll roll through that first, then get through some content. So some core things just to be aware of, for those of you contributing to the project: 1.18 is now in enhancement freeze, so just be advised if you were working on enhancements.
Some information on that is in our HackMD as well. Really exciting: the schedule for KubeCon + CloudNativeCon has been published. So first off, you know, I know a lot of work goes into writing CFPs, and I know it's pretty hard to get your CFP accepted. So if you wrote something up, good work. If you got in, I'd be curious: did anyone in chat by chance get a CFP accepted? Is anyone here gonna be talking at KubeCon?
That would be really cool, but regardless, great work for everyone who took the time and energy to submit CFPs. You know, keep an eye out. I don't know if they're doing Rejekts this conference; I'm guessing they probably will, but that might be something to look into. Or maybe even, you know, turning your CFP into a blog post and sharing it with the Kubernetes community, or something like that, could be a really cool exercise to get that knowledge out there. Cool. And then, last thing in core: etcd got some Jepsen results done.
So if you're curious about what the results were, feel free to check this out. It seemed like some of the things were pretty good, but they did identify some issues, like locking, and some things around documentation. So a really cool, in-depth review; check it out when you get a chance. Again, that's the etcd Jepsen results. Cool. And before we talk about ecosystem, let's see who else is joining us here. So I left off on... oh, Joe from Cilium, awesome. Joe, thanks for joining us, and we've got Dan as well.
Welcome, Dan, really psyched to have you all here. Mattia; we already said Maddie, of course. Tim, welcome. Timmy, glad to see you; Timmy's a teammate of mine. Sabin from Romania, awesome, welcome, welcome. All right, and George comes through with the Cloud Native Rejekts link. So if anybody did not have a CFP accepted, you should check that out, and hopefully we'll all get to meet up there and talk about some cool Kubernetes stuff. I submitted some CFPs, but they didn't get accepted.
But the good news is, if you haven't heard, Duffie's CFP got accepted, one of the hosts of this channel. So you should check that out. Maybe George or someone can put the link in. I haven't looked at what his talk is yet, but I know it got accepted, so I'm very, very stoked for him. All right, ecosystem. Let's talk about what's going on in the world of Kubernetes. There was an interesting post, I believe this one is from Grafana, on GCP. Yeah, a GCP persistent disk incident that snowballed into an outage. I'll...
...let you kind of look this one through and read it. But give it some time, and give it a little bit of your energy to kind of read and look through. There are kind of two things I want to say. One is: it really speaks to, you know, the crux of the problem, well, I think, and the resolution they came by, and how that leads into other things.
But the second thing I wanted to say is, you know, I really appreciate when companies or people come out and really talk about these types of stories with Kubernetes, because we all know running Kubernetes isn't always the easiest thing. So for those who come out and kind of share these retros: it's extremely valuable for everyone. So thanks so much to the Grafana team for doing that. And what else do we got going on? Oh, a lighthearted one that I think is pretty hilarious: there's a dad jokes microservice.
So if you're looking at different ways to build containers, especially in a multi-tenant Kube cluster, this could be a really cool post for you to check out and get some perspective on your options. All right, and then the last thing: a really cool blog here, and this is a blog that, if I remember George saying correctly, he kind of stumbled upon. This particular post is about implementing container runtime shims, but I...
...remember George in particular saying that a lot of the posts on this blog are extremely well-written and go pretty deep. So give this blog a check-out. Maybe... I don't know if he has a subscribe link up here; let's see if there's anything, but it might be worth checking this out and, if you like the content, adding it to your favorites here. Trying to see where his name is... let's see here, let's go to About, make sure we give him a call-out. Ivan, okay, Ivan. So check out...
...Ivan's blog. It has some really cool stuff, and, you know, getting in the weeds like implementing a container runtime shim is no joke. So for those of you who want to dive under the hood a bit, this could be a really cool blog for you to check out. All right, cool, everyone, all right. Let's go ahead and check one more time. Right, Ian, welcome in, glad you're here from Minneapolis. Congratulations on getting your CFP accepted!
That is super, super awesome. Ian and Duffie have given some killer talks in the past around Kube security that will scare you, but are really, really educational and neat, so be sure to check those out if you haven't already. Cool, all right, cool, cool, all right. I think we got everyone so far, so let's go ahead and get right into it all.
So there are four things that I am primarily psyched to learn about today. And, like I said, I haven't looked at Cilium in a while, so we have some Cilium folks in chat. We're gonna be playing around with these things and seeing if we can all learn about them together, so we'll make some mistakes and try to correct them, surely. So let's break down some of these four things, and I want to ask you all what you want to focus on first. So I'm attempting diagramming again; let's see if we can do this. It's gonna look like a five-year-old drew it; actually, that's kind of insulting to five-year-olds.
So this is one thing: we're gonna talk a bit about kube-proxy replacement. So there's a style in which you can deploy Cilium, as I understand it, that will completely replace kube-proxy. Another thing that we're gonna try to check out is the CRD backend feature set. So it seems like Cilium has followed a similar progression to a lot of CNI plugins, where they started with this model of using a key-value store like etcd, and I think at some time they supported Consul back in the day.
Maybe they still do, where you would use that key-value store for configuration and various things. But it looks like they've got a more advanced answer for those who want to use a CRD backend, which is just a really fancy way of saying: hey, I am going to go in and use the Kubernetes API server to do X, Y, and Z for configuration and other data. Also, something that I learned about at KubeCon in North America is Hubble, which appears to be a way to get information about...
...what you should route to, or their network policy model, and things like that. So I think the premise, as I understand it (and obviously I can't spell it; you'd be amazed how hard it is to write when you know people are watching you), is to give you that ability. So CNI chaining would be the idea of running Cilium alongside one of those other CNI plugins and having it do some of the cool work that it's capable of doing. So that is really exciting. All right, so here's, here's...
...what I want you all to help me do: let's figure out which of these things we should look at first, kind of the order, if you will. So the kube-proxy replacement and the CRD backend: I'm going to group those together, because I think if we deploy Cilium, we might as well do it with the kube-proxy replacement and the CRD backend kind of together. Then there's Hubble, which is, again, kind of a visual layer and troubleshooting layer, and then there's CNI chaining. So what I want from you, chat...
Dan validated: yep, it supports etcd and Consul key-value stores, but you can use CRDs as your backend method. Cool, so we'll get deeper into that. Dan, maybe you can help us understand some of the trade-offs with the different directions. Alex is from North Carolina, cool, cool. Right, some votes are coming in. "C sounds cool, curious about debugging CNI chaining config": cool, Maddie.
It makes sense to me, it makes sense to me. Okay, Larry, you put a couple of A's, and I dig that. A couple, a couple votes, okay. So I think the trend is, and I'm kind of into this because I think it will give us a good flow: okay, we'll start with A, which will be kind of deploying Cilium in this kube-proxy-replaced mode and using the CRD backend. On top of that, we're gonna then try to set up Hubble if we can, and then do some CNI chaining as well.
What do you say? Does that sound good, everyone? Let's go ahead and get right into it, all right? So, all right, some funny things that I kind of want to preface this with. As I was looking through, you know, how some of these different things are set up, the first thing that I knew we would need for today is machines that we can use, so I took a picture of the lab...
...that's gonna be running our workloads today, and I thought you all would get a kick out of looking at that real quick. So this is the machine that is running this very advanced setup today. I just want you all to quickly appreciate how elegant my home lab setup is. Right here you can see the R720 that we're going to be running the VMs on top of; we've got a backup KVM server right here. And in particular, you'll notice how good the cable management is, right? It's pretty, pretty amazing. And in particular, you might also notice how amazing the power system is as well. So I'm not being sarcastic: if we lose our servers today, we'll know why, because my home lab is a mess. But across from us right now is where we're gonna be running all this good stuff. So I hope you can appreciate the chaos that is my lab that we're gonna be running on. All right, so let's talk a little bit about the kube-proxy pieces.
Okay, so, all right, what I'm gonna do to kind of show off the kube-proxy bits is: I've got a cluster deployed right now. So I don't have a Cilium cluster deployed per se, but I have a cluster running Calico as a CNI. We're not focusing on Calico; it's just to show kind of what the default kube-proxy modes would look like. And I think the important things for us to keep in mind with kube-proxy, for those of us who are getting into this stuff, is that kube-proxy, right...
So kube-proxy has two modes. Or actually, let's talk about the concerns kube-proxy handles. So for the most part, kube-proxy is focused on letting us do Services, right? So we know that when we go in and we request an IP address, if it's a cluster IP, that's usually going to have a bunch of pods backing that IP address, and kube-proxy implements these Services, typically with two different modes.
One of those is iptables, okay, and then one of those is IPVS, all right. Now, to kind of show you all a quick look at this setup, let's go to my Calico cluster, and I'll zoom in here to try to make it a little bit less obnoxious. Let's exit out of some of these windows; I was literally just setting up these clusters before we started here, so get rid of these, good, good. Okay, that should be good, all right.
So let's see if I've still got my iptables command here. All right, cool, so that should be the first one. Okay, so I'm running a cluster, which you all see right here, that is running Calico as a CNI; but actually Calico's got nothing to do with this, for the most part. What you should know is that this cluster is running kube-proxy, which, as I've added Services to my cluster, has gone in (it's just in the default iptables mode) and written a bunch of rules to my host.
So this is one of the workers, worker 0, and it's written rules to this host, right? So you can see all these different rules inside of here. Now, at the expense of it being harder for you to read, I'm gonna pull these back a little bit, and I can see some interesting ones. Like, let's find CoreDNS, since that's a really common one. So CoreDNS, which would be under the kube-dns Service; where's the Service at? So there it is, okay, cool.
So this is, arguably, the Service that CoreDNS is running under, more or less, right? And in this cluster I'm running it highly available, to a degree, which means I have two instances running. So kube-proxy has written these rules in place, and you can see they're there for UDP, because, you know, it's DNS and I'm running in UDP mode by default. If I run that same command again and I ask for the chain that I know is associated with the kube-dns Service, I can hit enter there.
All right, and then you'll see these two rules inside of the IP table. So it's gone in and it's written... you know, I guess I don't know what the SEP stands for (someone in chat might know), but it's written these two different rules that basically represent the backend pods. And you can actually see how one of these says: hey, statistic probability 1/2. Those of you who know iptables with kube-proxy: generally you're always gonna be using a system where it's like a round-robin type thing.
So your ability to do more advanced load balancing is somewhat limited, usually. Nonetheless, let's take a look at this chain. So I should see two different IPs behind here, right? So if I go back and I clear this one out, we've got... all right, here we go. So we've got one IP of a CoreDNS pod, 10.40.0-something, and I am comfortable that that's the IP, because I set up the pod network. And then in the other one, of course, we should have our other option, which is inside of here. Cool, cool, right, and there we go: so .65 and .66 are the IPs. So that's what iptables is using for kind of the Service endpoint, all right.
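Since the stream is poking at these rules by hand, here's a minimal, self-contained sketch of the same idea. The chain names and target below are made up for illustration (not from the stream's cluster); the line just has the shape kube-proxy's iptables mode writes for a two-backend Service, and the snippet pulls out the statistic-probability field the way you might when eyeballing a rule dump on a worker node.

```shell
# Sample line in the shape kube-proxy writes for a Service with two
# backends (chain names/targets are illustrative, not from a real node):
rule='-A KUBE-SVC-TCOU7JCQXEZGVUNU -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-EXAMPLE1'

# Extract the probability field, as you might after running something like:
#   sudo iptables-save -t nat | grep KUBE-SVC
prob=$(printf '%s\n' "$rule" | sed -n 's/.*--probability \([0-9.]*\).*/\1/p')
echo "first endpoint chosen with probability $prob"
```

With two backends the first KUBE-SEP jump gets probability 0.5 and the fall-through rule catches the rest, which is the random round-robin behavior described above.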
So what does this mean in regards to Cilium? So we're gonna deploy Cilium into this cluster right here. But why did I just show you the iptables perspective first? Well, an individual at Cilium gave a really great presentation at KubeCon, and I'm gonna steal one of his slides. Let me give him a quick shout-out, because I thought his presentation was excellent. Martynas gave a talk, and it's in our references; if you get a chance, check it out. It's about pulling kube-proxy out of Kubernetes, so please check out his talk. But one of the most compelling arguments, I think, that he made was: he kind of showed the packet flow of something that's moving through all of these different... the Service flow, more or less, the kinds of things that I effectively just showed you.
But all of that aside, I think this tells a pretty good story, because what we're gonna be doing with Cilium, deploying it into a kube-proxy-off mode, or non-kube-proxy mode, is: we're gonna take this flow that you see here and, using another one of his slides, we're gonna change it and simplify it.
Using this thing called BPF that Cilium leverages. So, long story short, it's gonna go through and be able to kind of look up in a map, more or less, based on analyzing the packet, or knowing what it needs, where it needs to send that thing to. So this is a pretty cool, clean thing, and let's see if we can actually take these slides and show them in reality to some degree. So let's go ahead and get into it with deploying. All right.
Just checking chat to see if there are any questions. All right: "kudos for using bare metal". Oh, George, I'm sorry to tell you, but... well, the KVM server's kind of bare... well, no, it's not, it's all VMs. I'm sorry, man, I'm sorry, always disappointing out here; it's VMs on top of those hosts. Anywho, all right. Let's get into some of the Cilium docs and look at what the deployment looks like here. So I have it bookmarked, I think... I do: kube-proxy, okay, there it is, all right.
So I took a little bit of a look at the docs last night to just save us some time, and also wanted to make sure the VMs would be compatible. Probably the first gotcha that I found with running in this mode (it's probably called out somewhere here, maybe, maybe not, but just to make anyone aware who's watching this video in the future)...
One of the things that I ran into was that, let's see here, so I have it in here, I think... it seems like, in order to use the mode without kube-proxy, you do need a slightly newer kernel than what I think Cilium otherwise requires with this kube-proxy thing turned on. So, Cilium folks, feel free to tell people if I'm lying, but I do think, to run this, at least today, you maybe need a kernel higher than 4.17. So I'm glad I checked that last night, or else all of our VMs would be no good without fixing the kernel. All right.
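A quick way to sanity-check that ahead of time is a version compare. The helper below is a sketch: the 4.19 threshold is the number Joe confirms a bit later in chat for host-reachable services, but check the Cilium docs for the exact minimum that matches your Cilium version.

```shell
# kernel_ok VERSION MINIMUM -> succeeds if VERSION >= MINIMUM (version sort)
kernel_ok() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# On a node you'd feed in the running kernel, e.g.:
#   kernel_ok "$(uname -r | cut -d- -f1)" 4.19 && echo "new enough"
kernel_ok 5.4.0 4.19 && echo "5.4.0 is new enough for kube-proxy-free mode"
```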
So we've got that in place. Let's go ahead and see if we can get this deployed, all right. So we'll go back to the cluster, all right, cool, cool, all right. So let's get this thing running, all right.
Let's go ahead and make sure I'm in the right cluster to start off here, and give me a second, everyone, while I clean up my interface just a smidge. Okay, there we go, all right, we're back out. So: episode one-zero-three, good, good. Okay, so Cilium wants us to run that download, so let's go ahead and start off with that, and I'll check the chat too. "Will Cilium be installed as a plugin for kube-proxy, or instead of iptables for packet filtering?" Okay, I think you're asking the Cilium folks that; I'm guessing it will remain a standalone replacement, but I'm not super close to sig-network, so I don't know. "Base support is for 4.9-plus, and then host-based LB is for 4.19." Oh, okay, thanks for validating that, Joe.
So, to Joe's point, you know, just be aware your kernel will have to be a little bit newer. So what I did last night: I think I was on Ubuntu 18-something, and I just bumped Ubuntu to 19, which wasn't a big deal. But luckily you don't have to watch me download VMs right now, so that's a good sanity-saver for all of us. All right, so we've got this downloaded; that all makes sense. Let's go in.
Let's see, they wanted me... okay, I'm in their directory, and they want me to run this Helm command to produce a cilium.yaml. So let's go ahead and produce that real quick, and we'll take a look at it, because, you know, we're good Kubernetes users; we don't just deploy things off the internet, right?
So let's see, I've got cilium here. Let's open it up: cilium, okay, cool. So make note of these things too, because, I'm sure, if there's a Cilium reference out there that just uses a quick-install method, or a default method, it might not have you do these Helm steps. So let's keep in mind that we changed something called nodePort; the host services were set as well. So let's check out what this new YAML looks like real quick, all right.
So what do we got going on here, everyone? Okay, so identity allocation mode is CRD, so I'm guessing this impacts the key-value mode to some degree; Cilium people, is that correct? Let me know if I'm wrong there. We're gonna be using IPv4, that makes sense. Monitor aggregation mode: when we get to our demo on kind of the monitoring or observability piece, I know that this is important to some degree.
Oh cool, okay, here's something to take a quick look at. So it looks like Cilium will run in three different modes. Two of these modes are encapsulated modes, so the two at the bottom, VXLAN and Geneve: these are gonna be kind of like tunneling-based protocols where it will encapsulate the packet. So that's kind of cool. Especially, like, the nice thing about VXLAN is it kind of gives you this extended L2-ish network thingy.
So that's kind of neat. But what's awesome is it looks like they have a disabled mode too, which I'm guessing would just do direct or native routing, which probably means that they're not encapsulating things; they're probably just sending them across the hosts without encapsulation. So that's kind of cool. One interesting thing for the Cilium people on the call (I wonder if this is a common use case), but one mode that I've seen that's kind of compelling is to combine an amount of encapsulation and non-encapsulation.
Meaning, like, you know, a lot of times you only need encapsulation across subnets. So for intra-subnet communication you can use kind of the raw methodology, or the direct methodology, so that it doesn't take the encapsulation overhead on, but it still can encapsulate when, you know, traversing a subnet where there might be a router you don't have control over. I wonder if that's a use case that ever comes up; I found it to be kind of a cool model. Okay, so our cluster name is "default"; that all makes good sense. And, well, BPF mounts...
Did it? nodePort, okay. So it's interesting that it's called nodePort, but I'm guessing that this is what's making sure the cluster is gonna run in the non-kube-proxy mode. It seems like that's probably the setting. And then I know there was also the IP address somewhere; let's see if we can find that. What was it called again? It was a "kubernetes" something in here; I think it was our IP address. Oh, you know what it was...
It was the API server and port, and I don't have the cluster fully deployed yet, so let's be sure to start there; that probably would help us keep from getting in a mess, all right. So here's the deal real quick, everyone, just to bring us back: I have got three nodes for this cluster. Make sure I've got the cluster right: so this cluster is 01, cool. So SSH into... SSH here, 192-something-zero; it should be the master, that's perfect, all right. And I don't think...
...I have Kubernetes deployed. Great, okay, cool. So let's take a step back; we'll come back to that in a moment. Let's make sure Kubernetes actually exists, because that would be a critical piece of the puzzle here. So they had the flag, which will say... perfect, thank you, Cilium, saving me time. One thing that I have got to give Cilium props for is that their documentation is pretty good; I've been pretty impressed rolling through it.
I won't say it's flawless; I don't think anyone's is flawless, but it definitely has been pretty cool to check out and understand. Okay, so the pod network here probably needs to be the same as my pod network. So before we run this command, it would probably be wise if we check in /etc/kubernetes/manifests to look for the kube-controller-manager, which won't exist, because I haven't run this yet, right? Duh.
Sorry, getting my steps mixed up. So this pod CIDR just needs to be consistent with what I deploy, or maybe Cilium will auto-detect my pod CIDR; I know with Calico I usually need to put it in, so that would be kind of a cool thing if it did that. All right, so 10.217: I don't think that conflicts with anything, yeah, yeah, we'll see what happens. Okay, shorthand -s flag... oops, I messed up the flag there. What was the flag? It was skip-phases, one more, my bad. All right, lovely.
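For reference, the shape of the kubeadm invocation being typed here is roughly the one for a kube-proxy-free bring-up: skip the kube-proxy addon phase and pass the pod CIDR. This is a sketch; it assembles the command as a string rather than running it (you'd run it on the control-plane node), and the CIDR is the 10.217 value mentioned on stream.

```shell
# Pod CIDR must line up with what the CNI will use (10.217.x per the stream).
POD_CIDR="10.217.0.0/16"

# Skip the addon/kube-proxy phase so kube-proxy is never installed,
# leaving Service handling to Cilium's BPF datapath.
cmd="kubeadm init --pod-network-cidr=${POD_CIDR} --skip-phases=addon/kube-proxy"

echo "$cmd"   # run this on the control-plane node
```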
So we have a Kubernetes cluster creating, thanks to kubeadm; it will be pretty quick and easy. I wanted to have Cluster API set up to show this stuff off, but I didn't have enough time, unfortunately. kubeadm still works really, really well, though, so we'll just roll with that. All right, so my two worker nodes are in the bottom. We will, oops, we will join these two worker nodes, and we will see how Cilium deploys, and we'll check chat.
So Dan verified that CRD is the default backend; that is good. "Cilium in this mode, for kube-proxy replacement, is a drop-in replacement": yep, that makes sense, Joe. And Dan: yes, "disabled" sends packets to the Linux stack. Okay, cool, cool, makes sense to me. "Are labels transported in IP?" I think you got that answered lower down, Alex, cool. All right, great, thanks for answering all these questions, folks, it's awesome. All right, cool! So we've got a cluster.
Let's go ahead and grab the configuration for the cluster, so I have it locally and can run kubectl commands, and then let's go ahead and join our other nodes using this. So we're just gonna do a one-node-master, two-node-worker cluster here and see how that goes, all right. Cool, so we'll join those up and get Cilium deployed. So while we're waiting for that, actually, let me see if I can pull up the Cilium blog post for 1.6; I seem to remember a graphic around this backend. Wasn't it in here?
Maybe it's in the docs somewhere. No, this is the Hubble announcement; where's the 1.6 announcement? I'm in the blog, okay, I'm looking at all the blog posts, that makes sense... I was looking for... cool, okay. So, looking at this, cool: as either Joe or Dan verified for us, you can still use a dedicated key-value store, like etcd or Consul.
So I have this issue with etcd for CNI plugins, usually. And it's not because there's anything wrong with using etcd for CNI plugins, but I ran into this thing with providers who allow you to use etcd, where a lot of companies were using their Kubernetes etcd cluster as their CNI's key-value store. Which, you know, to a degree it's possible to do that, and it might not be an issue for you, but I've happened to see it be a massive issue.
Let's see if there's anything about scale. I remember reading something about like 250 nodes or something like that; we'll see if we can find that later. But basically what it means is: when it's relying on the API server, sometimes it's not as scalable as relying on a dedicated key-value store. So it looks like Cilium maybe even has kind of an intermediary mode that will use etcd for some things and then the CRD store for others. I'm curious (Cilium people can answer this), if over time...
...you see yourselves finding ways to make the CRD-based store more scalable, to make it kind of like the simple, preferred option for all use cases, or if there are just always going to be use cases where a dedicated key-value store makes sense. All right, we've got a cluster here, everyone: master, worker, worker. Let's deploy Cilium. So before I completely blow up my environment, I think what would probably be wise is: let's go ahead and open up one more of these windows; we'll do an SSH inside of here.
Okay, so we'll keep our masters and workers down here in the bottom, just so y'all can see them as we kind of look through stuff. And let's go back to where we generated that cilium.yaml, and let's get rid of that, because I'm guessing Helm did not have access to my cluster details, which worries me that I messed up my deployment YAML there a second ago. Another thing that I should probably do:
if we just do a quick kubectl get nodes, let's make sure kubectl's working. Okay, so we've got three nodes there, right, looking good: we've got a master and we've got two workers. That is exactly what we want. So now, if we go in... let's get that config onto my host, so I'll SCP that over. Of course I don't have a really simple, quick one available, but that's okay, we'll work with what we got. So this is two-oh-two-zero.
This will be /home/ubuntu/.kube... What I'm doing here, everyone, is just copying over my kubeconfig file. So /home/ubuntu/.kube, that all looks good, cool, and then we will copy that into .kube; let's call it "cilium", all right. This will be the kubeconfig, actually. All right, that seems okay, good. Okay, so we'll export, and we'll go to cilium. I know there are contexts for Kubernetes, by the way; I just could never figure out how contexts work.
That's my confession for the day: I can't make Kubernetes contexts work for the life of me. I have no idea why; I've never been able to do it, so I always use exports instead. So we'll get nodes; let's make sure we've got the right cluster. Okay, so I'm on my host here with the pink bar, and I am looking at the cluster in the same way. So this means, if I'm correct, I should be able to go and run Helm, which was effectively this command. Cool.
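The export dance described here can be sketched like this; the path is hypothetical, and it's just the "environment variable instead of contexts" pattern, not the exact file from the stream.

```shell
# Point kubectl (and helm) at the copied admin kubeconfig via an
# environment variable rather than a kubeconfig context.
export KUBECONFIG="$HOME/.kube/cilium-kubeconfig"   # hypothetical path

echo "kubectl will read: $KUBECONFIG"
# kubectl get nodes    # should now show the master and two workers
```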
So
now,
I'm
thinking
the
API
server,
IP
and
port
will
show
up
so
we'll
go
in
there
to
cilium
looks
good
all
right
cool,
so
we
went
through
most
this
already.
I
won't
bore
you
with
it.
Let's
see
if
we
can
find
that
IP
address,
though
cube
IP
q
by
P
I'll
just
scroll
through
okay,
cube
IP,
did
it
propagate
I
actually
know
what
my
IP
address
is:
why
don't
I
just
search
for
one
nine,
two
I
didn't
see
the
cube
IP
I
would
have
expected
that
it
would
have
gotten
that.
A
Okay, looks good to me; I'm just surprised that nothing got written for the IP and port. "Next, generate the YAML files, replace the API server IP and port with the correct values." Okay, so maybe I don't actually know Helm that well either. I'm gonna give you another confession, don't hate me, but maybe I need to actually put in my static IP and port here. That might be what my problem is. Yeah, let's make sure we do that, because if Helm needs this, I have a hard time believing that it's not important.
A
So let's do 192.168.202.0, and then let's do the port here, which I think the default port is 6443. So that looks good, yeah, seems okay to me. Let's run helm again. Alright, now do we actually have an IP address? 192? That is OK. Now here's an interesting thought, everybody, because this is why I was thinking about what they might be using this for. Can anyone in chat guess?
A
Why do you think we need to put the kubernetes service host and service port in? Shouldn't we just be able to use the service IP that kind of comes implicitly with kubernetes? You know, when you do kubectl get service in the default namespace, it has the IP for the API server. Why would we need to tell it about the host and port for our API server?
A
Any thoughts? There's a delay in chat, so I won't wait too long, but I'm thinking the reason we have to specify this is because we don't have kube-proxy. So at a theoretical level, I don't think the iptables rules got written for the kube API server service, so I'm guessing cilium in this mode probably needs to know where your API server is, because you can't rely on that same construct anymore. Again, that's my guess; feel free to correct me.
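A minimal sketch of the Helm values being set here, assuming the option names of the 1.6-era chart (check the chart for your version; the IP and port are the demo's static values):

```yaml
global:
  nodePort:
    enabled: true                 # kube-proxy-free service handling
  k8sServiceHost: 192.168.202.0   # static API server IP from the demo
  k8sServicePort: 6443            # kubeadm's default secure port
```

The reasoning above is why these exist: without kube-proxy there are no rules implementing the kubernetes.default service VIP, so the agent has to be told where the API server really lives.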
A
Did it do its init-ing? Well, if we got this on the first try, I will be blown away. All right, let's give it a sec to init here, and I'll look at chat. Cool, wow, there's so much chat activity. You all are awesome, thank you. Oh Josh, you have problems with contexts too. It must be a Josh thing; that's gotta be what it is. Yeah Maddie, I've checked it out; I still can't figure it out.
A
I don't know why; I always malform and mess up my contexts. I'll try to pay Duffy off and have him do a TGIK on kube contexts; at least I need a personal lesson on it, that's for sure. Okay... oh Mike, you're right, you got it, killer. Oh boy, everyone, I don't know if we should believe our eyes, but I'm pretty sure that cilium is starting.
A
Okay, let's see what we've got here. So we've got one more left to go. Oh hey, look at that, claps in the chat, they're awesome! We got it. Alright, so cilium is theoretically running now. What's cool about this is we should be able to prove, based on our old cluster, that the kube-proxy thing isn't doing anything, ideally, right? So let's see if we can prove that. Actually, this would not be a good one to prove it in. Let's go to the worker node.
A
Let's just look at a couple of things. Okay, so first things first: iptables. Just look at the list here. There might be some default stuff it still has in the rules here, yeah, but I don't see anything that's actually substantial. It's got just a couple of things in here, and that all makes sense to me. So as far as I can tell, it's not using iptables. To give you another example of that, let's look at our other cluster, the Calico cluster.
A
Here, if I do a quick iptables -L, you can see a lot more going on with iptables right now. Calico does some fancy stuff to actually make iptables more performant in the way that they use it; normally I think iptables has like an O(n) type complexity, but I'm pretty sure the way Calico complements it they get it closer to something more reasonable. Nonetheless, you can see all the rules that a plug-in would normally put in here, and then, going back to cilium, wherever it was...
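One rough way to eyeball the difference being described here is to count kube-proxy-style chains in the ruleset. On a real node you'd pipe `iptables-save` into the same filter; the sample below is a hypothetical three-rule excerpt so the command itself is runnable anywhere:

```shell
# Hypothetical excerpt of iptables-save output (not from a real node):
sample='-A KUBE-SERVICES -d 10.96.0.10/32 -p udp --dport 53 -j KUBE-SVC-DNS
-A KUBE-SVC-DNS -j KUBE-SEP-ABCDEF
-A CILIUM_FORWARD -j ACCEPT'

# Count the kube-proxy style rules; a Cilium kube-proxy-free node should
# show roughly zero of these, while a Calico + kube-proxy node shows many.
printf '%s\n' "$sample" | grep -c '^-A KUBE-'   # prints 2 for this sample
```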
A
We've got not too much. So that's really, really cool. Awesome, all right, sweet, so we've got cilium installed. What's next? Let's see what else we have going on here. It doesn't seem like there are any iptables rules, so let's look at the route table, because that'll be kind of interesting.
A
If we go into the routes: cilium_vxlan. Okay, so I'm going to be kind of encapsulating my traffic around my different nodes, so that's pretty cool, and we're running with the CRD backend, it seems. So let's go ahead and run a little tool to take a look at all the stuff that this created, called Octant. Shameless plug: it is a VMware tool, but it works really, really well. So let's go ahead and get Octant up and running. Octant's cool, pulling it up here.
A
Let's see what cilium made for us. The first thing I usually do when I go to these CRD-backend things is look in the custom resources. It looks like we've got... cool, sweet, so we've got a custom resource for each one of the nodes, which is great. If I go into one of the nodes, perfect, okay, cool: I've got an idea of what this node's IP address is. I love that. So just using kubectl and querying for CRDs I can find this, and then I've also got the IP for cilium.
A
What cilium's internal IP is, so I'm guessing that's maybe the interface we were looking at, and then the pod CIDR. So this is perfect: I can look up what cilium thinks each of my nodes is, figure out what the IP is, what the range is, all that good stuff. So this is a great CRD for me to reference, and I don't see any other CRDs out of the gate. So that sounds pretty good. It looks like we had... let me go to the right namespace.
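The CRD-backed state being browsed in Octant can also be pulled with plain kubectl. The resource names below are the ones visible in the episode; the block only prints the commands, since it can't assume a live cluster:

```shell
# Print the kubectl queries for Cilium's CRD-backed state
# (run them for real against a cluster using Cilium's CRD backend):
for r in ciliumnodes ciliumendpoints; do
  printf 'kubectl get %s --all-namespaces\n' "$r"
done
```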
A
Actually, that would be extremely important. Cool, we're in kube-system; okay, there's a CiliumEndpoint. Oh, awesome, okay, great, another great use of a CRD, right? So I've got an endpoint CRD for each one. In this case I've only got CoreDNS installed, but it represents the IP of these, so it's really easy to go in through cilium and be like, yeah, what does cilium think the IP is for this particular thing? So then click on CoreDNS, get some details about it: BPF, okay, policy, okay, ooh, sweet!
A
This is cool. I don't know yet, we'll figure it out in a second, but maybe this tells you whether you have ingress or egress policy that's impacting this endpoint. And then I don't know what those curly braces are, but maybe they'll show us the policy affecting it. I don't know, but nonetheless it's really cool to be able to see that inside of the CRD. Okay, sweet, that all looks really cool and really good.
A
Where did I put these things? Manifests, okay, great. So inside of manifests we have got the workloads, cool, all right, great. So let's deploy some stuff. I'm gonna go ahead and apply two pods; one of those is called team-a.
A
I'll show you what these have in a second; just let me get them deployed, so you'll have to wait for me. team-a.yaml, so team-a and team-b: they're basically two workloads. Just to show you team-a real quick: it's nothing special.
A
Now, what the service is pointed to, right, that'll be a really cool test for us. And then I've also got a policy, but before we fast-forward to the policy, let's see what impact the service had in here. So we'll go in and first off just do a kubectl get pods for the namespace org-1, which was the namespace that was in, and then kubectl get pods for org-2. Okay, great, so these are pretty much identical.
A
So if I curl from team-a to the endpoint in team-b... I should be able to call the service, but I can't remember the exact notation right now because it's escaping me. Someone can put it in chat if they remember, but for now I'll just do a kubectl get pods. Actually, let's get the service for the namespace org-2. So there's actually an inherent DNS record that would be available in CoreDNS; I just can't remember the syntax offhand. It's like pod-name dot svc dot namespace, or something like that.
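The in-cluster DNS form being reached for here is `<service>.<namespace>.svc.<cluster-domain>`, and the short form `<service>.<namespace>` also resolves from inside a pod. The names below are the demo's (spellings assumed):

```shell
svc=team-b
ns=org-2
# Fully-qualified and short forms of the same record:
echo "${svc}.${ns}.svc.cluster.local"
echo "${svc}.${ns}"
```

From the team-a pod that means something like `curl team-b.org-2` once CoreDNS is answering.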
A
But this is the cluster IP, which should be equivalent. I'll exec in one more time and let's just make sure traffic routes. Great, that's exactly what we need, right? So awesome: service dot namespace, perfect, thanks for that. So we're sending traffic back and forth, so apparently cilium has set the IPs up. Let's go back to Octant and see if there's anything different going on here. So inside of cilium endpoints... oh yeah, I've got to stop forgetting to get into the different namespaces. Okay, great, so now I've got a cilium endpoint for team-a.
A
That seems good. Oh, this is cool too: ingress enforcement and egress enforcement are false. That's exactly what I want to know, right? I can't tell you how many times I've been with customers and they're like, I can't get traffic into my pod, and we spend hours trying to figure out what's wrong with their code, what's wrong with their pod, what's wrong with whatever, and sure enough, the entire time they had a network policy that they had forgotten about. It drives me crazy.
A
So this little sanity check here is fantastic: is there egress or ingress enforcement actually on that endpoint? Now, how about the service itself? This is where it gets interesting, right? If I want to get on the host and look at the service, I would think, without something like Hubble installed or without going into CRDs, I'm guessing I could look it up in BPF. So that probably means... let me SSH back into that.
A
So that probably means, if I go into a worker real quick... I know cilium has their own command-line utility, although I'm not entirely certain where to get it. So what I could do is, if I just pull up my docker images, we're gonna hack our way into this real quick, in a very non-elegant way, and pull the cilium binary. Oh, you can kubectl exec into the cilium pod and get the cilium CLI. Okay, cool, because I was about to go in and copy the binary out of the container.
A
So I'm glad you gave me that call, and let's also look at Joe's way, which is probably the best: cilium service list. Okay, so Joe, that's the command, and then Dan, to your point, I don't have to copy that binary; why don't I just exec into the pod, right? So let's do exactly that, that's perfect. If I do a kubectl get pods for the namespace kube-system, we'll put it in wide output so we know what nodes it's running on. Let's look at this real quick. Alright, so cilium on worker one.
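The exec approach from chat, sketched as commands; the pod name is from this demo and will differ on any other cluster, so the block just prints the command shapes:

```shell
pod=cilium-txwdb   # assumed pod name from the demo; find yours with:
                   #   kubectl -n kube-system get pods -o wide
# One-shot query, or drop into a shell inside the agent pod:
printf 'kubectl -n kube-system exec -ti %s -- cilium service list\n' "$pod"
printf 'kubectl -n kube-system exec -ti %s -- /bin/bash\n' "$pod"
```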
A
That should be this one right here, and then I've got cilium-txwdb, perfect. Okay, let's copy that out, and then let's go ahead and exec into the namespace kube-system. Do you all have /bin/bash inside of your image? kubectl exec into kube-system, cilium, /bin/bash: does that look right? Yeah, there we go, cool. So to Dan's point, whatever I'm going to be looking at now should theoretically be whatever it looks up through BPF on this host, right? Dan and others can give some insight, but there's something about...
A
Usually you mount the BPF file system. I don't think I have it mounted on my host, but I think they're doing something to maybe prevent me from needing to do that, although I think you still need to do that in production or something. Again, that's from a rough reading of the docs that I barely remember. But let's look at the cilium command real quick. So, cilium, there.
A
There it is. Okay, let's see what we've got. So, Joe, your command: cilium service list. Awesome, okay, that's really, really cool. I love that; I love being able to get on the host and see some kind of more structured output like this. So check this out, everyone, this is super, super sick. Okay, what have we got going on here?
A
10.96.0.10, so let's figure out what that is. That's probably the service for... okay, here, if we do a kubectl get services for the namespace kube-system, right, okay, that's kube-dns, perfect. That's kube-dns on top of our two pods. So what's so great about this: on the host, remember, it's going to resolve what endpoint IP it needs to go to, which is the pod's IP, and as far as the actual container networking is concerned, that's what it's mostly worried about, right?
A
It's like: tell me what pod you want to go to, and I'll shoot this thing through the VXLAN tunnel and make it go there, right? But what we need to do with services is provide a way to go in and abstract that. On each host this happens in a localized manner, as we saw traditionally with kube-proxy and now with BPF. So that's what we're looking at here, which is great, and yeah, Maddy, you're totally spot-on: this is CoreDNS I can see here, all right.
A
Here's the service IP that's fronting it, and here are the IP addresses that it's using. So that is really, really cool. All right, what other cool stuff does the cilium CLI give us insight into? BPF, new policy stuff, so: cilium policy list? cilium policy get. Okay. So thinking about the word policy, I'm guessing that because I don't have any ingress or egress rules, this is going to be an empty array right now, which is totally fine. So let's go ahead and apply some policy real quick.
A
So if I go into episodes, 103, what else have we got here? 103, manifests, workloads, cool. So let's go ahead and apply... all right, which one's a good one? deny-all-ingress is a good one. All right, so deny-all-ingress is nothing too crazy, just to show you all of it real quick. All it's basically going to do is say: hey, traffic that comes into the first pod, which is team-a, is going to be blocked. So I'm blocking ingress traffic on that pod.
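The manifest itself isn't shown on stream; a minimal sketch of a policy with that effect, using the standard NetworkPolicy API (resource and namespace names are assumptions from the demo):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
  namespace: org-1
spec:
  podSelector: {}      # every pod in org-1, including team-a
  policyTypes:
    - Ingress          # Ingress listed with no rules means all ingress is denied
```

Because Cilium enforces the standard API, applying this is enough to flip the ingress-enforcement flag seen later on the CiliumEndpoint.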
A
A really quick and easy way we can look at it: we can do an exec again, and this time go into team-b, which is in the org-2 namespace. Who had my syntax there? Yeah, service dot namespace; copy that. So if I go to... no, team-a dot service namespace... oops, curl would be helpful. Great.
A
It kind of hangs there, so I'm probably blocking, if all's going well. So let's go to cilium and see if we can find that policy now. So again, I'm on my host; am I in the container anymore? I don't think I am. Did I leave the container? Actually, you know, here's what we should do: we should find the container that has it on there and exec into it again. So let's do a get pod for the namespace org-1, -o wide, and this is team-a, which is running there.
A
I wonder where I put my... did I mess something up? Okay, so kube-system. Let's go to wide output and make sure we get onto the cilium pod for this node here; that one's for the master, this is txwdb. So I'm going to exec into that again, just like you all saw before. Alright, let's go ahead and exec into here, and then we will do the namespace kube-system. Cool.
A
Oh wait, was I still in there? I'm sorry, I was still in there in the upper buffer, geez. It's the failure of having too many windows open. Okay, so we'll do cilium policy get. There it is, pretty cool. All right, what have we got here, everyone? We've got a namespace selector, org-1, that's cool. We've got the labels in here; I'm trying to figure out what it can discern. Okay, so there's some cool stuff I can figure out, right?
A
I can know that it's coming from this policy, I can see the namespace it's coming from, that's cool. I wonder if, in this view, there's a way for me to figure out the pods it's impacting. I'm guessing not at this level of the map lookup; it's probably too intense to correlate those to the pods.
A
...that it impacts, at least in this low-level view, but I'm just thinking out loud in case I'm missing something. But if we do go back to Octant and we go back to team-a, check it out: ingress enforcement is now true. That's really, really cool. So, ingress enforcement true, team-a, what have we got here? Okay, here we go: yes, it's impacting it, and here are the labels.
A
Okay, the identity labels: is that the same thing? So, identity labels: org-1, team-a, runs nginx, org-1 namespace. Okay, so maybe the policy doesn't show up in here, unless I'm missing something, which is totally fine. It's just letting me know that on the CiliumEndpoint I do have policy enforced, and I bet there's some way to correlate it to what it's being impacted by; I just don't necessarily know offhand. Okay, that is really, really slick.
A
Okay, so I see the correlation now: the endpoint that's being impacted, the labels. Or actually, I guess these are all the endpoints, I should say, but this is kind of like what we're seeing with Octant. So I'm guessing endpoint 127 would be my team-a pod, and I can see the enforcement. Yeah, this is it.
A
So the kind of pro tip we learned here is: we started this session off by looking through iptables rules and figuring out what's going on, right? But in reality, it's such a nice approach to be able to go in with a tool like this and figure out what's happening underneath on this specific worker node.
A
From that standpoint, I totally get that there are implications, and you should read up on the scalability and whether it should matter to you and so on and so forth, but there's a whole other side, which is just the experience of using it: interacting, introspecting, understanding what's going on. Now, I'll be the first to admit, if you asked me to help you with anything iptables-related, you're asking the wrong person and you'll be in a lot of pain, because I'll have no idea how to help you.
A
But that being said, it just shows how this model can really help you clearly visualize and see what's going on on a node. For someone who doesn't understand iptables, like me, that is a really compelling thing. Alright, killer. So I think we've talked about services at this point; maybe we move on now.
A
Let's see if we can deploy Hubble real quick and get some views and introspection into this stuff, and then we'll wrap up. We might not have time to do CNI chaining, but maybe we'll just talk about it at a conceptual level, because that's one that I'm super interested in as well. So let's see if we can get Hubble deployed now. All right, those in chat, who's pretty impressed with what we've seen so far?
A
It's really, really cool stuff; I'm really enjoying learning about this. Cool, alright, so let's see if we can get some Hubble set up. I have the Hubble docs, okay, cool. Alright, so Hubble, what have we got going on with Hubble? Actually, when I was looking through the blog posts, there was one on Hubble. Yeah, cool. So I learned about this at KubeCon; I don't know a ton about it, but it's interesting.
A
It looks like a way that you can run an agent called Hubble, and then there are some higher-level constructs that you can pull out of that agent. So we're gonna look at the Hubble UI today. It also looks like it's got a way for you to start scraping metrics out of Hubble, which is pretty cool for your Grafana and Prometheus setups. So yeah, let's see if we can just get it deployed and take a look.
A
Oh, here's a screenshot of it, so we should be able to get something that kind of looks like this at a high level. Cool, all right. Yes, shiny; yeah, cilium looks cool. Larry says he's overwhelmed. Larry, I'm with you, man, I'm totally with you. Sorry you all couldn't see my screen; sorry for the delay there. I was looking at the blog post but I still had the picture open. My bad, OK.
A
Good, OK, this looks pretty straightforward. OK, the one thing that's going to get us a little bit: we deployed cilium 1.6 earlier, which is not the end of the world. So what we can do here is... I always get really worried when I'm changing CNI plugins. No offence to the cilium folks; I'm not saying you're not capable of doing appropriate clean-up on my hosts, but it freaks me out to no end. So what I'm going to do here real quick to get Hubble installed is a quick kubeadm...
A
...kubeadm reset, and I am going to go ahead and reset the entire cluster. OK, so kubeadm reset will just take a sec; we've already got the containers downloaded now, so it will not take very long. kubeadm reset, all right. And, wow, this will be cool: we're going to get to look at 1.7, I think an unreleased version of cilium, as well, and see if we notice any cool stuff. Alright, so let's do this thing. We're going to go ahead and init one more time; let's init the same way.
A
Hopefully init-ing without iptables won't be an issue; let's give it a shot. Good, we'll kick that off and then we'll join these two nodes in just a moment. It looks like we've got another helm chart, so here's what we'll do, everyone: we will have this helm chart produce the YAML, and we'll diff it against our existing YAML.
A
Now, one thing you might notice this is missing: it doesn't look like it's got the API server IP and port, which probably means it's assuming that I am running in kube-proxy mode. I don't have kube-proxy, though. So we're probably going to want to add that to whatever this manifest produces.
A
Alright, this all looks great, so I'm going to join these two nodes down here. Great, all right, copy that up, and we will put it in... wait, let's remove that real quick; actually, I want to get rid of the old kubeconfig from our previous cluster. Kubeconfig, OK, and we are going to paste that in place. kubectl get nodes: are we not ready yet? No, we are ready. Interesting. OK, we'll see if that throws us off at all; I wonder why it would think it's ready already.
A
OK, this is why I always get scared redeploying CNIs. OK, we'll see what happens, who knows. So if I exit out of... actually, let's open another buffer. If I exit out of here, OK, let's go ahead and do what they say. So, 103, we're going to install Hubble, so we'll grab the cilium templates, which looks good; download cilium. Oh wait, silly me, sorry, I should probably download it first, right? So let's do that; we'll curl that. Look at chat real quick: the helm options can be combined.
A
"If you want kube-proxy-free mode, which these Hubble instructions will assume..." Okay, cool, thanks for saying that, Joe; you maybe helped me from tripping over myself here. All right, so we're in there. Let's grab this, okay. I'm going to grab that first one here, and then let's also bring open our old helm one from before. Where was the first helm one? I think it was 1.7; it's probably this one. Yeah, that's it, okay, cool. So we'll grab that, we'll copy this, and I'm going to put it inside of helm.sh.
A
...helm.sh, which will be this, and we'll come back in here real quick and combine them in a second. So, get out of here. All right, I'll show you all the commands. So, helm.sh, here's what we've got going on. What I was talking about here is these two things, right? So I probably want these two things to be in the way that I generate the cilium YAML.
A
So let's grab those. Again, to Joe's point, I believe nodePort true is going to make sure we're running in kube-proxy-free mode, and then we're going to make sure we include our API server host and port once again. We'll put these in here: let's go ahead and do 192.168.202.0, and then for the port we will do 6443. And we probably don't need the dollar signs; these aren't variables. That will maybe, possibly, get us back to what we had before.
A
Let's hope so. We'll save, looks good, all right. Let's see what we can do now. I'm just going to, just for the heck of it, copy this and produce a cilium YAML for y'all. So let's produce that, cool, and okay, I think that looks pretty good. So let's do a quick vimdiff against the cilium YAML I have here, which should be 1.7, and then go back to the cilium 1.6 one and see what the big differences are between the two: install/kubernetes/cilium.yaml. OK, what have we got here?
A
Some of these might just be things that have moved around, by the way, but there's something about monitoring, which probably makes sense; I'm guessing those are maybe important to Hubble. It looks like some changes to what we need as far as the service accounts go, maybe, and ingress resource changes as far as what we need. Custom... OK, this is cool. All right, so tell me if I'm wrong here, but if this is a feature that's coming out, I'm really excited about it.
A
One of the things that I love about some CNI implementations is that they will add their own network policy CRD. I know cilium has their own network policy CRD, which is great, and usually what this means is: the kube API's network policy is great, it works for a lot of use cases, but it does have limitations. Maybe you're trying to do fancier layer-7 type enforcement, or maybe you're trying to do these weird expressions that you can't normally do in the standard API.
A
What's awesome, especially for an administration team, is to set cluster-wide policies that you can then poke holes in, based on admission control and stuff, inside of the user namespaces. So if these are cluster network policies, meaning policies that are going to impact basically the entire cluster, I'm very stoked about that. That is really, really awesome. I'm actually hoping someday that maybe kube will have a model for this in a generic sense for everybody. But this is cool stuff.
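A sketch of what such a cluster-scoped policy can look like. The kind name matches the CRD appearing in the 1.7-era manifests, but the selectors and rule below are illustrative assumptions, not the episode's actual YAML:

```yaml
apiVersion: cilium.io/v2
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: allow-cluster-traffic-only   # note: no namespace; it is cluster-scoped
spec:
  endpointSelector: {}     # applies to endpoints across all namespaces
  ingress:
    - fromEntities:
        - cluster          # only traffic originating inside the cluster
```

An admin can apply a baseline like this once, then let namespaced policies poke holes per team.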
A
So cilium can tell me if... no, that's not what that's for. Anyhoo, cluster service is true, we've got some new labels, no big deal; looks like the image pull policy changed. Excuse me. Okay, nothing else too crazy; aside from some of these new CRDs and stuff, it doesn't look too insane. So let's go ahead and apply this and see what we get. All right: kubectl apply cilium.yaml. Here we go, everyone, take two. Let's see if we've got this.
Oh
that's
not
good,
probably,
would
be
helpful
if
I
exported
my
my
manifest
or
my
cute
config
again.
So
let's
go
ahead
and
do
that
from
202
psyllium
that
should
probably
overwrite
it
keep
cuddle,
get
nodes.
Yeah
I
need
to
don't
forget,
don't
know
how
contacts
work
okay,
so
we
need
to
keep
cuddle
get
nodes.
A
Okay, it worries me that they're ready; I have no idea why they're ready, but anywho. Let's actually see this: kubectl get pods for the namespace kube-system. Okay, so CoreDNS is in the state I'd expect without a CNI. So maybe that ready thing is just a weird nuance of having done this before. Let's go ahead and apply. All right.
A
We will set up a watch for kube-system, and fingers crossed, everyone, that 1.7 goes as smooth as 1.6 did. Let's hop into the docs and figure out how to get Hubble while that's being set up. Okay, good, so we know that monitor aggregation medium is set; remember, earlier today we looked at that in the manifest, so that's pretty cool. Install Hubble. Okay, so I'm guessing this installs Hubble; now the question would be, does this install the agents?
A
All right, so it looks like the cilium stuff is still getting running; there was a little bit of a delay last time too, so let's make sure that it gets up. Okay, we've got one healthy... Oh, Dan: you need an extra flag for the UI. Okay, cool, thanks for saving me there; I'll take that flag when you're ready. Cool, so cilium is still starting up. Yeah, Alex, I'm with you on the cluster network policies; for big environments that can be a game changer, it really can. Cool.
A
Alright, so we'll see if we can get Hubble up and then talk about some CNI chaining. Alright, let's see if I can beat Dan to it and deploy Hubble. He's saying we aren't very creative; OK, I like it, I like it. Configure service map UI... is this it? Minikube quick install, deploy Hubble, service account, OK, cool. Or maybe it is the default. OK, let's roll with it; we'll be able to, I mean, heck.
A
OK, failed to download Hubble; am I in the wrong directory? cd hubble, install/kubernetes, cd install... where did I go? So I'm in hubble, cd install, what did I do wrong here? So I've got hubble.yaml, very strange. Okay, let's not screw ourselves up; we're going to be super safe here. I probably did something silly, so I'm going to get out of the cilium repo and we're going to focus on just our own little Hubble repo, so that I don't mess this up. Okay, so let's do a clone and a cd.
A
Cool, all right, that's good, and then we'll do a template again. Oh, I see where I was at, I'm sorry, yeah, that's silly. Anywho, we have a Hubble definition now, hubble.yaml; let's see if there's a UI in here. I don't see anything in the manifest about a UI. Everyone's saying: add to the Hubble template command --set ui.enabled=true. Okay, let's see if that works; Joe, I'll add that to my command. You caught me just before I hit apply, so you may have saved the day.
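So the change from chat amounts to one values override, assuming the Hubble chart's option name as given there (verify against the Hubble chart for your version):

```yaml
ui:
  enabled: true   # renders the hubble-ui workload and Service into the template output
```

Passed on the command line it's `--set ui.enabled=true` appended to the `helm template` invocation.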
A
Let's see. Cool, yeah, so my nano piece of feedback for you: I guess a dedicated doc for installing Hubble is probably a good idea if people are just going to use the agent for Grafana or Prometheus and stuff, but maybe call out the UI in here, I don't know. Anywho, if this makes a lot of sense, cool. So let's go ahead and do helm again; let's modify it, do what Joe recommended, and if it doesn't work we can just blame Joe, no problem. So here we go.
A
All right, what do you think, does that look good? So we'll run that and go into hubble.yaml. We have a UI! Okay, that's awesome; let's see if it works. We will do an apply of our friend hubble.yaml, all right, and then let's do a kubectl... I think I saw that it was in kube-system, so: watch kubectl get pods for the namespace kube-system. All right, Hubble: we've got an agent on every node.
A
Cool, we've got the UI. Maybe the agents are still starting up, but let's see if we can figure out where the UI is. So if we do a kubectl get service for the namespace kube-system, I'm guessing we'll have a hubble-ui in here. Awesome, awesome, awesome. OK, so I could open up a NodePort.
A
That's one option, but let's keep it simple here and just do a port-forward for the hubble-ui in the namespace kube-system. And we will do... what port are we on? 12000? No pods called hubble-ui... wait, port-forward to a service, is that how it goes? No pod/service found. There's some way to tell it that it's a service, I'm trying to remember. kubectl port...
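The syntax being hunted for here: `kubectl port-forward` targets a pod by default, and you prefix `svc/` to target a Service instead. The port mapping below is a guess based on the demo, so the block only prints the command:

```shell
# svc/ prefix makes port-forward resolve a Service rather than a Pod:
printf 'kubectl -n kube-system port-forward svc/hubble-ui 12000:80\n'
```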
A
Cool, OK, what have we got going on here, everyone? All right, this is cool. OK, first impressions, let's just start there before I get overwhelmed. I'm looking in kube-system. Why aren't my other namespaces there? Oh, they've been deleted. Let's do this real quick too: let's go ahead and bring our other workloads back. I'm going to go into episodes again, 103, wait, manifests, workloads, cool, and we're going to see if we can actually check Hubble out live here. So we do a kubectl apply.
A
What was it again? It was team-a, right, team-a. Oh man, y'all are gonna hate me after you see me switch these contexts manually too many times. team-a, team-b, okay, so that's pretty good: we've got team-a and team-b back up. If I go to Hubble, there are my namespaces, okay, cool. So here's the deal: what have we got here inside of kube-system? It looks like we've got kube-dns, which totally makes sense.
A
It
looks
like
it
shows
when
traffic
is
egressed
out
to
the
world.
I
don't
see
any
arrows
coming
in
so
I,
don't
know
if
it
shows
traffic
coming
in
from
the
world,
because
I
would
think
when
I
call
my
API
server.
That
would
maybe
show
up,
maybe
I,
don't
know
actually
I,
don't
know
the
answer
to
that.
So
cube
dns,
ingress,
egress
and
ingress
egress.
A
So
maybe
the
locks
mean
whether
there's
policy
I'm
guessing
okay-
and
this
is
cool,
so
I
can
see
that
if
I
click
on
53
UDP
I
can
see
cube,
DNS
sent
a
request
to
the
world
for
53
UDP.
So
this
is
probably
when
it
needed
to
do
some
type
of
external
look
up
on
a
DNS
record
that
it
did
not
know
about
so
that
all
makes
sense.
Okay,
org
one
org,
two,
no
data,
okay,
so
this
probably
then
means
this
is
probably
what
the
timer
means
is.
A
It's
probably
only
collecting
data
on
active
traffic,
which
makes
sense
how
would
it
know
that
things
route,
otherwise,
unless
there
was
actually
traffic
to
look
at
so,
let's
send
some
traffic
through
if
we
go
back
and
exact
so
I'm
gonna
go
back
into
team.
A
the
team,
a
pod
and
I'm
gonna
once
again
send
some
traffic
to
team
B.
So
this
will
be
curl
team,
B,
dot,
service
dot-
and
you
know
it's
funny,
I
think
last
time,
I
literally
typed
in
name
space
which
wouldn't
have
worked
anyways.
Now that I think about it, instead of "namespace" I will put in the org... I think this is the syntax. Maybe not; I have it wrong again. What did I do wrong? team-b, curl, service, org... okay, forget it, I'm gonna go in and get the endpoint again: get service, namespace org. There it is. Works for me: namespace before service, of course. Let's try that; maybe I can solve this in my brain permanently. So team-b, org-2, service... hey, we got traffic! Okay, so team-a and team-b, two different namespaces, and they're communicating.
Let's go to Hubble and see what we got here. So, update... okay, so it updates on a countdown, or I just hit refresh, and that seems to have worked. Sweet, this is exactly it: I've got my ingress, I've got my egress. We can see here that team-a egressed to team-b. I don't know if there's complexities around this, but what could be really cool is if I knew the namespace, perhaps, that team-b was in. I don't see...

Oh no, the names... okay, so the namespace isn't the label here. It looks like the UI just doesn't wrap the external thing yet, maybe, if I'm reading this right. Okay, so that's cool; the UI makes sense. You know, just for the heck of it, though, let's curl out. So if I curl to google.com, I'm thinking I should see an external call, right?

Let's see here... there we go. So we've got, okay, so this is the flow for everything, right? We can see the source pod; we can see the destination it went to. So this is clearly my google.com request, right, because it needed to send a request to CoreDNS, I guess. What's interesting about the UI, though, is when I was in kube-system the CoreDNS traffic showed up, and in this view it doesn't show me calling CoreDNS, and I wonder if there's a reason for that.

Nonetheless: so I'm going out to CoreDNS, I'm getting the DNS record, and then there's where I call Google. And then prior to that you can see team-a called out too. Weird, that I called team-b originally; the order or something maybe seems weird here, unless I'm looking at it wrong. But team-a called team-b, so you can see where I actually called team-b right there. That looks good.

You can see some of my DNS lookups inside of here as well, which, yeah, again, I used the service IP, so I suppose it did go to CoreDNS if it didn't have it cached to any degree. This is all really cool. So obviously, if I was doing real-world stuff, I'd probably have to filter. I wonder, for the Cilium folks in the chat: what's the performance impact of running this? Is there any at all, or is it...
Is that Hubble daemon just kind of basically sweeping stuff up as it gets it, and the introspection doesn't have a massive overhead? I'd be super curious to know. Another way to put it: is this something I can just keep on whenever, or do you only recommend having this on when I'm trying to troubleshoot things, essentially? Overall, this is super cool to play around with, though. Okay, it looks like it has an HTTP feature set.

So that's pretty cool. HTTP is interesting, though, because I'm thinking: I don't think it would know about HTTP, given the nature of the fact that we're operating on a different layer. So if I come back here and I curl... actually, I know it doesn't, because it's blank; it doesn't have the path or anything, or any of the statuses.

Does this mean I need an Envoy sidecar with all of my workloads, kind of like in the Istio model to some degree, or like a service mesh model where you package a sidecar in? Because I obviously don't have that today, so I'm guessing I wouldn't be able to get HTTP visibility for this. "The following Cilium network policy will redirect traffic of all pods on port 80 to the HT..." oh, maybe they run it. That's interesting: a Cilium network policy for HTTP visibility. That is interesting. Cilium has Envoy embedded in it; you only see packets...

"If you have..." oh, okay, so I see, oh, I see, okay. So it's not about putting sidecars with your workloads; it's about putting this policy in place, which then tells it to go to that proxy, and then Hubble can read what's happening at that level, since that proxy operates on layer seven. If that's a fair summary, let me know. So this, by the way, is an example of one of those Cilium network policies.
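A minimal sketch of what such a policy can look like, reconstructed from the Cilium docs rather than the exact manifest on screen (the name here is hypothetical). The empty `http: [{}]` rule is what triggers the transparent redirect through Cilium's embedded Envoy proxy:

```yaml
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: http-visibility   # hypothetical name
spec:
  endpointSelector: {}    # every pod in the namespace the policy lands in
  ingress:
  - fromEntities:
    - all                 # the "all entities" bit mentioned in the episode
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
      rules:
        http: [{}]        # empty L7 rule: allow everything, but parse it as HTTP
```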
So in Kubernetes you have the NetworkPolicy API, and then in Cilium they have their own extension of that. I think I remember, from back in the day, that you're not supposed to mix the two. You can tell me if that's still correct: you're supposed to use one or the other, ideally. But this time it looks like we'd want to use that network policy to transparently redirect to Envoy. What's confusing me about this manifest is: how does it know to send to Envoy? Like, I get the "from" aspects, the "from port 80"...

But what in this manifest is saying "go to Envoy", or "go to the Cilium-baked Envoy"? Unless there's just some magic here and I'm overlooking something, which could be possible. Yeah, let's try it; we'll see what happens. So if I go in here and... let's get our manifest ready here for a moment. "All entities", cool, okay. So it looks like Sergey rules; HTTP is working. I don't know why the chat says some of your comments were held for review; I have no idea why. Okay, cool, so all right, I'll try it, I'll try it.

Let's see, let's see if this works. So if I do a... okay, let's get this open here. So if I do a vim on http-introspection.yaml, this will be a Cilium policy, which is going through, and my editor is upset just because it's a CRD; it doesn't understand it. Cool, all right, let's see what we get here: kubectl apply http-intro... yeah, cool.

Yeah, that's what I thought too, Dan. Sorry, I'm having to click through to it; apparently Google thinks you're saying terrible things, because I have to keep approving the things you're saying for some reason. But yeah, that makes sense to me: nothing would go to Envoy by default; if you put this in place, which makes a lot of sense, it will then shoot through Envoy, which will then allow that introspection that we're looking for here. "You'll want to apply it to the team-a or team-b namespace"... oh, because it's... okay, good, good call.

Thank you for that. So let's apply this to the namespace org-1, and heck, we'll just do it in org-2 as well; I guess in kube-system it probably didn't make a lot of sense. So let's just make sure: if I do a kubectl get ciliumnetworkpolicy for the namespace org-1... great, okay, we have HTTP visibility. And so let's see what we can do here, everyone.
So if I do an exec again, let's send some traffic to Google, we'll send some traffic to team-b, and let's go back to Hubble, and if I refresh... oh, there it is, we got it. Very cool, very cool, check it out: so for team-b you can see I did a GET for a 200, and then zero milliseconds on that. Okay, cool, this is really sweet. Okay, so same question, which I think you probably answered in chat, but there's so much chat now I'm not sure where I left off: this feature obviously reroutes.

It changes the data path, effectively, right? So is it fair to assume that this is not something you would run all the time? I'm guessing the models vary a lot, right, because if you're running, say, Istio, you've probably already got something like this to some degree, because you're introspecting the sidecars that go with all the containers. But I'd be curious what your stance is there.

"If you click on a flow, it will show you the details." Okay, let's click on one. Sweet, okay, cool: flow details, when it was seen, forwarding status (to-endpoint), source endpoint labels, cool. So I know the source information, IP address, the destination; that's really sweet. Oh, look at that, and the HTTP data, because I introspected that. That is really, really sick; good job. That is cool, dig it. Policies... hey, check it out: the policy is inside of here now.

Here's something we haven't tried. Let's wrap up Hubble with this, unless the Cilium folks think there's anything else I should click on. Let's wrap up Hubble with this: let's put a policy in place, let's block some traffic, and let's see what shows up inside of this interface now. So if we go back to terminal and...

Okay, where are we at, Josh, where are we at? So we'll exit out, we will edit the... oh no, the workloads are running already, perfect. So let's do... okay, so here's what we're gonna do, everyone: let's apply the deny-all-ingress policy. Reminder: this blocks all ingress traffic to org-1. Okay, so we do a kubectl apply for deny-all-ingress.
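The episode doesn't show the manifest itself, but a deny-all-ingress policy in the vanilla Kubernetes NetworkPolicy API is typically just an empty pod selector with Ingress listed and no rules; the namespace name here is assumed from the demo:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
  namespace: org-1        # assumed from the demo
spec:
  podSelector: {}         # selects every pod in the namespace
  policyTypes:
  - Ingress               # Ingress listed with no rules = deny all inbound
```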
Remove... oh wait, Dan just said something: people running HTTP visibility continuously typically do it for selected workloads. Okay, that makes sense. In their docs I also saw, by the way, that if you want to turn this on, you might not want to just turn it on for every namespace in your cluster, but they do have a cool model for that, which looked like this right here. So I'm guessing that's what he's talking about, where you can add this for just certain workloads. "Remove the HTTP visibility policy before denying..."

Yes, so it should show both Cilium and K8s policy. So, I guess, what's the stance on that, then? Is it just generally recommended you don't mix the two, but technically this should show both? It seems like, Dan, if I read your message right, you're implying that I should delete this to see the other policies, maybe; but maybe you can clarify there and help me out. For now we'll just see what happens anyway. So let's go ahead and exec, cool. Let's curl... oh wait.

team-b, for org, cool. Let's make sure this blocks real quick. So if I curl for team-a... team-a, org-1, service... it went through. Did I ever apply that change? Let's see: apply, "unchanged". Interesting; I wonder why it would go through. Okay, if I'm going from... let's look at this one more time. I'm going from team-b to team-a, where the ingress should be denied. Oh, I see... yeah, I see what you're saying, Dan. Okay, sorry, sorry. Okay, so the HTTP thing:

It's rerouting the traffic, right, because it's going through there. So the mixing thing is a totally different concern: if I'm rerouting the data path, how these policies impact things is different. That totally, totally makes sense. So let's go ahead and delete that real quick. So if I do a delete on http-intro, now we should see what we expect. So let's go here.

Curl still worked for me, very strange. Let's see what I got here. So now the policy should be gone, I'm hoping... oh, then it's namespace-scoped. Sorry, everyone, okay, it's namespace-scoped! All right, here we go. We'll get this right someday, don't you all worry; someday we will have this working. So we'll do org-1 and org-2, okay, and then let's exec into org... all right, fingers crossed, everyone. Is it going to work? I think it worked, all right.
Let's see what we get. So we'll go back here, we'll refresh... check it out, we got it! So team-b was dropped: the source pod was team-b, which we were targeting. Now, this is such a great way to troubleshoot stuff, right? We were sourcing from team-b, we called out to team-a, and then we failed. I still don't see the policies for one reason or another; org-1 should have this policy inside of it.

So I wonder if maybe it is just showing the CRDs for some reason, or maybe I did something wrong, but nonetheless that's super cool. So, all right, I think we have looked pretty well now at Hubble. This is a really cool feature set. Actually, another thing I did want to ask while we have Cilium people here: I'm curious what these little locks mean. Do they represent policy, or what are they supposed to represent? Because obviously I've applied a policy, but it doesn't look like it's changed much.

Oh, here's a cool thing to look at too, everyone: with HTTP visibility enabled you can actually see the paths in the UI, and I can see the GET response. So that's pretty cool. You can see how, if you're traversing through that Envoy proxy, you can go a layer deeper, on layer seven. So that's super cool, all right. So I want to wrap up with just one conceptual item around Cilium, because I don't want this to go too long.
I know a lot of you are already staying later than you expected, and that is just a quick note on CNI chaining. You know, honestly, if we deployed CNI chaining, I don't think it'd be that amazing to watch, because there wouldn't really be much to see. But what's cool is, with everything we've talked about so far, we can bring that into the concept of CNI chaining, I think.

So let's see if we can find that real quick. I think I had it bookmarked, but let's just type in "CNI chaining" and see if anything comes up in the docs. CNI chaining... all right, awesome: there are these dedicated guides. Okay, perfect. So let's kind of review what we've talked about so far, right? We know that, from a Cilium perspective, one of the big compelling use...

...cases of it is this use of BPF to do the kind of routing-type concerns we've talked about. So we talked about the idea of services, which, to some degree, can be thought of as load balancing, right; balancing, okay. And then we've also talked about network policy that can get enforced at that level, also moving that out of the kind of iptables mode that it would normally work inside of, right? And then, let's see if there's anything else: advanced features like encryption.

So maybe there's an encryption feature set. I don't really know as much about that one; I haven't really looked into it. But services, network policy... what else do we got here? Let's see: Cilium attaches BPF programs to network devices, cool, cool. So the idea that I'm kind of getting at here with this CNI chaining model, and why I think it can be kind of a compelling thing, is: maybe there is some technical reason why you want to use a different CNI plugin. So, as an example, let's consider Calico.
So Calico has a couple of different routing modes, okay: it has IP-in-IP, it has VXLAN, and it has... I always want to call it "direct", but I think it's called "native", where you basically don't encapsulate, so let's just call it that. So, you know, this is a super random use case, but let's say your company's already super heavily invested in the IP-in-IP mode that Calico uses, which, as I was talking about earlier, you can actually mix: native and IP-in-IP.

So it's only when you're crossing subnets that you're doing that. And also, you know, you might have reasons like maybe you're using the BGP functionality that's built into it to talk to your top-of-rack switches, right, so that you can essentially set up a pod network in your environment. Now, I'm not saying you couldn't do architectures like this with Cilium, perhaps; I honestly don't know the answer to that.

But let's say that you're pretty stuck in that model, but you really want to take advantage of all of... let's do a different color here, so we're clear... you really want to take advantage of all of these things that we've been talking about, at least service load balancing and network policy. So, as I would understand it, the idea would be: you can go in and apply Cilium in a mode where it's not handling the core routing concerns.

You could use it, but also you can have it kind of run alongside some other IPAM and CNI functionality that you're already bought into, for one reason or another.
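The shape this takes on each node is the CNI conflist: the existing plugin stays first in the chain and cilium-cni runs after it. A rough sketch adapted from the pattern in Cilium's chaining guides; the Calico entry is abbreviated here, and a real deployment needs that plugin's full configuration:

```json
{
  "name": "generic-veth",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "calico",
      "ipam": { "type": "calico-ipam" }
    },
    {
      "type": "cilium-cni"
    }
  ]
}
```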
So, overall, that's a pretty cool thing. Let's see if there's anything else in chat that we want to cover: "we support IPsec"; "our commercial support setup requires us to use the NSX-T CNI plugin". Cool, so just some comments on the different setups of the plugins. I don't see anything else that's really big. All right, everybody!
So, in summary: we talked about a lot of really cool stuff, and I would argue it kind of worked, which is super, super awesome. We talked about the kube-proxy replacement and the CRD backend; we talked about deploying Hubble and looked at some of the feature sets; and while we didn't actually implement it, we at least looked at CNI chaining to get an idea for how we can leverage some of that Cilium functionality on top. So I guess that's about it. I just want to...

Thank you all so much for joining us during this session. You know, selfishly, I learned an insane amount about how some of this stuff works, so that is really, really cool. Those of you who are still hanging out and still in chat: if you could also just give a quick thanks to the Cilium folks who joined us; we would just be kind of guessing about stuff if they weren't there to give some more context.

So we really appreciate you all taking time out of your day to come join us and talk about all these awesome feature sets. But yeah, that does it for this TGIK on Cilium. I hope this take two was super helpful to you. Don't forget to check out our HackMD for some references to talks and things that we've discussed, and be sure to check out project Cilium and see if it's something that you can play around with in your environments. Until the next time: thanks again, everybody, see ya!