From YouTube: eCHO episode 2: Introduction to Hubble
Description
Our regular livestream covering all things related to eBPF and Cilium, and the first in our US-friendly timeslot. This week Glib Smaga will be joining Liz Rice to give us an intro to Hubble.
Show notes: https://github.com/isovalent/eCHO/tree/main/episodes/002
Find more info at https://github.com/isovalent/eCHO
A: Today's main topic is going to be an introduction to Hubble, but before we get to that, let me just remind you: we love your questions. If you're joining us on YouTube Live, do say hello in the chat; we'd love to hear what you're thinking, so please do. Let me know if there are any issues with the audio or font sizes or anything like that, and we'll try to fix them as we go along. There's a couple of seconds of lag, so if I don't respond immediately, it's not that I'm ignoring you!
A: All right, let me switch to sharing my screen. Another reminder is that we have this GitHub repo under isovalent/eCHO. If you have ideas for topics you'd like us to cover in these livestreams, just raise an issue; that would be really great. Okay, it's been a fairly quiet week for news around eBPF and Cilium, but there are a few things we wanted to share. One is that Cilium has been certified by Red Hat for OpenShift, so you can find it in the Red Hat Ecosystem Catalog as a certified CNI plugin for OpenShift, which is cool. Further afield in the world of eBPF, the Falco project, which uses eBPF as one of its mechanisms for observing security events, has put together a proposal for graduation status in the CNCF.
A: Okay, so hi to everyone who has joined; I see a few more people. I see Joe and Karthikeyan; sorry if I pronounce your name wrong, I'll do my best. All right, so let's move over to our main topic for today, which is going to be an introduction to Hubble, the observability tool that accompanies Cilium. I am joined by Glib Smaga, who is one of the maintainers of the Hubble project, so welcome, Glib. Great, so first of all, Hubble: where does the name come from?
B: That's a very good question, and one that we get fairly frequently, because jumping from "Cilium" to "Hubble", people don't see the connection. I'm here to say that there is no connection between the two names. I've been working at Isovalent, the company behind Cilium, for about two years now, and Hubble originated around early summer of 2019, I think, several months after I joined the company; it was one of the first projects I started working on. A few weeks before that, I did a tech talk: an hour-long talk on orbital mechanics with a live demo using Kerbal Space Program, which is a fantastic game, by the way; if you haven't played it, it's amazing. People really liked the talk, and in general a lot of people came out saying, "Oh, I really love all this NASA and space stuff; I play KSP as well," or "I have a lot of interest in recent NASA missions," and so on. So the space theme kind of gripped the company, and we've been going down that road ever since. Hubble was the first product that we put that kind of name behind. Obviously the connection was to the Hubble telescope, and the rough idea was: your galaxies and stars are your data, and we help you look at it.
A: I know the space theme is sometimes real life, like Hubble, and sometimes it's Star Wars. I remember seeing Thomas demoing Cilium with, you know, Star Wars and the Death Star and everything.
B: That's another internal theme in the company, and the eCHO logo is also heavily inspired by Star Wars, so we kind of have several parallel themes going. I think copyright issues are slightly easier with space-related subjects, though.
A
It's
true,
I
saw
a
really
flattering
comment
there
from
noel,
saying
I'm
getting
a
tgik
like
vibe
here
I
that
we're
hugely
indebted
to
the
joe
vader
and
the
ex
heptio
folks
for
yeah.
I
mean
we're
very
much
inspired
by
them.
So
that's
great,
oh
michael
asking
what
is
tgik?
It's
tgi
kubernetes,
thank
god!
It's
kubernetes
and
they
do
a
live
stream
every
friday.
I
think
it's
a
little
bit
later
than
this
one,
so
you
can
catch
it
after
hours.
A: All right, so we've talked a little bit about Hubble. I think it'd be really great if you could show us what it looks like, so if you can share your screen, let's see some Hubble components running on a Kubernetes cluster.
B: Sure, let's do that. Hopefully the demo gods and the Zoom gods are all going to be happy today.
B: Okay, so the two things that I will be showing primarily are my terminal and the web browser. We'll get to the web browser later, but we'll start with the terminal. You'll notice I have three tabs: one reserved for running hubble observe commands, one reserved for if we ever edit policies (we'll get to that), and one for port forwarding (again, we'll get to that). So if you ever see the terminal suddenly change, pay attention to the bottom.

B: I have deployed a sample suite. It doesn't really matter what it is; you can look at your own applications, or you can deploy something like the Google Cloud microservices demo, which has a whole suite of microservices. I just deployed something that we use a lot for internal demos.
B: If we look at what's going on inside kube-system, we see that the majority of it is standard GKE stuff; if you use a different managed Kubernetes, you might get a slightly different collection of components. The things of note here are the Cilium deployments. As I've already told you, we have three nodes, so we're going to have three cilium pods, one for each node. We're also going to have three cilium-node-init pods, again one for each node, and you'll notice there are only two operators. People often get confused by that, but basically the operator just runs two instances for redundancy, so whether you have two nodes, three nodes, five, or ten, we'll still run two operators. So: one init and one agent for each node, and two instances of the operator. The other two things you'll notice are that we're also running hubble-relay and hubble-ui; we'll get to those a little bit later.
B: Okay, so inside each of those cilium pods there is a hubble binary and a cilium binary, so we can look at the status of both. Cilium is what folks are mainly familiar with, and Hubble is something that we've fairly recently started shipping packaged together with it; I believe 1.8 was the first version where we did that. You can see that the Hubble server is enabled, all is well, and traffic is flowing. Basically, what the Hubble server component inside Cilium does is look at the flows, the network traffic that passes through that cilium agent, and extract it out of eBPF into user space.
B: So we can act on it, we can observe it, we can potentially even make decisions on it, but more importantly we can monitor it and see what's going on. So let's do that: instead of doing hubble status, I'm going to do hubble observe, and again, if you're not familiar with these commands, we try our best to include good contextual help.
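A minimal sketch of the steps shown so far, assuming you exec into a Cilium agent pod; the pod name here is illustrative, but the agent DaemonSet label and the cilium/hubble subcommands follow the Cilium docs of this era:

```
# Find the Cilium agent pods (one per node):
kubectl -n kube-system get pods -l k8s-app=cilium

# Exec into one agent and check both binaries (pod name is made up):
kubectl -n kube-system exec -it cilium-x7k2p -- cilium status
kubectl -n kube-system exec -it cilium-x7k2p -- hubble status

# Stream the flows that this one agent sees:
kubectl -n kube-system exec -it cilium-x7k2p -- hubble observe
```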
B: Actually, no, you know what, the default output is better to see in columns. We can see that things are talking to each other, and it appears that the jobposting and core-api pods are running on this node; we can tell because they show up here. Correct: this agent will only see traffic for the pods that are running on this node, plus the responses they get back from external endpoints.
B: So occasionally you will see the source being something out on the internet, because that's a reply flow. Okay, so using Hubble like this is very powerful, but also potentially difficult, because you need to know which node the pods are running on; you need to log into the correct pod and then run these commands, and that becomes very tedious. That's where hubble-relay comes in. So if we look at the kube-system namespace again: essentially, what does the relay component do?
So
the
hubble
observe
command
can
be
used
to
talk
to
the
relay
and
it
can
be
used
to
talk
to
the
local
node.
The
api
they
expose
are
exactly
the
same,
but
the
implementations
differ,
so
the
local
node
will
only
get
the
local
node
information,
whereas
if
you
ask
the
same
question
to
relay,
say:
hey
give
me
the
last
10
20
flows,
really
it's
going
to
collect
it
from
the
entire
cluster.
B
B
B: I can use the server flag with hubble observe, and it will connect to that relay instance instead of the local instance. So let's just ask it how it's doing: we'll say hubble status. Okay, looks like things are okay, and you'll see there's one new line that appeared that wasn't here before, "connected nodes: 3 out of 3", which means the relay is actually connected to all three nodes.
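A sketch of pointing the CLI at the relay from a local machine, assuming the hubble CLI is installed locally; the service name, port mapping, and --server flag follow the Cilium docs of this era, so verify them for your version:

```
# Forward the hubble-relay service's gRPC port to the local machine:
kubectl -n kube-system port-forward svc/hubble-relay 4245:80 &

# Ask the relay, rather than a single agent, for status and flows:
hubble status --server localhost:4245
hubble observe --server localhost:4245 --last 20
```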
B
Let's
observe
the
last
20.:
okay
font
is
a
little
bit
too
small
I'll
scale
it
down
I'll
I'll
blow
it
back
up
when
we're
ready
to
read
stuff.
But
here
you
can
see
the
difference
because
we
now
can
see
stuff
that's
happening
on
the
entire
cluster,
not
just
on
that
particular
node.
B
A: I don't know if anyone else saw it, but I just saw another pop-up about Zoom. If it goes wrong, the livestream will keep going and I will send you another Zoom link, so apologies in advance if we get interrupted a little bit by Zoom. I thought we'd fixed that since last week; actually, I see Neil is online, so if he has a way of upgrading this call, please, Neil, do that now. All right, sorry Glib, do go ahead.
B: Oh, it's okay. All right, so this is potentially interesting, but not as interesting as it can be, so let's maybe take it up a notch. First, let me get the font size back to roughly where it was; I know it's going to wrap, but that's okay. So "the last 20" is very interesting, but you have to time it, and you only get the most recent snapshot; let's follow the flows instead.
B: So now we can look in real time at what's going on inside the cluster: who's talking to whom, plus some basic information like whether or not the flow was forwarded, which observation point it was seen at, some TCP flags. Pretty good, right? Now we can see what's going on inside the entire cluster; that's awesome. However, the amount of information we're presented with for each flow is quite minimal.
B: Let's follow again, but this time in JSON, and I will pipe it through jq just so we get slightly nicer output. I'll let it run for a couple of seconds and then stop it here, because I think we've got enough. So now let's take a look at one of these events, and you'll see that Hubble has not only the basic networking information, such as the IPs and where the traffic was going from and to.
B: Is that the real difference? Yes, because the source of this information is actually inside Cilium. Whenever a packet goes from place A to place B inside a cluster, that identity information is not encoded in the packet; it resides with Cilium, based on identities, based on who was talking to whom. Cilium actually implements all of this, and because the Hubble server runs inside Cilium, we are able to get at this information.
B: So now look at this. Instead of seeing that the source and destination were these meaningless IPs (they're not truly meaningless, you can go look them up and find which pod this is, but you'd have to execute other queries alongside), here you can clearly see the source identity. It was this Cilium ID; it was from namespace default; it was from app recruiter; it was from this specific pod. So if you have multiple pods running for this application, even though the labels are going to match and say "this is from this particular application", Hubble will specifically tell you which pod it was. And you can see where it's going: it's going to core-api in this namespace, and again, it was routed to this specific pod.
B: So this is how you can use this much richer metadata for troubleshooting and debugging your cluster, or just observing what is happening and who is talking to whom. Maybe you notice that even though you have ten instances of the same pod running, for some reason they're not load balanced properly and everything is going to the same pod, things like that. That kind of information you can't really tell without using something like this.
B
That
is
a
good
question.
Initially,
the
actual
full
payload
was
also
included
in
here.
It
was
causing
some
sort
of
storage
issues,
and
so
we
took
it
out.
It
is
now
in
the
process
of
being
put
back
in
not
for
sort
of
all
use
cases.
You
will
have
to
turn
it
on
explicitly.
I
believe,
but
we
want
to
put
back
the
actual
payload
like
what.
What
is
the?
What
is
all
of
the
things
that
have
we
have
seen,
and
you
can
look
at
that.
B
It's
obviously
something
that's
very
useful
for
our
internal
developers
as
well.
When
we're
developing
features,
we
want
to
make
sure
that
stuff
works
properly,
but
in
extreme
or
in
more
complicated
use
cases.
It's
also
very
nice.
So
we
we're
going
to
put
the
raw
payload
back,
but
you're
going
to
have
to
ask
for
it
explicitly.
A
Got
another
question
and
if
the
host
is
handling
traffic
in
xdp,
how
do
we
observe
the
traffic
through
hubble
and
I
think
the
answer
to
that
is
ebpf
right?
We
are
able
to
hook
into
I.
I
don't
know
if
this
is.
If
hubble
is
particularly
hooked
into
the
xdp
path,
I
imagine
it
is,
or
rather
the
psyllium
is.
B
I
think
one
of
the
psyllium,
especially
the
kernel
devs,
may
be
able
to
answer
that
better
than
I
can,
but
the
sort
of
the
basic
answer
to
a
question
like
that
is:
if
psyllium
can
route
it
hubble
can
see
it
yeah,
that's
kind
of
the
the
basic
framework
that
you
can
think
about
right.
That
makes
sense
yep,
okay,
so
this
is
all
been
fairly
interesting
so
far.
So
this
is
a
lot
of
information
that
you
get
on
top
of
sort
of
a
basic
flow.
B
Let's
step
it
up
again,
right
so
I'll
clear
this
and
let's.
B: I included this link over here: you can go to the Cilium docs and read about L7 visibility. Basically, Cilium is able to recruit some help from an L7 proxy to actually decode L7 data and include it in the Hubble logs. So let's take a look at this policy.
B
This
basically
says
that
we
want
to
specifically
look
at
traffic
on
port
53
on
any
protocol
to
cube
dns,
and
we
want
to
map
the
sicilian
specific
data.
You
can
match
only
specific
patterns
again,
if
you're
talking
about
including
l7
proxy
you're,
talking
about
a
little
bit
more
cost
associated
with
it
but
anyway.
So
this
is
going
to
look
at
all
the
port
53
dns
circuits.
So
what
I'm
going
to
do
is:
let's
go
back
to
where
we
had
a
follow.
A: You know what, we're about to get kicked off this Zoom call; I'm so sorry about this. So what we're going to do is end this Zoom call, I'm going to create a new one, and I'll send you a link. I apologize to everyone who's watching; we will be back. I'm going to keep the stream running, so don't go anywhere, and Glib, I will send you a link momentarily. All right.
A
This
is,
this
is
live.
You
know
this
is
this.
Is
live
live
streaming
for
you?
I
guess
right
so
new
meeting.
Let's
put
that
in
full
screen
and
I'm
gonna
just.
A
Okay,
that's
live
streaming
for
us,
okay
yep,
so
you
were
about
to
show
us
a
level
seven
policy
in
place.
I
think
yes,
that
is
seven.
B
Okay,
so
let's
get
back
into
it,
so
what
I
propose
we
do
is
we
go
back
to
go
back
to
running,
hubble,
observe
and
follow
mode,
so
we're
just
gonna
tail
the
whole
cluster.
I
think
we
may
be
able
to
step
it
up
a
bit
by
only
just
to
reduce
the
noise
a
little
bit.
B: Maybe we can observe only port 53. You can see that all of these flows now only show port 53, and these are generally going to be everywhere: for the most part, all pods are going to need DNS if they're doing anything networking related, because the way you address things inside Kubernetes is generally at the DNS level; you say, "hey, I want to talk to myapp.mycluster.whatever.kubernetes". So I'm just going to leave this running, and then over here I'll show you the DNS visibility policy: port 53, and we want all the information. I'm just going to apply this.
B: The network policy is created, Cilium is going to do a bunch of magic on the back end, and we'll just let it do its thing and wait for some more flows. Aha, now look at this. I'm just going to let a couple more accumulate, and then I will kill it and we can look at it.
B: That should be good enough. Oh, actually, you know what, in addition to that you can also filter by type, because you will still see L3/L4 data as well as the L7 data; I'll filter down to only the L7 data and then JSON that. Okay, so here we go: for the flows that we actually have L7 data for, there's a new entry in this JSON, called l7, and it provides much more information for the flow.
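The filters he combines here can be sketched as follows, assuming the hubble CLI is pointed at the relay; these flags follow the hubble CLI of this era, so double-check them against `hubble observe --help` for your version:

```
# Only DNS traffic, streamed live:
hubble observe --port 53 --follow

# Only flows that carry L7 records (DNS/HTTP/...), as JSON, piped to jq:
hubble observe --type l7 -o json | jq '.flow.l7'
```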
B: Okay, so let's go back to observing stuff. I'll remove the port 53 filter, and I'll remove the JSON as well; let's just look at all the L7 stuff that floats around. I'll make the font smaller temporarily so it doesn't wrap.
B: I know you can't read that very well, but we'll blow it up in a minute. So let's go in here; I've prepared another deployment. Okay, now I have to blow it up. Live demos! Okay, so this is running from my public repo on GitHub; it's a service called ab-chain.
B: It's a very simple binary that calls itself. It runs multiple replicas, in this case two, and they basically keep calling each other. One will start the alphabet and say, "hey, please continue": it starts with "a" and sends it off, then somebody receives it and says, "okay, it's 'ab'", and sends it back out, then "abc"; it's basically chaining like that, and we're going to be able to see that in the flows as well.
B: This part is just configuration of how noisy they are; if you really want to stress the cluster, you can ramp this up and they'll be super noisy. The only interesting things in here are, first, that they're exposed on port 3770 (no idea why I chose that number, no idea), and second, that whereas I've already shown you L7 visibility based on a policy, Cilium can also support L7 visibility based on annotations attached to entities such as, in this case, a deployment. This is more surgical, more targeted visibility, because of the added cost and so on; sometimes you just want to say, "okay, I know the port for sure, or I know the protocol, and I want this very specific visibility." So in this case I'm deploying this with L7 visibility already enabled for ingress on 3770 on TCP, and I want to see HTTP.
A
Relates
to
a
question
that
we've
had
about
latency,
so
I
think
one
of
the
you
know
there
is
some
impact
on
latency.
If
we
were
to
measure
absolutely
everything.
B
Yes,
yes
and
this
syntax
again,
you
can
find
more
documentation
on
psyllium
docs.
You
can
just
punch
in
l7
visibility
and
you'll
be
greeted
with
several
pages
on
these
annotations
and
things
like
that.
So
this
the
syntax
like
what
are
the
components
of
the
syntax?
What
else
do
you
support
sort
of
all
this
stuff?
Okay?
So
that's
enough
talking,
let's
just
let's
just
look
at
it,
so
I
will
apply
before
I
apply.
B: I want to make sure... okay, we're still running the observe, and I will apply, if my fingers can type, the ab-chain deployment. Now let's look at what's happening over here.
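The annotation-based visibility just deployed might look roughly like this. The deployment name and port come from the demo, but the annotation key has changed across Cilium versions, so treat it as a sketch and check the L7 visibility page of the Cilium docs for your release:

```
# Ask Cilium to parse ingress traffic to this deployment's pods on
# TCP port 3770 as HTTP, without writing a full network policy:
kubectl annotate deployment ab-chain \
  policy.cilium.io/proxy-visibility="<Ingress/3770/TCP/HTTP>"
```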
B
Again,
trying
to
share
with
extremely
large
font,
so
let's
take
a
look
at
json,
since
it
will
it'll
be
let's
play
better
here
we
go
bingo.
So
again,
let's
look
at
the
entire
flow
from
two.
You
see
that,
as
I
said,
a
b
chain
talks
to
itself,
although
what's
actually
interesting,
somebody
from
psyllium
can
enlighten
me
why
that
is
reserved
and
managed.
B
It
may
be,
because
it's
a
new
endpoint
that
it
hasn't
been
able
to
sort
of
propagate.
Let
me
let
me
just
try
it
again
to
see
if
the
cache
was
populated
yeah.
So
I
think
there
was
just
a
minute
when
psyllium
caches,
weren't
populated
with
the
destination
information.
Now
you
can
clearly
tell
that
the
service
ap
chain
nope,
let's
keep
system.
B: Apologies; there we go. Now you can tell that ab-chain actually talks to itself, but it doesn't talk to itself as in "hey, localhost, port foo"; it actually shoots the request back out into the cluster, saying "hey, send this request out". It may get it back itself, or it may not; in this case it didn't, so a different pod got that request. And you can see the HTTP-level information.
B: This matters if, say, your internal business infrastructure revolves around some custom headers, which is fairly common for internal HTTP-based applications. You can see that this was the reply: it took one millisecond, the status was 200, all is well. And if we keep scrolling up, we'll see the request right before that; you'll see that it received the letters so far and was just told to continue. So the point of this is just to illustrate that you can even get HTTP-level visibility as well.
B: Now I want to show you the UI, which is even cooler. So what I'm going to do is go back to the port forward, kill it, and do a different port forward, to hubble-ui: I'll forward my local port 12000 to port 80 of the service in this cluster, open that, and we'll be greeted with the Hubble UI.
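A sketch of that step; the service name and port follow the Cilium chart defaults of the time, so verify them with `kubectl -n kube-system get svc`:

```
# Forward local port 12000 to the hubble-ui service:
kubectl -n kube-system port-forward svc/hubble-ui 12000:80

# Then browse to http://localhost:12000
```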
B
Yes,
so
part
of
part
of
the
appeal
of
the
ui
are
the
more
visual
things
and
one
of
the
things
we
can
do
is
per
namespace,
so
we're
looking
at
default
right
now.
We
will
actually
graph
what
is
going
on
inside
of
that
namespace
and
we'll
even
do
pretty
icons
for
sort
of
well-known
open
source
products
such
as
kafka
and
zookeeper.
B
We
see
that
a
b
chain
has
also
at
one
point,
talked
to
something
called
unmanaged
3770,
and
this
is
primarily
to
do
with
the
fact
that
I
don't
think
sulaim
cash
is
caught
up
quite
fast
enough.
Once
more
data
comes
in
and
those
flows
are
getting
pushed
out
of
the
ring
buffer.
This
will
disappear
and
we'll
see
like
a
more
complete
picture
of
a
b
chain
talking
to
itself.
B: That's it. You can immediately tell that these pods are not, you know, publishing sensitive information to S3 or doing anything silly; they just happily talk to each other in the default namespace. That's it. Okay, so now, real quick, because I don't want to take too long with the demo (I've already taken quite a bit of time, and I want to save a little for questions and just more dialogue), let's step it up even more and look at something like an actual policy.
B
So
at
the
moment,
actually,
you
know
what
I
will
remove
the
dns
visibility.
B
It's
we
don't
need
it
anymore,
okay,
so
let's
look
at
actual
network
policy,
so
by
default,
everything
inside
kubernetes
can
talk
to
anything
inside
the
world
outside
world.
There
are
no
restrictions,
which
obviously
doesn't
sit
well
with
a
lot
of
people
that
run
applications
on
kubernetes.
They
want
to
make
sure
that
they,
you
know
things
are
locked
down
a
little
bit
more
than
not,
and
only
whitelisted
traffic
flows
through
the
cluster.
B
So
here
is
such
an
example
of
whitelisting
traffic
inside
of
the
cluster
and
the
details
about
these
you
can
find
on
psyllium
docs.
I
know
kubernetes
in
general
has
a
lot
of
yaml
and,
if
you're
not
familiar
with
it,
it
might
be
kind
of
difficult
to
follow.
But
I'll
just
I'll
tell
you
what
this
does.
This
basically
allows
egress
to
core
api,
so
anybody
can
talk
to
core
api,
which
is
good
and
then
to
make
our
demo
applications
work
because
we
heavily
rely
on
dns.
We
need
to
whitelist
dns
inside
of
the
cluster
else.
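A sketch of such a "lockdown" allow-list; the app=core-api label is hypothetical and the kube-dns labels follow common defaults, so adapt both to your cluster:

```
cat <<'EOF' | kubectl apply -f -
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: lockdown
spec:
  endpointSelector: {}              # applies to every pod in the namespace
  egress:
  # Everybody may talk to core-api:
  - toEndpoints:
    - matchLabels:
        app: core-api
  # ...and to kube-dns, since the demo apps rely on DNS:
  - toEndpoints:
    - matchLabels:
        k8s:io.kubernetes.pod.namespace: kube-system
        k8s-app: kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: ANY
EOF
```

Because a Cilium policy that selects an endpoint switches that endpoint to default deny for the selected direction, every egress flow not listed here gets dropped, which is exactly the behavior the demo shows next.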
B: I'll turn it back on real quick, and I'll turn off my IPv6... okay. We can enable the UI again later if we want to. So you can see that at the moment pretty much all of the traffic is forwarded; you see on the right that ab-chain is still spamming its stuff, and everything is forwarded. So let's go ahead and apply this, what I call the lockdown policy, and let's see the impact of that on the cluster.
B: So a lot of this is DNS, DNS, DNS... okay, I'll just stop it here, because I see some denies. You see, in this particular example, core-api was trying to talk to elasticsearch, and that was actually dropped; and it was not dropped by accident, it was specifically dropped because policy denied it. And you'll see here a bunch of things that were forwarded, and they were forwarded because, as you can clearly see, in those cases the destination was core-api, and we explicitly whitelisted the traffic towards core-api.
B: And what is ab-chain doing? It should be getting all of these... yeah, exactly. You see that ab-chain gets its DNS requests whitelisted: this one was going to kube-dns in kube-system on port 53, and that was allowed through by the policy that we created. However, the subsequent requests were dropped, because they didn't match the policy. And now we can see a lot of this in red: dropped traffic inside the cluster. core-api was trying to talk to somebody, and again, that doesn't match the policy, so we dropped it.
B: Yeah, so basically the goal of this demo was to demonstrate that we can layer things together. You can start basic, with just Hubble enabled (I believe it's enabled by default, but you may want to disable or enable it as you wish), at a single-node level.
B: You can then say, "okay, this is useful, I want it for the entire cluster", add the relay to the mix, and get all that information in the CLI. You can step it up even more and say, "okay, let me add the UI on top of this", because humans are pretty visual, and just visualize all of the stuff that's going on inside the cluster.
B: I imagine this is to do with just how GKE operates in general. Also, by default, if you don't see DNS or something like that: often we will hide kube-dns, because everybody is going to talk to kube-dns, and that generally pollutes the graph and makes it very confusing. So by default we check this box that says, "okay, don't show that traffic; I don't care, everybody can talk to DNS, I don't care."
B
Yeah,
I
think
it
depends.
I
think
it
depends
what
you're
doing
this
is
something
that
can
be
very
useful
to
the
users
as
well
again,
if
you're
running
it
as
a
platform,
you
know
the
first
immediate
thing
that
comes
up
is
access
and
rdac
immediately,
so
like
is
it
okay
for
you
to
see
other
namespaces?
Is
it
not
and
so
on,
and
as
we
evolve
these
products,
like
those
kinds
of
things,
come
up
more
and
more
often
right,
yeah.
A: And I think there are use cases around setting up your network policy: you might use this to visualize what traffic is flowing, then build a network policy and make sure that the policy allows the traffic that you're expecting, and you can see whether or not that traffic...
B: ...can flow or not, yeah, exactly. So we were tackling that from two perspectives. I can maybe do a quick plug: we have an online policy editor for Cilium, something that can actually help you visually.
B: It helps you create a policy and make sure you don't make any mistakes, such as forgetting DNS, because we know that big, unwieldy YAMLs can be difficult if you don't have them internalized. And then the second thing we're also thinking about and trying to do is, instead of applying the policy outright, sort of trialing the policy and seeing the output of that through Hubble: "hey, we forwarded this flow, but we would have dropped it if you had actually said to enforce this."
B
Yes,
it
can,
I'm
gonna,
stop
sharing,
yes,
it
can.
If
you
go
to
the
again,
you
can
go
to
cilium
docs,
you
can
search
for
metrics.
You
will
be
greeted
with
two
different
sets
of
metrics.
One
psyllium
itself
can
do
metrics,
but
hubble
can
do
metrics
on
the
actual
traffic.
So
you
can
get
information
and
we
we
even
have
like
examples
on
the
psylliums.
B
Maybe
I
stopped.
Maybe
I
stopped
sharing
too
soon.
Let
me
let
me
go
back
and
just
re-share
this
real,
quick,
the
home
page
of
silhouette
hubble,
will
greet
you
with
a
readme
that
actually
illustrates
some
of
this
metrics
and
monitoring
so,
and
I
believe
we
even
have
example
dashboards
for
this,
that
you
can
grab
for
for
grafana
and
then
tweak
to
your
liking,
but
we
can
visualize
things
like
http
requests,
latency,
request
response
and
so
on,
yeah
in.
B
It
allows
you
a
lot
of
flexibility
like
you.
Can
you
don't
have
to
use
the
cli
as
a
user
directly?
You
can
actually
build.
On
top
of
that,
you
can
also
use
the
hubble
api
on
the
psyllium,
like
we
have
this.
This
uses
a
protobuf
api
on
the
cilium
agent.
So,
if
you
wanted
to,
you
can
actually
take
that
that
proto
definition
yourself
and
write
a
totally
different
tool
that
matches
sort
of
your
environment
or
your
use
case.
Yeah.
A
One
thing
I
was
just
going
to
mention-
and
I
I
think
it's
in
in
github
at
the
moment
on
on
the
main
branch
or
master
branch,
but
not
because
we
haven't
released
1.10
yet
but
thomas
showed
us
internally,
a
really
nice
demo
of
setting
up
those
prometheus
metrics,
adding
in
it
there's
going
to
be
a
new
psyllium
cli
command
for
just
kind
of
one
stop
shop,
setting
up
those
metrics
and
the
grafana
dashboard.
They
looked.
B
Very
easy
yeah
I
can
also
we
can
also
plug
the
sort
of
parallel
effort
that
the
thomas
has
been
driving
and
the
team
as
well,
which
is
the
sodium
cli.
At
the
moment,
the
silium
cli
primarily
runs
inside
of
the
inside
of
the
node
as
well.
So
you
saw
me
at
the
very
beginning
I
exact
into
the
pod,
and
I
said:
hey
psillium
status.
Are
you
okay?
B
We're
now
do
using
the
hubble
model
as
well,
where
we
want
to
build
that
binary
on
your
actual
client
machine.
You
can
connect
to
the
cluster
and
you
can
ask
it
all
sorts
of
questions,
including
things
like
cluster
wide
psyllium
status,
enable
disable
hubble
enable
disabled
metrics.
A
lot
of
these
things
like
it's.
I
think,
if
anything,
the
hubble
proved
is
that
people
really
like
the
flexibility
of
having
client-side
binaries
that
can
connect
to
the
cluster
and
do
xyz,
as
opposed
to
like
executing
into
individual
node
and
doing
a
bunch
of
things
that
way.
A: I think so. And Christopher has also been posting some useful answers. There was a question about how Hubble parses the HTTP headers from the packet, and I think there's a blog post that answers that question. So, yeah.
B
The
the
sort
of
10
000
foot
answer
to
that
is,
we
forwarded
it
to
an
l7
proxy,
which
was
envoy.
We
dissected
that
way,
basically.
B
So
he
frequently
yarno
contributes
patches
upstream
as
well
and
works
on
this
kind
of
stuff
yeah
and
then
again
you
know
I'm
just
a
maintainer
on
hubble,
like
I'm
representing
a
much
larger
team
like
we
have
several
maintainers,
I'm
not
the
only
one
and
just
the
overall
surveillance
team
has
been
doing
awesome
work
for
the
last
several
years.
A
Well,
thank
you
very
much
for
for
showing
us
those
demos
and
and
giving
us
such
a
great
tour
of
like
what
hubble
can
do.
Just
kind
of
a
quick
skim
through
make
sure
we
haven't
missed
any
questions
that
are
critical.
If
we
have
missed
any
critical
questions,
come
to
the
psyllium
slack
channel
and
there
are
tons
of
folks
that
are
really
happy
to
to
help
and
that's.
B
Yeah
so
psyllium
yeah,
psyllium
slack
actually
has
a
hubble
channel
so
feel
free
to
join.
That
and
myself
and
other
maintainers
and
just
community
folks
are
hanging
out
in
there.
Answering
questions
feel
free
to
jump
into
the
code.
Even
you
know,
clone
psyllium
hubble
out.
Look
at
some
good.
First
issues
see
if
you
want
to
help
out
we're
always
open
to
ideas
and
help
from
the
community
yeah.
A: Actually, something I meant to mention at the top of the show was that we now have over 6,000 people in that Slack community.
A: Awesome, so come and join us there; you're probably already there if you're with us here today. All right, with that, I think that is pretty much all we have time for. Thank you again, Glib, for such amazing demos, and thank you, everyone on the YouTube chat, who's been throwing questions at us and getting involved. We will be back next week at the earlier time; we're alternating time slots between a European-friendly time and a sort of West Coast-friendly time.