From YouTube: TGI Kubernetes 047: Cilium (CNI)
Description
Come hang out with Kris Nova as she does a bit of hands on hacking of Kubernetes and related topics. Some of this will be Kris talking about the things she knows. Some of this will be Kris exploring something new with the audience. Come join the fun, ask questions, comment, and participate in the live chat!
Whenever we start streaming, right where you cut over, there would just be this big, cool song. So I don't know — if folks have an idea for good intro music for TGIK, let us know, and maybe we can figure out a way to get that up and running. Also, my blood sugar's pretty low today, so I'm going to be drinking a lot of really sugary tea, so I'll probably be really excited today. So anyway, let's see what folks are saying in chat.
Everybody knows this is my favorite part of the week. So let me scroll down here at the bottom and see what folks are saying. Lumati is first in line. I want to check metrics on how many times Lumati is the first person to join TGIK and say hello. Happy Friday to everyone. It really is my favorite part of the week to watch folks join, and it's always nice to see you, Lumati. So it's really good to see you again.
Olov says: yes, I second that. Suresh says hi, joining from Hamburg. Jonas says hi from Dubai, it's midnight there. Thanks for joining, Jonas, and thanks for joining at midnight in particular; that's pretty intense. I don't know if I would be able to stay up that late. I'm usually out by like 8 or 9 o'clock every night. Let's see, Cynthia says happy Friday. George is on the Heptio YouTube account; he says: how's everyone doing today? This is George, I'll be helping Kris with notes and general stuff today. Lumati says it dawned on them today that TGIK is just over a year old.
Now maybe someone at the Heptio office can hit the gong to commemorate the anniversary. Yeah, if anybody at the Heptio office is watching, totally grab the gong and come bring it in here, and we can hit the gong. I don't know if that breaks our sales rules or not; whatever, we can totally hit the gong. Also, I start my vacation after today, so if you accidentally grab a beer as well, I would not be upset.
So anyway, we have Dharmit saying hello from India. Lumati says the gong used to crack me up when it would go off randomly during the first few episodes. I lost my spot; here we go, okay. So Tobias says hello from Berlin. Lumati: happy Friday, George. Greetings from Copenhagen. Yeah, that was classic. Evening from Scotland.
That's cool. Hi from the Starbucks Support Centre; hi, friends at Starbucks. Jason, good to see you. Miguel says hi from Mexico. Hi everybody, happy Friday. I'm reading off all of these because it's my favorite part of the week and I actually get really happy. "Nice to see you, Kris," our new person says. Hi folks, hola from NYC. Morning from New Zealand; Karl, thank you for joining from New Zealand. Timmy, happy Friday. Howdy, Kris. Syed, hello from London. George says another Heptio update, all right.
Everyone, if you want to help with the note-taking, okay. So yeah, I should pull up the HackMD, but I wanted to finish reading how many folks joined. Good to see you all. Cheers from Toronto. Hi Roy, hello from Romania, and hello from San Francisco. Happy Friday; hey from Seattle, right next to me. A fest? I didn't know there was a good fest going on right now, that's exciting. And Autumn says hi from the Netherlands. Thanks, folks, for joining from all over the world.
Do you watch it live or on your DVR? I'm just trying to get an idea of what our audience looks like as we start to move forward, and also we have some exciting surprises for TGIK coming up in the future. I can't say too much right now about it, but having an idea of which media channels people are using would be helpful. So Todd says: laptop and big-screen computer. So yeah, I'm going to
A
Let
folks
answer
that
question
kind
of
concurrently
and
without
further
delay,
I'm
gonna
switch
over
to
my
screen
and
we're
gonna
look
at
the
hack
in
teen.
You
start
going
through
this
week
in
case
there's
my
screen
and
then,
where
did
my
there?
We
go
okay,
so
we
want
to
go
to
the
hacking
D,
which
is
here
and
almost
closed
everything
else,
so
that
we
can
start
fresh.
A
So for those folks at home who don't know, we use this really cool tool called HackMD, and I've started lobbying for this in some of the Kubernetes open source calls. This is a collaborative markdown editor, so it's like a hybrid between — excuse me, too much sugary tea — it's like a hybrid between Google Docs and the GitHub markdown editor. So George and myself are both able to come in here and edit this, and if you have anything you want to add as well, this is a good way for you to share ideas, ask questions, and add notes as we go through today's episode, and this becomes a resource for everyone. So it's my responsibility, it's George's responsibility, it's your responsibility to help take notes if there's something that you would like written down, or you feel folks would benefit from knowing. Okay, so we usually do This Week in Kubernetes, which we're going to go through.
Updates. So what I was wondering is if folks could drop in the chat any of the new features in the latest release of Go that look exciting or confusing or scary to them. Mention the feature, and mention what your thoughts are on it, and we can bring that up together and sort of look at some of them and talk about some of them for our Go update today. So anyway, Week in Review. The first one we have here is Kubernetes Deconstructed, so let's pull this up.
Okay, coming up next. Oh, there's an associated GitHub project with this one. Okay, and I'm assuming this has got a Caddyfile; it's got some Python scripts, layers, assets, a demo. Okay, this is just the resources from the demo that I think we're looking at in the video. So that's always cool. It's always nice when people share their demo work. Okay, let's move on to the next one, but before we do that, let's see what folks in the chat were saying. I've got to scroll pretty far up, man.
People are chatting today, okay. Miguel says watching on a second monitor. Rory's watching on their desktop computer. Okay, so we have a lot of computer users. Lots of laptops; I see three laptops, four laptops, an iPad, another computer, another laptop. Okay, so Autumn says it all reminds me of Etherpad, which is older, I think, than Google Docs. Paul says he was watching TGIK from an iPhone a lot previously, sometimes watching while moving around, but got a laptop and is using that more now. So yeah, it looks like folks are mostly using computers and laptops, which is helpful. Thank you for sharing that wonderful piece of information about how you're watching TGIK. Okay, okay. Next up we have kube-hunter. Hola, Lumati says. Oh yeah, okay, so Mark's here with the gong. Also, I didn't realize you were watching TGIK. Okay, do you want to hold it so I can hit it really quick? I think it's necessary.
Okay, so coming up next we have kube-hunter, an open-source tool for Kubernetes penetration testing. Oh, this is exciting. I love good pen testing tools; they're always really exciting. I used to work on some pen testing tools when I worked on a HIPAA-compliant thing years ago, when I first started coding, and it was my favorite part of my job: trying to figure out clever ways to access the system and exploit random bits of information. But it's cool that there's an open-source framework or tool out there.
Pen testing is really big for people who need to be PCI compliant, if you want to make sure that your systems aren't vulnerable and that you are staying up to date with vulnerabilities and CVEs whenever they happen. So that's cool; that's really exciting that there's a Kubernetes-specific one, and thanks to our friends at Aqua. I'm not going to sign up for the blog, but thanks to our friends at Aqua for bringing this to our attention. Coming up next we have "SUSE switches to kubeadm." I wonder what this is.
Okay, so it looks like it's an open source blog from openSUSE, which I pronounce "open-soos." I've heard people say "open-soo-zay"; feel free to tell me what you think in the chat, but I'm probably going to keep saying "open-seuss" because it reminds me of Dr. Seuss. Anyway: exciting new direction, a stable base for the future, reviewing our initial premise, the world has shifted. Okay, so this is all talking about Kubernetes. "How can I help?" What is this?
Okay. "It's been over a year since we started the Kubic project, and now we're looking back over the last months and evaluating where we've succeeded." Okay: "not only is Kubic MicroOS now a fully integrated part of openSUSE." Okay, so this is an update on the Kubic project, not kubeadm. I think — did this say kubeadm? "Kubic switches to kubeadm," okay, cool. So I guess they're using kubeadm to help set up Kubic. I'm guessing; somebody correct me here. Oh, here's some kubeadm language here.
"This was one of the primary motivators for creating Velum, and a key part of the SUSE CaaS Platform stack. However, these days there are multiple tools, including the increasingly pervasive kubeadm, which is used both standalone and as an integrated part of many larger solutions." Okay, cool. So this is just talking about how openSUSE and kubeadm are now working well together, and from the sound of the title that George put in there, it looks like we have openSUSE support in kubeadm, which is cool. Pat on the back to our friends at SIG Cluster Lifecycle.
Also, sorry I've been missing a lot of meetings lately. Like I said, I've been crazy busy at work, but I'll come check it out and say hey one of these days. Okay, so next we have eksctl. Oh my gosh, I love ASCII gophers; every time I see you I get so happy. Okay, so let's see what we have here: first major release, 1.0.
Oh, so if you come to the Weaveworks eksctl repo, it looks like they've come out with their first 1.0. I want to look at the code here. I was looking at Go code and, of course, we all know that when I look at software for the first time, the first thing I want to find is the main.go, which should be in here somewhere. And yes, they're using Cobra. And what do we call first? Oh, it's just a help command. Okay, so anyway, eksctl 1.0.
Oh, thanks to our friends at Weaveworks. What's next? We have a Sonobuoy office hours, if you missed it, so you can come here and check it out. I think we're still waiting on doing Sonobuoy on TGIK, although I really want to demo it one of these days, and I know we keep saying that. As soon as I get back from my vacation and we wrap up the CNI series, I might do like a Heptio office hours to TGIK series or something. So hats off to our folks at Sonobuoy. Docker Hub maintenance on the 25th of August.
Okay, so folks who are pulling Docker images from Docker Hub: it looks like Docker Hub has a planned outage, so be prepared for that. Make sure your systems teams know about it. Feel free to share this link, and maybe shoot an email off to your engineering department,
saying: hey, just a heads up, you might not have heard about it, but it looks like Docker Hub is going to have some planned maintenance. So that's good to know. I know I store a lot of my container images in Docker Hub. K8s 1.13 alpha is out. I can't believe we're already on 1.13; how does this keep happening to me? And it looks like we have an alpha branch here that folks can start tinkering with, trying to find new and exciting ways for us to improve Kubernetes.
So, I think — I want to say this right — Alicia pinged me in Slack, I think last night or the night before, and said: friendly reminder, kops 1.10 is out. I think this is another testament; I mentioned this when we were doing the kops episode a few weeks ago. You know, kops is, I wouldn't say very far behind, but notably behind the rest of Kubernetes. We just had the 1.13 alpha released for Kubernetes, and kops is now on version 1.10.
1.10 dropped like two days ago, and I actually used 1.10 to deploy the cluster that we're going to be looking at later today when we actually get into Cilium, the whole reason we're here today. So next we have the Cilium install guide. Okay, I have all of my Cilium notes here, but I want to make sure there's nothing else I'm forgetting. Okay, this is all Cilium stuff, so I guess I'm going to move over to Cilium.
I didn't see too much in the chat about Go, so maybe we can do an episode later when I get back, or maybe Joe can do it as well, where we look at some of the new features of Go. I know it's caused quite a ruckus on Twitter, so I'm actually kind of curious, and again, I just haven't had time to check it out yet. So anyway, let's talk about Cilium.
So if you go to the Cilium Read the Docs site, the first thing I wanted to point folks to is this Kubernetes kops installation guide. Let's see, what is George saying? George says: if you have episode ideas, please file them as an issue and we will check them out. And yeah, so that's the GitHub issue tracker here. Let's see if I can open this up. It opened up here, but I can move it up here. Well, there it is. So we already have 43 of these issues.
Sometimes we try to do small miniseries about it, and I come through this thing like Tuesday morning sometimes and just kind of scroll through and see if anything looks exciting, or if I've heard anything about one of these projects, and then ultimately we end up making a decision. Our inspiration for this week was that after we did Calico last week, Jessie Frazelle on Twitter followed up on one of my comments about Calico and said: make sure you check out Cilium. I came in here, and sure enough,
somebody had opened up an issue on Cilium. Let's see if I can find it — one closed, yeah, this was Lumati. So you can see that we'll actually come through, find your episode idea, actually do it, and close it, which is exciting, so feel free to contribute there. It is also a good place just to ask questions, or if you have any thoughts about the episode, put them in there as well. Okay, let's close some of these tabs so that we don't get super confused.
Okay, okay. We're at K8s 1.13, kops 1.10, and here's our installation guide, and I'll leave our HackMD up here in case we hit anything along the way. Okay, so I'm going to do a little bit of storytelling here and talk about my week getting Cilium up and running. The reason I wanted to start off with the Kubernetes kops installation guide here is to let folks know that, out of all of the avenues in which I looked at demoing Cilium this week, this is the one I ultimately picked, although I did try others.
I did try to stand up Cilium with kubicorn, and we're going to actually look at that whole process and see where I got off into the weeds, and see where I had a little bit of trouble with TLS certs. We're going to talk about how Cilium is attached to etcd, and we're going to learn about the etcd operator that's coming out in the next version of Cilium, which should solve a lot of these concerns. But I still think it's an important engineering rabbit hole to go down.
So we're going to look at this other cluster I set up, not using kops, and kind of poke at that before we actually take Cilium for a drive on our kops cluster that I already have up and running. So anyway, if you want to demo Cilium, I strongly suggest you start here, or start with minikube. In our case, I didn't want to do minikube because I wanted to see how a CNI was working in a cloud provider.
In this case, we are running in Amazon. Actually, I can show you that: I'm in the terminal right here, and we have a very small cluster running in Amazon. The reason that I created such a small cluster is that I wanted to have a really solid idea of where all of our work was going to be running, so that we can actually look at the underlying file system. So we have one node, so that I can SSH into that node, and as we start to deploy things on Kubernetes, we can
be confident that it's going to run on that node, and we can actually see what Cilium is doing behind the scenes. When I first did this, I had three nodes up and running, and I found myself constantly SSHing into different nodes, because who knows where my pod would get deployed? The Kubernetes scheduler would do a great job at distributing that workload, and it got to be pretty annoying, so I knocked us down to one.
So if you want to replicate anything that I'm doing, it might be handy to just have one node as well. Okay, so let's go back to our docs, and I want to walk through the kops cluster setup that I did, so folks can see exactly how I got my cluster up and running. Okay, so here is what I did: I exported the cluster name, something like cilium.k8s.local.
This suffix here at the end is important, because it tells kops that we want to use the gossip protocol. What is somebody saying — how I got it running? Okay, thanks. Anyway, that suffix tells kops to use gossip.
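A minimal sketch of the setup I'm describing might look like this. The cluster name, S3 bucket, and zone here are hypothetical stand-ins, not the exact values from my terminal:

```shell
# Any cluster name ending in .k8s.local tells kops to use
# gossip-based discovery instead of requiring a DNS zone
export NAME=cilium.k8s.local

# kops keeps all of its cluster state in an S3 bucket
export KOPS_STATE_STORE=s3://my-kops-state-store

# Create a small, single-master, single-node cluster
kops create cluster \
  --zones us-west-2a \
  --master-count 1 \
  --node-count 1 \
  ${NAME}
```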
So then I ran this kops create cluster command. The KOPS_STATE_STORE I have exported, so if I go to the terminal I can check.
Sorry, let's pipe env to grep. I'm looking for NAME, and I hope I'm not showing secrets here; I don't think I am. And we can grep for kops as well. Okay, so you can see the kris-nova kops state store, and then I have a kops feature flag, and then I have this old environment variable from when I worked at Microsoft, still in my bashrc, apparently. Anyway, those are the environment variables I have set. Let's go back here, and I set node
count equal to 1, which solves the problem of ensuring that our pods are running on the node we SSH into. We're just running some t2.mediums for both our master and our nodes. And for master zones, the documentation originally had this running in each availability zone; again, I wanted one master, because it's just going to make the demo a little bit easier, so I bumped this down to just using one zone, and then I used the same zone for our nodes as well. I had to look up this image ID.
So if you pull up the documentation here, you can run this command, and I'll just go ahead and run it; it actually works right out of the box. Where did it go? Here it is. Let's go back here and run this command, and give us a little bit of space. There's our image. Going back to our documentation, I plugged that in here. I used --override, which we were talking about on an open source call the other day.
This is a handy flag for kops users, because it allows you to override arbitrary values in the spec at creation time. We told it we want to use Kubernetes 1.10.3, and we apply some labels and give it a name. Lumati asks: which distro and kernel do you use on your control plane and worker nodes? Lumati, that's an excellent question. So again, a little bit of storytelling: on Wednesday or Thursday this week, I tried to deploy Cilium, and I tried to deploy it on Ubuntu 16.04.
That was running the Linux kernel 4.4, and that was incompatible with Cilium, and it was kind of hard to debug, because the pods would exit within moments of starting. So I actually discovered this really handy command here that I'm going to show folks, that I did not even realize was a thing, but it's really, really handy if you have pods that are only running for a split second.
It's kubectl logs with the --previous flag. This will actually get the logs from the previous pod run for whatever deployment or DaemonSet or StatefulSet you have running there, which is handy, because that was able to tell me that Cilium was actually exiting because I was on an older kernel.
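The crash-loop debugging trick above can be sketched like this; the pod name suffix here is a hypothetical example:

```shell
# Find the crash-looping pod
kubectl get pods --namespace kube-system

# Fetch logs from the *previous* (crashed) container instance,
# which is how you catch errors from pods that exit immediately
kubectl logs cilium-x7k2p --previous --namespace kube-system
```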
Anyway, to answer your question concretely: we're running CoreOS. I forget which version of CoreOS we're running, but we can uname it in just a second.
Both of those are viable options for Cilium, anyway. Great question, though, Lumati, and I'll drop the uname -r or uname -a into the terminal as soon as we SSH into one of our worker nodes; if I forget, feel free to just remind me. Okay, so let's go back to our documentation here. That was our kops create command. Then you have to do this slightly weird edit step, where you come through, do kops edit cluster, and paste this snippet into the YAML.
Let's try that, yeah, and let's try this again. Okay, that's much, much better for me. And you can actually see that we have some Cilium pods running, and that's good; that means we have Cilium up and running on the system. And I guess now would be a good time to... let's see what we have here. Oh, George is just taking notes. Okay, now would be a good time to SSH into the node so we can actually start looking around, so I'll drop that uname now.
So we want to go into our Amazon console and get some information about our master and our worker node. I have a lot of instances running. Sorry — my billing, not sorry, actually, I'm a little sorry — but I'm going to turn both of these clusters off when I get done today, so they're not going to be running for very long. So the kops one that we want is this master here.
So originally, the Cilium documentation suggests that you deploy with topology=private, which is a fabulous feature that I coded into kops long, long ago. It allows all of your EC2 instances to run in a private subnet, so they actually don't have a public IP address; the only way to gain access is through the bastion. And that was, again, causing a little bit of a problem for me.
I wanted this demo to be quick and dirty, so I removed the private topology flag from my kops create cluster, and now we have a public-facing cluster. So anyway, we're going to SSH into our master. Really quick, it looks like folks are saying something; let's see what it says: "It's working on Ubuntu 16.04, but you have to upgrade the kernel to something greater than 4.8." Thank you, Simon, and yeah.
If we looked in the repo, we could actually find that requirement, because there's a lot of C code in the repo, which we're going to look at a little bit later in the episode, and yes, I've confirmed it as well. Okay, so let's open up a new tab here. I wish that it would preserve the font size, because I feel like I do this every episode: scale it and then resync it back. Anyway, we can SSH, so core
is the user here. Shout-out to the Heptio field engineering team, who helped me figure out that "core" is the name of the user. And our public IP address, and we can say yes. And yeah, so what does it say here? CoreOS Container Linux stable 1807.0. And let's do a uname -a, and that should give us a little bit more information. It looks like we're running a 4.14 CoreOS kernel here, so hopefully that answers your question,
Lumati. Let's see — Daniel says the 4.8 kernel has the basic feature set from the BPF side. Yes, and 4.9 is also LTS. Okay, so 4.9 is long-term support; that's probably why that one is relevant. Okay, so this is our kops cluster, and if we actually go in, we can look at /etc. Let me CD up first: change directory to /etc/cni/net.d, list it, and there's our CNI
conf. We looked at a lot of this in our last episode. Here is flannel, for instance, and if we cat a different one, like the Cilium CNI conf, we should get the exact same shape of output as well, and that's because the CNI runtime actually execs out to the CNI plugin and expects it to speak some sort of standard API.
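To poke at this on a node yourself, you can list the CNI config directory. The exact filenames vary by plugin; the names in the comment are illustrative:

```shell
# CNI network configs live here; the kubelet reads them in lexical order
ls /etc/cni/net.d/

# Inspect whichever conf file is present; the name varies by plugin,
# e.g. 10-flannel.conflist or a cilium-specific .conf file
cat /etc/cni/net.d/*.conf*
```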
So that's why, regardless of what CNI plugin you're running, you're going to get the same shape of output, because it's all standardized. Anyway, you can see more about that — how CNI is set up and how the kubelet registers CNI — in the previous episode that we did on Calico. I'm not going to go into a bunch of it; I'm just going to assume folks remember that, or at least can go look it up.
if
you
have
any
questions,
just
drop
it
in
the
chat,
as
we
kind
of
go
through
this.
So
anyway,
we
have
C
and
I
up
and
running
on
the
host
system.
So
I'm
gonna
take
a
slight
detour
and
we're
gonna
go
and
talk
about
just
because
I
think
this
is
a
good
way
to
teach
folks
about
see.
Liam
we're
going
to
talk
about
how
I
tried
to
set
up,
see
Liam
and
Cuba
corn
in
what
I
learned
about
how
helium
is
working
along
the
way.
A
How we're going to do that is we're going to go back to Amazon and look at the kubicorn cluster I created, and we're going to SSH into this master here, which also has a public IP address and is running Ubuntu 18.04. And notice how I have this nice side-by-side here, so we're actually going to be able to do stuff on both the kops deployment and the kubicorn one as well.
Okay, so SSH ubuntu@ our public IP address there, and then I will sudo up, and from here I think I should be able to do kubectl get po. Oh okay, very good. Does that work here? Okay, good. So we have kubectl on both clusters. Actually, I only changed my terminal colors here so you can tell them apart; just try to remember that kubicorn is on the left and kops is on the right. Anyway, let's do alias k=kubectl, because I'm super lazy.
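That shell setup is just a convenience; something like this, with EDITOR included since it comes up again a bit later in the episode:

```shell
# Save keystrokes for the rest of the session
alias k=kubectl

# Make `k edit ...` open Emacs instead of the default editor
export EDITOR=emacs

k get pods --namespace kube-system
```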
Here we have our Cilium pods, which are running in a DaemonSet. If you remember when we looked at Calico last week, there was a DaemonSet that went and configured our host system for us; Cilium's deployment is the same way. The issue that we ran into is that the Cilium deployment here has this config map, so we can do a k get configmap --namespace kube-system. And this is where Thomas, who's been super helpful, and who gave me a hands-on, personal walkthrough of BPF,
helped me debug this for a few hours yesterday. We ultimately just kind of discovered that TLS was a big pain, and it just made more sense to go with kops, because really we just want the Cilium cluster up. Anyway, we have this Cilium config, so we can edit this really quick with k edit. Actually, before we do anything: export EDITOR=emacs. That's a big deal.
So now we can do k edit configmap cilium-config --namespace kube-system and take a look at this config map. If you go and read the Cilium docs, we were messing around with this etcd config here, and you can sort of see this snippet. And if you deploy Kubernetes with kubeadm, like kubicorn does, and you look at the kubeadm config file — actually, I can just do that really quick; it's under /etc on the kubicorn master —
we actually don't specify any of the etcd override parameters here, so you get an etcd that is TLS-encrypted and has auth turned on. Cilium needs an etcd datastore to do some of its magic behind the scenes, so part of installing Cilium involves pointing it at the etcd that the Kubernetes control plane also uses. In this case, we ran into some trouble because the TLS certs were only valid for localhost.
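One way to see that kind of certificate problem for yourself is to inspect the names a cert is actually valid for. A self-contained sketch: generate a throwaway cert that, like the etcd cert that bit us, is only valid for localhost, then read its Subject Alternative Names. The file paths are arbitrary, and `-addext`/`-ext` need OpenSSL 1.1.1 or newer:

```shell
# Generate a throwaway self-signed cert valid only for localhost,
# mimicking the etcd serving cert that caused our TLS trouble
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/demo-etcd.key -out /tmp/demo-etcd.crt \
  -days 1 -subj "/CN=localhost" \
  -addext "subjectAltName=DNS:localhost,IP:127.0.0.1"

# Print the names the cert is valid for; connecting to it by any
# other hostname or IP will fail TLS verification
openssl x509 -in /tmp/demo-etcd.crt -noout -ext subjectAltName
```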
And if you remember in last week's episode, when we looked at Calico, there were two different ways you could run Calico: either using the existing etcd that Kubernetes uses, or deploying it with its own. I think we deployed it with its own. And then, just a heads up for folks: according to Thomas and folks in the Cilium Slack channel, which we're going to talk a little bit more about later, there is an etcd operator that solves all of these problems.
That's coming out, I want to say, in the next version of Cilium, if not very soon. That way you don't have to go in and manually do all this and fight with TLS like I was doing. Anyway, just a heads up for folks: I kind of went down that rabbit hole, and that's where I got stuck.
If you're looking at doing any sort of custom Cilium deployment, feel free to ping me or Thomas, and we can talk about what we learned when we went through this and maybe some ideas we have. We got pretty creative with curl, trying to figure out how to authenticate and verify that our TLS was actually causing the problem we ran into. And just for good measure, let's do a get pods --namespace kube-system.
If you remember, earlier I talked about how it does a kernel check right off the bat. The program would start, it would do a kernel check and exit right away, and the pod would go into a CrashLoopBackOff, and using that --previous flag was helpful to actually get the logs that said: oh no, you need a kernel greater than 4.8. Then it checks the BPF requirements, and the BPF filesystem is going to be mounted automatically. Okay, this is another thing I forgot to bring up.
If you go to the Cilium documentation, there is — let's see if I can find it —
you've got to mount the bpffs filesystem, and I think it's for data preservation. I'm speaking out of turn here, so somebody from the Cilium side can help me out a little bit more, but I think that in order for some sort of BPF state to persist, you want to make sure you have this filesystem mounted. Otherwise, every time you deploy a new pod, it has to reset something along the way. So this was a command that I definitely wanted to run
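The mount itself is a one-liner on the host; a sketch of what the docs describe, run as root on each node:

```shell
# Mount the BPF filesystem so Cilium's BPF state survives
# agent restarts instead of being rebuilt every time
mount bpffs /sys/fs/bpf -t bpf

# Verify that it is mounted
mount | grep /sys/fs/bpf
```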
whenever I installed Cilium using kubicorn. So make sure you know that as well, so you don't get this error right here, or this warning right here, that I have highlighted. And so it mounts it, and then it says: unable to set up key-value store, err: open blah blah blah, no such file or directory. Okay, and this is actually not the error I wanted to show you; this is just me hacking on it.
Moving forward, we're going to transition over to our kops cluster, and we're going to look at deploying a pod with Cilium, and then I want to do the network policy piece, look at some of the Cilium CRDs, and go through some of the tutorial stuff here as well. So I guess to start off,
one of the things I wanted to demo was the fact that Cilium comes with this really cool command line. How we can do that is: kubectl get pods --namespace kube-system. And I guess I can do this from my host as well, so let's actually do this: k get po --namespace kube-system, and that way we're going to save ourselves
the network hop when we exec into one of these pods here in a second. So there are two pods running in the DaemonSet, one for each of the machines in our cluster; you remember we had one master and one node. These are the two pods here, and you can actually run this with the wide output and see where things are running. So run our previous command and add -o wide. This is handy if you ever need to figure out where your pods are doing anything; you actually get an IP address here.
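A sketch of that step:

```shell
# -o wide adds the pod IP and the node each pod landed on,
# which you can map back to the EC2 instances in the console
kubectl get pods --namespace kube-system -o wide
```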
It's kind of cut off right now; let's see if I can expand this a little bit. Maybe if I do this... okay, yeah. And you get the IP address, and then you can go and map that back to Amazon and see what's going on. Okay, so if we get our pods again, we can exec into this first Cilium pod, and how we do that is
K,
exact,
interactive
TTY,
the
name
of
our
pod
and
the
name
of
our
cube,
says
name
of
our
cube
system,
the
name
of
our
namespace,
which
is
cube
system,
and
we
want
to
run
bash
okay.
So
here
we
now
we're
exact
into
the
pot
and
you
can
actually
just
run
see
Liam
out
of
the
gates
in
like
poof,
here's
a
Nicole,
Burke
man
written
and
go
so
this
was.
This
is
really
easy
and
got
me
kind
of
excited
about,
like
all
the
cool
things
see,
Liam
can
do
behind
the
scenes.
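A sketch of that exec, with a placeholder pod name (substitute one from `kubectl get pods -n kube-system`):

```shell
# Exec into a Cilium agent pod with an interactive TTY...
kubectl exec -it cilium-xxxxx --namespace kube-system -- bash

# ...and inside the pod the cilium CLI is available right out of the gates.
cilium --help
```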
I guess when you install Cilium, it learns about Kubernetes as it bootstraps itself, and it will actually go through and do all the magic for you of registering itself as a CNI provider and reconfiguring the kubelet to recognize it as a CNI provider. And if that wasn't enough, you get all of these extra goodies here as well. I was on the phone with Thomas and I asked him if there was a good way to gain some visibility into what's actually going on behind the scenes with CNI in Kubernetes, something I kind of struggled with with Calico, and Thomas mentioned there is this really cool command here I wanted to show people, which is like tcpdump for Kubernetes. So this was really exciting: you can run cilium monitor, and then I found out there's a verbose flag here as well, and you can run this and you actually get... oh, there we go, okay. Now, I don't know why it was hanging before, but each one of these records here is super verbose, and you get all this information about what Cilium's doing behind the scenes. We're going to learn a little bit about how Cilium works when we actually look at the BPF stuff a little bit later in the episode.
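From inside the agent pod, the command being demoed is simply (flags may vary by Cilium version):

```shell
# Stream datapath events, roughly "tcpdump for Kubernetes".
# The verbose flag expands each record.
cilium monitor -v
```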
I just want to give folks an idea of what's actually going on in Kubernetes firsthand, and let folks know that we do in fact have a handy-dandy CLI tool here. Okay, let's see what else we have here. Sometimes I will just get bored and run a command-line tool and be like, oh, what's all the cool stuff it can do?
So if anybody has anything they would like for me to check out and explore, feel free to drop it in the chat, and let's go talk about how Cilium works and talk about endpoints and identities really quick. So let's go back to our docs, and I think I had it here: cilium endpoint list is what Arvind suggests. So yeah, I think I'm gonna run that command, but I want to run it after we go through the docs and actually learn what an endpoint is first. That's definitely one of the ones on my list here; thank you, Arvind. So let's see, where is my documentation? Okay, so we had an install guide that we looked at. This is a really cool article that I noticed as I was starting to get into Cilium, and it's sort of talking about Cilium and BPF in general, which is probably important to understand.
So it's been nice over the past couple of days to learn about BPF and how it works. Anyway, this is a really cool article if you want to check it out, and then of course, the second I saw that Jerome was on here, I was like, okay, I gotta come read this thing. So I'm going through that and learning about BPF, and I wanted to understand what these endpoints are to Cilium and how they are relevant to Kubernetes.
So let's see if we can find the policy code. I thought I'd linked it in here, but maybe not; let's just go look and see if we can find it. There was really good documentation on Cilium endpoints, so let's just search here and see what we can see. Endpoints... searching... concepts, component overview of the Cilium agent, the Cilium CLI, we've looked at that; Linux kernel, BPF, the Berkeley Packet Filter, which is what BPF stands for, and this talks about...
A
Yes,
this
is
what
we
were
just
talking
about
earlier,
not
in
points
but
null
event,
and
this
just
talks
about
how
we
need
a
kernel
greater
than
4.8.
It
talks
about
how
you
need
a
key
value
store
to
use
it
I
want
endpoints
there
we
go
endpoints,
okay,
I'm,
gonna
kind
of
do
a
little
bit
of
reading
the
docks
here
on
line,
because
I
think
this
is
an
important
concepts
for
folks
to
get
as
we
move
forward.
So
Celia
makes
application
containers
available
on
the
network
by
assigning
them.
Ip
addresses
multiple
application.
A
Containers
can
share
the
same
IP
address,
so
this
is
important
as
we
expand
our
replicas.
It
looks
like
they're
going
to
be
grouped
in
the
same
in
point
group
using
the
same
IP
address,
which
is
handy.
A
typical
example
of
this
model
is
kubernetes
pod,
which
would
you
can
scale
that
horizontally
using
because
all
application
containers
would
share
common
addresses
are
grouped
together
what
Celia
refers
to
as
an
endpoint.
So
an
endpoint
is
a
logical
grouping
and
again
I'm
counting
on
folks
at
sea
Liam
to
make
sure
I'm
getting
my
vernacular
right
here.
A
So
an
endpoint
is
a
logical
grouping
of
one
or
more
pods
in
a
replica
set
and
the
rest
of
the
see
Liam
Network
kind
of
treats
those
pods
the
same
way
and
that
is
relevant
to
kubernetes,
because
that's
how
we're
going
to
identify
these
and
how
we're
gonna
be
able
ultimately
be
able
to
enforce
things
like
Network
policy.
Remember,
network
policies,
this
ability
to
govern
which
pods
can
and
can't
talk
to
each
other
based
on
labels,
which
I
have
a
little
bit
of
a
tutorial.
We're gonna go through it at the end of the episode. It gives an example of setting up a network policy with Cilium and actually compares the Cilium network policy to the Kubernetes network policy, which is the pattern that we also saw in Calico: we have the Kubernetes NetworkPolicy, which is a very simplistic object in comparison, and then we have this more complex, more feature-rich network policy that is specific to our CNI plugin. In this case it's called a CiliumNetworkPolicy.
I'm going to keep reading about endpoints here: allocating individual IP addresses enables the use of the entire layer-4 port range by each endpoint. I'm gonna skip ahead. The default behavior of Cilium is to assign both IPv4 and IPv6 addresses to every endpoint; however, this behavior can be configured, and the same behavior will apply with regard to routing rules, load balancing, etc. And you can see address management here. One other point I wanted to bring up: Arvind is saying something in the chat: an endpoint is equal to a group of containers, i.e. just a pod in k8s, which is what unites pods. Okay. So we should read about the identity here as well. For identification purposes, Cilium assigns an internal endpoint ID to all endpoints on a cluster node. The endpoint ID is unique within the context of an individual cluster node.
So I think what we're saying here is that Cilium is going to assign an ID to the endpoint, and that is how we're going to be able to recognize it and do finer-grained control over which pods can talk to other pods. And for folks at home, it's important to remember that whenever you schedule a pod in Kubernetes, that's when we call out to the CNI plugin, and the CNI plugin's responsibility is to set up our network for us. So let's do our cilium endpoint list and see which endpoints we have running in Kubernetes. We can go back here; remember, we're exec'd into a pod here, and we can do cilium... I think that's... nope, I mean, if you look here, I got back to my host. Okay, so let's run our exec command again. Now we can do cilium endpoint list, and we can actually see that we have some endpoints here. Let's see if I can expand this and make it a little bit prettier. Okay, why is my terminal just not wanting to cooperate today?
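The command in question, run from inside the agent pod:

```shell
# List the endpoints managed by this node's agent. Each row shows the
# endpoint ID, the security identity, the pod's labels, and the
# IPv4/IPv6 addresses Cilium assigned to it.
cilium endpoint list
```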
A
Let
me
get
out
of
this
one
here:
exit
exit
exit
and
let
me
zoom
out
bambam
and
then
resize,
okay.
That
makes
it
a
little
bit
easier
for
folks
treat
okay.
So
here's
our
first
endpoint
and
I
noticed
earlier
when
I
was
poking
around
at
see
Liam.
If
we
do
we
get
into
the
other
pod,
we
actually
see
different
set
of
n
points,
so
I
think
we
did
this
first
one.
So
let's
try
to
do
the
second
one.
A
A
So
this
is
important
to
call
out
that
we're
seeing
endpoints
to
find
on
the
node
level,
and
here
it's
a
lot
more
interesting
I'm,
assuming
the
reason
it's
more
interesting
is
because
this
one
that
we
just
accept
into
is
actually
running
on
a
node
which
actually
has
workloads
running
on
it.
So
we're
actually
getting
to
see
a
lot
of
these
endpoints
here
and
I
know
I
just
closed
my
terminal,
but
I
wanted
to
reopen
it
here
in
a
different
window.
Did I grab... yes, I did. I want this one. I grabbed my kubicorn node; I want my kops node. Nope, I did that backwards. There we go. And we can do sudo bash and go to /var/run/cilium, and if you come in here, there's the state directory, and this is really cool. When I saw this, it obviously reminded me of /proc on the Linux filesystem, but we're actually seeing these endpoint IDs here. And if we go back here, I bet if we look we can find 13949... if we come back here, sure enough, we have 13949. So we have an endpoint defined, and I'm assuming this is all stored in etcd, since that is the key-value store that Cilium uses. We have an endpoint defined, which is how Cilium is grouping together its containers with a unique ID, and then in this directory...
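On the node itself, the state the episode pokes at looks roughly like this (the ID 13949 is specific to this cluster):

```shell
# Cilium keeps per-endpoint state on disk, one directory per endpoint ID.
sudo ls /var/run/cilium/state

# Inside an endpoint's directory: generated headers and the BPF object.
sudo ls /var/run/cilium/state/13949
```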
We have some really interesting things going on here. For this endpoint, we can actually see that we have a couple of files: this first one is an object file, bpf_lxc.o, we have this configuration file, and we have lxc_config.h, which is a header file. And I know what you're thinking: this is all written in C.
Okay, so here we have a Prometheus pod, and I'm assuming we have another Prometheus pod here as well. Yep, okay. So what he's saying is... oh, this is really cool. Okay, this is really awesome; thank you for bringing this to my attention, Arvind. So we have this pod, which has identity 2274 within endpoint 13949, and we have the same identity here for this Prometheus pod, but it has a different endpoint.
Okay, Arvind says it's based on the labels. So everything in Cilium is label-based; that's really cool. I am curious about these Prometheus pods now; I just want to see what's going on, so we can connect the dots here. So let's do... I love hacking. You guys, I love the fact that I just get to hack at work on Friday; this is Friday fun. Also, I like talking to myself, so the fact that I'm just in this room talking to myself is also a lot of fun.
Okay, so let's do k get pods -o yaml... I'm assuming that is in... yeah, if we look here on our labels: namespace kube-system, good, namespace kube-system, we have two pods. Do we have Prometheus pods here? Where are these Prometheus pods? Okay, so I'm gonna do my show-me-everything-in-Kubernetes command, which I think... is it still defined? No, it's not. I did tweet about this the other day, but you can do k get all --all-namespaces, and this will actually show you every object that's running in your Kubernetes cluster.
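The "show me everything" command:

```shell
# List every object, across all namespaces, in the cluster.
kubectl get all --all-namespaces
```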
A
It's
pretty
handy
for
moments
like
now
when
you
have
a
smaller
cluster
and
you
just
want
to
see
all
the
things.
So
this
is
all
the
things
around
my
Cooper,
nice,
cluster
and
I
bet.
If
we
look
in
here,
we
can
find
Prometheus,
something
or
other
I
don't
see.
Prometheus
what's
going
on,
I
was
assuming
cops,
did
something
in
the
setup.
Prometheus
Forest,
but
I
have
no
idea
where
we're
getting
those
labels
from
let's
see
if
we
can
find
any
more
clues
of
here.
Okay, so we have this pod, namespace kube-... oh, it's the kube-dns-autoscaler; okay, that's that pod. Then we have the annotation prometheus.io/port, which explains it. Okay, there it is: it's kube-dns, and kube-dns... I don't know why we have Prometheus labels there; that was confusing me. Okay, so if we come back down here, we have kube-dns, and I bet if we look we can find a kube-dns deployment, and the replica count, of course, is set to 2.

So to review, because I feel like this is important for folks to get right: we have a deployment called kube-dns; in the deployment we have a replica count set to 2; because the replica count is set to 2, we get two kube-dns pods, here and here. When we exec into the Cilium pod, we can do a cilium endpoint list, and we can actually see that we have the two kube-dns pods here and here, which have a matching identity, so Cilium treats them the same because they're in the same replica set, but they have unique endpoints. Furthermore, if we go onto the node that the kube-dns pods are running on, we can see in the /var/run/cilium/state directory that we have two directories for the endpoints, named after the IDs that match the kube-dns pods, and we're about to learn what is in those directories and why it's important. Okay, that was a lot, but I think that connects where the rubber meets the road, going from Kubernetes pods all the way down to a C program on the underlying Linux filesystem. So we're going to come back up about two or three levels mentally and get to some C code. Actually, I don't know...
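That chain, as a sketch (the agent pod name is a placeholder; the deployment name and IDs are from this cluster):

```shell
# 1. The kube-dns deployment declares replicas: 2, so two pods exist.
kubectl get deployment kube-dns --namespace kube-system

# 2. Each pod gets its own Cilium endpoint, but both endpoints share one
#    security identity because the pods carry identical labels.
kubectl exec -it cilium-xxxxx -n kube-system -- cilium endpoint list

# 3. On the node, each endpoint ID has a state directory holding the
#    generated C headers and compiled BPF object for that endpoint.
sudo ls /var/run/cilium/state
```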
We have two header files here; let's open them up. Actually, just because it's a new system that I don't know, I'm gonna be very cautious, and we're gonna check what these actually are... file not found? But I have apt-get install file... oh, it's CoreOS. Okay, I didn't even know about CoreOS; this is my first time ever using a CoreOS image. Anyway, I'm just going to assume they're header files, and if I break my terminal or break the computer, then we'll just restart it.
I'm sure our friends from Cilium who are on the call are happy to explain what these two header files are and how they're relevant to the rest of the C code that we're about to look at. I'm not gonna go very deep into the C code, because I feel like that's gonna take a long time to go through. Yes, Miguel has my Emacs command, but I just wanted to show you that Cilium, right out of the gates, will reject network traffic based on some really interesting features.
A
Here,
let's
see
what
folks
are
saying
about
core
was
Miguel,
says
DNF.
Why
and
salty
max
knocks
daniel
says
BPF
features
the
header
file
is
generated
from
fro
being
the
underlying
kernel
of
available
PPS
features.
Okay,
so
I
think
what
he's
saying
is
we
probe
the
kernel
we
see
which
BPF
features
are
available,
and
then
we
define
some
sort
of
like
configuration
for
the
rest
of
our
C
program
when
we
build
it
later.
A
Okay,
because
the
way
that
BPF
works
is
we're
actually
building
these
small
I'm,
assuming
it's
the
object
file
that
gets
built,
but
we're
gonna
look
a
little
bit
more
later
at
that
after
we
look
at
the
C,
but
I'm
assuming
I
know
some
sort
of
bytecode
is
generated
and
that
generated
by
it
code
has
to
be
compiled.
So
I'm
assuming
it's
gonna
reference
these
as
well,
based
on
what
the
kernel
supports.
I
have
a
feeling.
That's
what's
going
on
Daniel.
A
If
you
want
to
+1
that
or
not
or
correct
me,
please
feel
free
to
Paul,
says:
Corollas
purposely
doesn't
have
a
package
manager
so
app
to
install
or
anything
but
could
drop
in
emacs
binary
there
Todd
says
yeah
Kouros
is
very
stripped-down
use
more
or
less
cat
grab
than
VI.
It's
about
your
best
options.
You
all
don't
want
to
see
me
open
on
VI
on
TJ,
okay,
that
won't
just
be
embarrassing
and
daniel
says.
Yes,
that
is
correct.
Yay
I
got
something
right
and
Paul
says
also
I
think
there
might
be
Nano
on.
A
Oh
there's,
Nano
encore
les
I
love
nano
nano
RC
for
life.
So
let's
see
what
we
had
here
noted
config,
no,
no,
no,
not
found
dang
it.
It's
okay,
I
can
use
cat
unless
and
grep
like
that's
totally
fine
I'm,
not
even
you
said
if
you
guys
are
very,
very
lucky
okay.
So
where
was
it?
Let's
go
up
a
directory
here
we
were
again
I'm
now
kind
of
curious.
What's
going
on
here
we
have
this
net
def
config
dot
H.
We
have
a
couple
of
other
object
files.
A
Oh
there's
a
log
I
love
a
good
log.
Bpf
features
dot
log
kernel,
config
not
found
okay,
so
yeah
there's
some
sort
of
EPF
log
going
there
as
well
Oh
policy
sad
face
because
an
you
know:
that's:
okay,
I'm
gonna!
Actually
try
miguel's
Emacs
command
just
because
I
am
all
about
blindly
dropping
commands
on
cloud
servers
because
I
feel
see
if
to
do
it,
because
it's
in
somebody
else's
data
center.
Do
you
know
if
not
found?
Okay?
So
no,
he
maxumcorp
less.
It
is
a
sad
day
for
this
girl.
A
Let's
go
and
let's
look
at
the
C
code
in
the
sealian
repository
and
hopefully
learn
a
little
bit
more
about
BPF
and
help
you
care
for
X
so
go
here.
I
want
to
go
back
to
my
documentation
and
here's
this
link
in
our
Doc's
sealian
C
code.
A
So
in
this
Celia
repository,
github.com,
sighs,
helium,
slash
helium,
we
have
this
BPF
directory
and
if
you
come
in
here,
you'll
actually
notice
that
this
is
all
written
in
my
personal
favorite
programming,
language
C,
which
really
is
every
programming
language,
if
you
think
about
it
anyway,
the
rest
of
C
liam
is
written
primarily
and
go,
but
there's
this
little
BPF
section
here
that
is
written
in
C,
okay,
so
the
first
file
I
wanted
to
look
at.
Is
this
BPF
LX
c
dot
c
file?
Again, we're not going to go super deep into the C code here, but if you scroll down, you can see that we are actually including quite a bit of the Linux source code here, especially the stuff around networking. If you've ever written any Linux network code before, you're probably familiar with some of these lib files, particularly this one, our good old friend IPv4, that we've been using forever, and there are some other ones as well.
If the packet isn't in the established direction and it's destined within the cluster, it must match policy or be dropped. When it says "must match policy," I'm assuming this is actually where we either approve the network transaction or reject it. And I'm curious, Daniel, maybe you know the answer to this: what exactly are we checking here with this policy_can_egress? Is this actually checking the Cilium network policy, the Kubernetes network policy, or something entirely different altogether?
A
This
is
just
an
interesting
piece
of
code.
I
wanted
to
pull
up
and
take
a
look
at
what's
going
on
here.
Oh
sorry,
My
partner
is
texting
me
I,
guess
they
are
out
of
the
wilderness
anyway
interesting
this
here.
If
you
can
give
us
a
little
bit
of
clarity
here
Daniel,
so
it's
checking
the
sealing
policy.
Okay
and
we're
gonna
run
a
sealing
Network
policy
example
here
after
we
get
done
looking
at
these,
these
files
here
on
our
file
system.
So the Cilium pods are compiling this code and spitting out this output for each one of our endpoints, which again is unique to a pod, and then we're using this... I'm assuming there's some tool here that is actually going to call out and run these per-endpoint binaries as needed, but again, Daniel, if you could help us connect the dots there, that would be helpful. Also, Daniel, thank you for joining the call and helping walk me through all of this.
The important thing here is that Cilium is based on endpoints and identities, and you don't see that in some of the other CNI providers like Calico, where we're using iptables rules that are not nearly as built on this concept of identity, or rejecting network traffic at such a low level of the network transaction on the host system. Anyway.
Let's look at running a Kubernetes network policy example, and then look at creating a Cilium network policy example. So here in the docs... now we're going to switch over to network policy and look at how all of that works. Going back into the docs, we have two pieces of code I wanted to look at. This first one here is layer 7 with Cilium, which I was really excited to run live on the air, because I haven't run this yet, and it's totally Star Wars themed. Oh, not the layer-7 one; I opened up the wrong one. This is the one I wanted to open up... this policy... nope, I can get to it from here; it's okay. Actually, you know where it was? It was in Slack with Thomas. Let me jump back into Slack and pull this up really quick.
A
So
speaking
of
slack,
there's
two
leagues
here,
I
totally
have
been
hanging
out
in
the
sealian
slack
all
day
today
and
all
day
yesterday
and
I
forgot
to
mention
in
the
last
episode,
there's
totally
a
calico
slack
as
well.
So
if
you're
interested
in
learning
more
about
these
there's
two
slack
channels
there,
and
let
me
find
this
link
to
this
documentation
that
I
wanted
to
run
through
as
well.
A
A
We
did
this
when
we
did
calico
and
C&I
provider
should
behave
the
same
with
network
policy,
so
this
is
a
good
example
that
we
can
just
sort
of
run
through
in
about
30
seconds
or
left
and
actually
do
like
the
Pepsi
challenge,
where
we
run
the
exact
same
code
and
we
make
sure
that
Celia
is
working
as
expected.
So
if
we
run
through
this
and
I
know,
folks
are
saying
stuff
in
the
chat.
A
So
we're
like
give
me
a
second
here
to
kind
of
like
go
through
this
and
we'll
check
and
see
what
you're
saying
also
still
wondering
where
that
Friday
beer
is,
that
cough
cough
I'm
Tony,
it's
a
cough
cough,
so
how
we
want
to
do
this,
is
we
want
to
run
it
on
my
laptop
here?
So
we'll
create
this
policy
demo?
We
want
to
run
this
pod
and
I'm
going
to
skip
over
some
of
this,
just
so
that
we
go
through
it
a
little
bit
faster.
A
So
we
create
our
engine
ax
pod
and
that's
what
we're
gonna
use
to
validate
that
Network
policy
is
working
as
expected
and
we're
gonna
create
a
service
for
that
by
running
this
exposed
and-
and
now
we're
gonna
run
this
command
here
and
just
demonstrate
that
it
works
and
we're
gonna
run
this
w
get
and
use
q
penis
to
try
to
hit
that
into
next
pad
you
just
created
and
poof.
It
works
so
scrolling
down.
We
can
now
enable
network
isolation
which
this
is
relevant.
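The steps being skimmed are roughly these, adapted from the Calico policy-demo walkthrough (flags like --replicas reflect the kubectl of that era):

```shell
# A namespace, an nginx deployment, and a service to test against.
kubectl create ns policy-demo
kubectl run --namespace=policy-demo nginx --replicas=2 --image=nginx
kubectl expose --namespace=policy-demo deployment nginx --port=80

# From a throwaway busybox pod, the wget succeeds (no policy yet).
kubectl run --namespace=policy-demo access --rm -ti --image busybox /bin/sh
# inside the pod:
#   wget -q nginx -O -
```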
A
We
talked
a
little
bit
about
it
last
time,
but
this
is
how
we
start
to
basically
enabled
kubernetes
network
isolation,
so
kubernetes
network
policy
is
even
relevant
in
the
first
place.
So,
let's
exit
out
of
this
pod
clear
my
screen
will
create
this
empty
network
policy
rule
here
and
that's
how
we
turn
on
network
isolation.
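The empty policy that flips on isolation looks roughly like this (apiVersion may differ with cluster version):

```shell
kubectl create -f - <<EOF
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: default-deny
  namespace: policy-demo
spec:
  # An empty podSelector matches every pod in the namespace,
  # so ingress to all of them is now denied by default.
  podSelector:
    matchLabels: {}
EOF
```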
A
It's
a
bit
of
a
weird
pattern,
but
that's
just
how
we
do
things
in
kubernetes
the
weird
way,
and
now
we
can
run
this
command
and
we'll
Association
to
our
new
pod,
and
we
will
now
timeout
BAM
and
it's
time
you
know
now,
okay,
and
why
that's
time
you
know
I'm
gonna,
look
in
chat
and
see
what
folks
are
saying.
Okay,
so
I
was
scrolling
up
daniel
says
the
C
files
basically
compiled
okay.
We
already
read
that
one.
Our
Veen
says
yes
for
Cades
network
policy
and
there
are
extensions
for
l7
and
dns
space
policies.
A
Oh cool, I didn't realize that Cilium did DNS as well; that's exciting. Thomas Graf says that part of the C code is enforcing both the k8s network policy and the Cilium network policy. Okay, so Thomas brings up a good point, which is that there's a difference between the Kubernetes network policy and the Cilium network policy. The Kubernetes policy is what we're looking at now, and this is just a really basic example to show how it's gonna work and to demonstrate that Cilium is respecting Kubernetes network policy. Then we're going to look at actually creating a Cilium-specific network policy, and look at some of the cool examples of things you can do with Cilium on top of base Kubernetes network policy. Thanks for that note, Thomas. Ian says Cilium watches for Kubernetes network policy; yep, Ian's just validating what I just said. When these are added to Kubernetes, Cilium is notified by Kubernetes and computes the policy based off the rules that apply to each endpoint.
A
So
as
we
change
network
policy,
a/c
liam
is
going
to
be
reactive
to
that
and
is
going
to
update
the
BP
fm
as
needed
to
respect
kubernetes
that
were
policies
which
that's
pretty
cool.
You
know
also
says
the
policy
that
applies
to
the
endpoint
is
converted
to
representation
that
the
data
path
can
understand
from
there
also
wonderful
information
and
thank
you
again
folks
for
joining
and
helping
us
out
as
we
explore
see
Liam
okay.
So
our
request
timed
out.
As
expected,
we
have
network
isolation
turned
on
and
we
haven't
created
a
policy
that
says
otherwise.
A
So
how
this
whole
thing
works
in
kubernetes.
Is
we
create
this
empty
network
policy?
It
turns
on
network
isolation
implicitly,
and
now
everything
is
sort
of
denying
all
and
you
have
to
explicitly
go
in
and
say
no,
please
open
this
up
so
that
we
can
actually
begin
to
use
that
now
and
that's
what
we're
doing
these
next
few
steps,
who
I
told
you
guys
I
was
gonna,
have
a
lot
of
sugar
today,
I
feel
like
I've,
been
talking
really
fast,
better,
have
some
more.
A
Okay,
so
now
we
create
this
network
policy
and
again
have
software
friends
at
ty
guerra
in
calico.
This
is
still
my
favorite
example
of
kubernetes
network
policy
anywhere
on
the
Internet.
So
this
is
one
of
my
favorite
things
to
demo.
So
thanks
for
letting
us
steal
it
for
both
episodes
and
we'll
probably
continue
to
use
it
for
other
CNI
episodes
as
well,
so
that
folks
always
get
a
side-by-side
okay.
So,
let's
out
of
this
pod,
let's
create
our
new
network
policy
here
and
it
wouldn't
be
tgia.
A
If
we
didn't
look
at
it,
yeah
Mel,
so
we
here
have.
We
have
our
match.
Labels
to
run
is
equal
to
nginx,
and
if
you
remember
earlier,
when
we
created
our
engine,
X
deployment
that
label
run
is
equal
to
engine.
X
is
implicitly
created
because
the
name
of
our
deployment
was
engine
X.
So
we
know
that
that
label
is
going
to
match.
So
we
create
this
and
it
says
access
engine
X
created
come
back
here
and
now
we
have
these
two
commands
here
so
I'm
gonna
kind
of
set
this
up.
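The policy being created is roughly this (label values per the demo):

```shell
kubectl create -f - <<EOF
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: access-nginx
  namespace: policy-demo
spec:
  # Applies to the nginx pods (the implicit run=nginx label).
  podSelector:
    matchLabels:
      run: nginx
  # Allow ingress only from pods labeled run=access.
  ingress:
    - from:
        - podSelector:
            matchLabels:
              run: access
EOF
```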
So let's clear the screen and look at the two commands that we have in our buffer here. The first one is our "can't access" pod, and the next one is our "access" pod. We want access first; see, this is why I did this. If you're running the access pod and then we run our wget, we can see... oh, I don't know why I have that copied; let me grab my wget here.
Bam, and you can see that, yes, we do in fact get a response. And when we run our "can't access" pod, which has a different label, it won't match, and when we run our wget, it will ultimately time out. Okay, so let's look at what Cilium is doing behind the scenes. If we come here... I have no idea what changed, but I bet our friends at Cilium, Ian or Thomas or Arvind, can let us know what happened here and if anything changed in our file system. Somehow Cilium went through and actually implemented that network policy, changing things, I'm assuming in this directory, to respect the new network policy we just created, and that happened while we were running all of these commands here.
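The two checks, sketched (run from the host; the wget happens inside each throwaway pod):

```shell
# Labeled run=access: matches the ingress rule, so wget gets a response.
kubectl run --namespace=policy-demo access --rm -ti --image busybox /bin/sh
#   wget -q --timeout=5 nginx -O -

# Any other label (run=cant-access): no match, so wget times out.
kubectl run --namespace=policy-demo cant-access --rm -ti --image busybox /bin/sh
#   wget -q --timeout=5 nginx -O -
```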
A
So
we
can
now
validate
that
see.
Liam
is
actually
respecting
kubernetes
Network
policy,
which
is
good
because
that's
one
of
the
big
things
it's
supposed
to
do
and
we
can
actually
go
and
see
those
network
policies
in
kubernetes
doing
a
heyget
net.
Pull
will
do
all
namespaces,
ok,
and
you
can
see
here
in
the
policy
demo
which
we
sold
from
the
Calico
documentation.
...we have our access-nginx policy, which has the pod selector run=nginx, and then we have this other one, which is how we turned on default-deny for isolation in the network to begin with. Okay, so those are the Kubernetes network policies. Now let's try to get the Cilium network policies up and running and look at some of the other examples there as well. Before we do that, I just wanted to point out for folks at home: you can do k get crd, if you do not know what that does.
It returns all of the custom resource definitions you have in your cluster. Here we have Cilium endpoints and we have Cilium network policies. So what's cool here is you can do a get, I think it's cep for Cilium endpoint, across all namespaces, and you can see we have all these endpoints here, and I'm trying to think... I want to compare that to an endpoint list.
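The CRD commands from this bit:

```shell
# List every custom resource definition registered in the cluster;
# Cilium adds ciliumendpoints and ciliumnetworkpolicies.
kubectl get crd

# cep is the short name for ciliumendpoints, which mirrors the agents'
# endpoint view into the Kubernetes API.
kubectl get cep --all-namespaces
```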
So give me a second here: namespace kube-system, we're gonna grab our Cilium pod, and we're gonna exec into this thing again. Okay: exec -it, the name of our pod, namespace kube-system, bash, and cilium endpoint list. This dreaded terminal sizing again; let's try this. Okay, so we're gonna run our cilium endpoint list, that's a little bit cleaner, and then we're gonna exit out of this and run our all-namespaces query again.
So we can see that we have endpoints defined across our cluster, and a subset of those endpoints are here, and we get a little more verbose information if we run it with the Cilium command-line tool while exec'd into one of the pods. But again, it's important to note that Cilium endpoints have unique IDs for all these different pods that we're running. That's sort of how this whole system works together: using CRDs, using the underlying etcd datastore, and baking all of this into Kubernetes.
A
So
it's
kind
of
cool
to
see
a
CNI
provider
that
actually
uses
a
lot
of
the
underlying
resources
of
kubernetes,
like
Sierra
DS,
to
actually
bring
themselves
to
life,
which
is
pretty
exciting
again.
This
sort
of,
like
echoes
the
sentiment
of
joe's
kubernetes
as
a
platform
platform
message
that
he's
always
talking
about
anyway.
Thomas
has
a
comment
for
us.
He
says
the
go
portion
of
see.
Liam
has
received
the
new
policy
calculated,
what
security
identities
this
mapped
to
and
then
pushed
down
rules
to
the
BPF
map
to
allow
these
identities.
Now it doesn't, but that's cool, because we know that we have this Cilium command-line tool there. And I guess this begs a pretty interesting question, which is: would it be possible to configure the Cilium command-line tool to work from my local workstation here, or do I have to be exec'd into the pod whenever I want to use it?
A
I'm
gonna,
assuming
there's,
got
to
be
some
sort
of
like
centralized
API
server
sed
somewhere
and
I'm
wondering
what
you
would
have
to
potentially
expose
in
order
to
get
the
sealian
command
line
total
working
outside
of
the
cluster,
so
yeah.
If
you
could
give
us
any
information
there,
that
would
be
helpful,
okay
and
last
but
not
least,
we're
already
an
hour
and
13
minutes
into
the
episode.
A
So
I
don't
know
how
much
time
I'm
gonna
have
to
actually
go
through
this,
so
we
might
actually
just
kind
of
go
through
it
really
quickly
here
and
then
we'll
we'll
get
off
for
the
day,
because
I
have
a
10
day,
mountain-climbing
vacation.
That
starts
the
second
I
hit
the
stop
streaming
button
in
YouTube.
Here
I
still
have
to
go
to
the
doctor
and
get
my
hand
checked
out,
but
I'm
getting
excited.
If
you
guys
can't
tell
if
you
folks
can
tell
okay.
So
let's
go
back
to
our
documentation.
A
We
want
to
look
at
for
our
final
little
bit
of
see
Liam
hackery
for
the
day.
Okay,
so
we're
gonna
kind
of
start
at
the
top
of
this.
This
snippet
here
and
you
know
what
I
better
add
this
to
the
hack
MD,
because
I
don't
think
it
is
otherwise
so
I
think
I
come
over
here
and
I
go
Bam
Bam,
so
here's
our
link
and
then
we
want
to
call
this
I,
don't
know.
What's
going
on
a
lot
of
weird
tab,
hinting
see
Liam
tutorial
from
and
so
okay.
A
So,
there's
that-
and
we
can
come
back
here
so
we're
going
to
go
through
this
first
install-
see
Liam,
which
we
already
did
deploy
the
demo
application
in
our
Star
Wars
inspired
example.
This
is
the
real
reason.
I
wanted
to
view
this.
There
are
three
micro-services,
the
Death
Star
and
the
TIE
fighter
in
the
x-wing.
The
Death
Star
runs
an
HTTP
web
service
on
port
80,
which
has
exposed
as
kubernetes
the
Death
Star
has
two
pod
replicas
and
what
is
a
TIE
fighter
do
or
the
x-wing
represents
a
similar
service
on
the
Alliance
ship.
A
We can run kubectl --namespace kube-system get pods, and we can filter on the Cilium label, and then we can actually exec into our Cilium pod and do a cilium endpoint list, which we've already done a few times, and you can actually see that we are getting Death Star and X-wing endpoints there. So I'll do that now, but we'll do it sort of in the context of our cluster here. So, where's our exec... bam. And we know this is our only node.
A
So
we
know
we're
going
to
be
able
to
find
our
end
points
here,
and
you
know
what
I
really
wanted
alias
Seeley
illegal
to
see.
Can
I
like
make
a
pull
request
to
this
helium
repository
that
would
just
so
I
could
just
have
just
to
see
here
instead
of
the
whole,
see
Liam
command.
What
do
you
guys
accept
that
if
I
contributed
see
Liam
in
point
list
and
let's
see
if
we
can
find
any
deaths,
starry
things
here?
Oh
here's,
a
TIE
fighter
but
or
fishin?
Is
he
and
fired?
A
That's
a
great
label
and
class
is
equal
to
x-wing
and
I
bet.
If
we
look
around
even
more,
we
can
find
yep
there.
It
is
here's
our
Death
Star
here
as
well.
Okay,
so
let's
go
back
to
our
docks.
Our
V
Thomas
says
yes,
please
actually
looks
like
there's
been
a
couple
of
comments
in
chat.
Let
me
see
what
we
have
here.
A
Thomas says: Cilium 1.3 on K8s 1.11 will provide nice columns for cilium endpoint. Thomas says: yes, please, I can open up a PR. Yeah, I love contributing to open source. And Ardean says there are util scripts you can run directly from the terminal, in the troubleshooting section. Okay, cool. Any idea, Ardean, where I would find those util scripts? I mean, how to run them; just a directory to check out would be helpful for me to navigate to. So, let's check.
A
We can actually control some of the different layer-seven things. Like, when we hit the Death Star: yes, we do want to accept GETs to the Death Star endpoint /request-landing, but we don't want to accept POSTs there, or something along those lines. So this is a great demo. This is probably my new favorite demo that I've done on TGIK, so thank you for making this fun for us by making it Star Wars-themed; we really appreciate it.
A
I,
don't
know
why
the
dictionary
keeps
getting
pulled
up.
Okay,
so
let's
copy
this
whole
command
and
let's
go
back
here
and
let's
clear
a
screen
so
that
folks
get
a
nice
wait.
Where
are
we
I
want
to
go
back
to
here
and
cue
Bechdel,
exact
x-wing?
That
should
do
it
and
it
says
Shipley
and
in
okay
and
that's
what
we
expect
here
and
then,
if
we
do
the
same
thing
on
the
TIE
fighter,
we
also
get
a
ship
landed.
I'm
gonna
skip
that
for
now
and
then
name
of
saving
time.
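Before any policy exists, both ships can land; the checks she's running can be sketched as (service DNS name assumed from the demo running in the default namespace):

```shell
# With no policy applied, both of these print "Ship landed".
kubectl exec xwing -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
kubectl exec tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
```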
A
So
then
it
says
apply
an
l3
and
l4
policy.
So
when
using
see
Liam
in
point
IP
addresses
are
irrelevant
when
defining
security
policies.
Instead,
you
can
use
labels
assigned
to
the
pods
to
define
security
policies,
we'll
start
with
the
basic
policy
restricting
dust.
Our
landing
request
to
only
ships
that
have
label
org
is
equal
to
Empire.
Oh,
my
gosh,
only
the
Empire
is
allowed
to
land
on
the
Death
Star.
This
is
great
so,
and
this
will
not
allow
any
ships
they
don't
have.
A
The
org
is
equal
to
Empire
label,
to
connect
to
the
Death
Star
service,
and
so
here's
how
we
do
this
so
here
for
API
version.
We
have
our
C
Liam,
o
and
for
kind.
We
have
C
liam
Network
policy.
Remember
we
saw
those
C,
R
or
DS,
and
the
C
R
is
earlier
we're
actually
going
to
go
in
in
creating
one,
and
this
looks
very
similar
to
kubernetes
network
policy.
A
In
fact,
if
we
look
at
the
network
policy
in
the
Calico
documentation
here,
you
see
that
we
have
this
ingress
and
we
have
a
pod,
selector
and
I
think
you
can
define
egress
as
well,
and
if
you
come
back
to
our
dust,
our
example,
you
can
see
that
we
have
ingress
and
we
have
match
labels
in
endpoint
selector.
So,
instead
of
having
like
that
pod
sector,
we
have
an
endpoint
selector,
which
again
endpoints
are
sort
of
the
chorus
Illium
in-house
helium
works.
Okay.
A
So
we
want
to
create
this
to
apply
this
l3
l4
policy
run
this
command.
So
let
me
so.
This
is
a
pet
peeve
of
mine
and
I
want
to
know
what
folks
think
so.
I
know
that
everybody
who
loves
to
put
this
little
like
a
dollar
sign
here,
to
denote
that
this
is
running
in
your
terminal,
but
my
my
feeling
is
on
this,
so
that
actually
causes
more
problems,
because
then
I
can't
do
the
double
click
and
select
everything.
A
That's
actually
gonna
run
in
my
terminal
and
I
think
the
fact
that
it's
already
in
this
little
box
here
kind
of
lets
me
know
that
it's
like
a
command
anyway.
So
whenever
I
write
dogs,
I,
never
add
the
the
dollar
sign
there,
but
everybody
hates
me
for
it
and
always
opens
up
PRS
to
add
it
anyway.
But
I
from
one
of
the
belief
that
you
know
we
should
not
have
the
dollar
sign
there
so
that
you
can
always
have
a
command.
You
can
double
click
and
copy
to
your
clipboard
and
yellow
I.
A
Don't
care
anybody
has
to
say
about
that.
Ok,
so
we
created
this
new
rule,
1
pneumatics
that
agree.
The
dollar
sign
is
redundant
here.
Yeah
I'm
dead,
Oh
Paul
enjoys
as
well
I
wonder
if
we
can
start
like
a
like
a
movement
like
an
open
source
documentation
like
remove
all
dollar
signs
from
documentation
and
hashtag,
whatever
it
is
called
that
would
be
cool
anyway,
so
we
can.
Now
we
created
this
rule
one
see
Liam
Network
policy.
A
So
let's
do
a
git
sealing
Network
policy
and
you
can
see
we
have
one
and
let's
do
Hamill
on
that,
and
we
can
see
that
it's
actually
gone
through
and
expanded
quite
a
bit.
We
have
a
lot
more
defined
here
than
that
was
in
our
original
definition.
Let's
see
Simon
Says
I
think
you
can
do
it
as
a
table
with
a
dollar
sign
in
a
separate
cell
so
that
the
double-click
only
gets
the
command
P
Thomas
like
a
champ
PR,
is
welcome.
That's
awesome!
A
So
anyway,
let's
look
at
this
in
a
little
bit
more
detail
here
we
have
our
endpoint
selector,
so
any
class
is
equal
to
Death
Star
and
any
org
is
equal
to
Empire,
so
whatever
class
is
set
to
Death
Star
or
it's
equal
to
Empire,
we're
gonna
match
on
those
endpoints
and
for
ingress.
We're
gonna
allow
from
in
points
with
matched
labels,
org
Empire
on
port
80,
and
then
of
course,
we
have
this
fabulous
status
field
here,
which
gives
excuse
me
it
gives
us
a
status
of
what's
going
on
behind
the
scenes.
A
So
let's
go
back
to
our
documentation
and
it
says
now
if
we
run
the
land
a
request
again,
only
the
TIE
fighter,
pods
with
the
label
org
is
equal
to
Empire,
will
succeed
the
x-wing
the
dreaded
Alliance,
but
will
be
blocked.
So
how
we
can
do
this
is,
let's
run
our
TIE
fighter
command
here,
so
exact,
hi
fighter
curl
we're
gonna,
do
a
post
on
this
end
point
and
our
TIE
fighter
was
able
to
land
on
the
Death
Star.
A
And
if
we
come
here,
we
do
a
queue:
Bechdel,
exact
x-wing
on
the
dirty
Alliance
ship,
and
course
the
Alliance
is
blocked
from
landing
on
the
Death
Star
and
let's
hear
the
Millennium
Falcon,
in
which
case
we're
going
to
suck
you
in
and
stick
you
on
the
net
start
regardless
next
step,
five
inspecting
the
policy
okay.
So,
oh
so
we
can
actually
go
through
and
you
can
see
that
the
policy
enforcement
is
enabled
here.
So
let's
go
through
and
actually
do
that
anyway.
A
We're
hanging
here
so
I'm
gonna
see
term
our
way
out
of
there
and
let's
get
more
sealian
pod
again
here
and
see
Liam
in
point
list,
but
I'm
gonna,
clear,
my
buffer
first
and
I'm
gonna
expand
this
so
that
we
can
get
a
nice
read
out
here
and
sure
enough.
We
have
no.
A
We
don't
want
that
one
here
we
go
so
we
have
enforcement
enabled
on
our
Death
Star
in
point
here,
so
that
lets
us
know
that
we're
actually
using
the
Seeley
MCRD
to
to
manage
this
end
point
and
then,
of
course,
there's
all
this
wonderful,
BPF,
goodness
that's
happening
behind
the
scenes
and
is
making
all
of
this
happen
so
see.
Lane
has
got
this
really
nice,
like
this
layer
of
software
on
top
of
BPF
that
integrates
well
with
kubernetes.
That's
making
all
of
this
happen.
So
that's
really
cool.
A
So
let's
go
back
here
and
it
says
you
can
also
inspect
your
details
via
Kubek
dole,
so
you
can
get
here
which
we've
kind
of
already
done.
A
lot
of
this.
We
haven't
run
to
describe
on
it
yeah,
but
we
can
totally
run
it
and
describe
just
for
good
measure.
I'm
gonna
resize,
my
terminal
again
so
I.
Remember
it
to
you.
I
keep
everything
on
the
screen.
No,
we
don't
want
to
be
in
the
pod.
We
want
to
run
on
my
local
laptop
describe
C&P
rule
one.
A
So,
let's
see
here's
where
our
describe
starts
so
name
is
equal
to
rule
one.
It's
in
the
default
namespace,
we
got
our
spec,
we
have
some
mana
data
about
it
and
we
have
a
status
and
no
events,
so
pretty
straightforward,
described
here
and
nothing
too
crazy.
Going
on
our
bean
says
few
utils
here
and
see:
Liam
contributes,
okay,
I
will
go
and
check
out
one.
Second,
let
me
finish
up
this
demo
and
then
we'll
try
to
find
that.
Okay,
our
bean
okay,
so
step
six
apply
and
test
HTTP
aware
l7
policy.
A
It
requires
two
legitimate
operation,
so
this
is
the
FIB
the
bit
that
goes
above
and
beyond
the
kubernetes
network
policy
scope,
and
this
is
where
we
get
into
the
layer
seven
of
the
networking
stack
and
we're
actually
able
to
define
which
types
of
requests
on
which
endpoints
and
use
a
Celia
Network
policy
to
enforce
that
as
well.
So
let's
run
this
command.
For
example,
consider
the
death
star
service
exposes
to
some
maintenance
API,
so
it
should
not
be
called
by
random
Empire
ships
to
see
this.
A
Okay,
so
we
got
to
go
panic
and
the
best
start
exploding,
which
is
funny
because
we
all
know
how
the
death
star
really
explodes,
and
it's
because
this
is
a
great
example,
like
the
part
going
through
this.
The
more
I
love
this,
because
Luke
is
able
to
shoot
the
missile
into
that,
like
tiny
escape
hatch.
That
just
so
happened
to
be
there
I
have
to
say
the
Death
Star.
A
So,
anyway,
we
were
able
to
explode
the
Death
Star
and
here's
a
handy
diagram
about
what's
going
on
and
it
says
see:
Liam
is
capable
of
enforcing
HTTP,
lair,
ie
l7
policies
to
which
limit
the
TIE
fighter.
So
how
we
would
do
that
is,
we
would
apply
this
helium
Network
policy.
And
if
you
look
here,
we
now
have
this
rules
directive,
which
says
HTTP,
method,
post
and
we're
I'm,
assuming
this
is
o
of
whitelist
or
like
an
allow,
only
list
of
our
folks
at
Celia
Moana.
A
Let
me
know
for
sure,
but
this
says
that
we
can
only
post
to
v1
request
landing
so
how
we
apply.
This
is
again
here
and
we
go
back
and
I'll
clear
my
screen
and
now
we
can
apply
this
network
policy
rule
one
configured
come
back
here
and
if
we
do
a
queue,
Bechdel
exec
again,
we
can
verify
that
our
Thai
fighter
is
able
to
land
TIE
fighter
landed
and
Quebec
toll
exact
IP
fighter
dust,
our
exhaust
port.
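The L7 upgrade to rule1 that she's applying can be sketched like this (values assumed from the demo; the rules block is what extends the earlier L3/L4 policy):

```shell
# Tighten rule1: still org=empire on TCP port 80, but now only
# POST /v1/request-landing is allowed at the HTTP layer.
kubectl apply -f - <<EOF
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: rule1
spec:
  endpointSelector:
    matchLabels:
      org: empire
      class: deathstar
  ingress:
  - fromEndpoints:
    - matchLabels:
        org: empire
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
      rules:
        http:
        - method: "POST"
          path: "/v1/request-landing"
EOF
```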
A
So
this
is
how
we
Luke
Skywalker
exploded
the
Death
Star,
and
this
should
now
be
blocked
by
ceiling
network
policy.
So
we
have
sufficiently
secured
the
Death
Star
from
Nasty
Alliance
attacks,
it's
trying
to
keep
missiles
into
our
exhaust
port
and
of
course
we
get
access
to
nine
now,
instead
of
exploding,
our
death
star
from
the
inside
out
sorry
Alliance,
zero
and
this
time
and
you
can
observe
the
l7
policy
via
Q
Bechdel.
A
If
we
run
describes
helium
network
policies,
you
can
actually
see
that
we
have
these
rules
to
find
down
here
again,
which
I
can
run
that
in
my
terminal.
So
let's
go
describe
C
and
P.
You
can
actually
see.
We
have
well
we're
blocking.
So
this
is
a
block
list.
It's
not
it's
not
an
allow
list.
We're
blocking
this
request,
landing
I,
think
which
in
point
did
we
are
we
blocking
or
not?
No,
okay,
so
so
we're
allowing
request
landing,
but
it's
implicitly
blocked
on
the
exhaust
port.
Okay,
so
it
is
in
a
while
list.
A
Okay,
yes,
correct
I,
put
the
link
for
how
about
you
turn
this
demo
for
yourself
and
you
hack
in
denotes
Thank,
You,
Thomas
and
it
says
seven
cleanup
mini
cube,
delete,
okay!
So
anyway,
that's
that's
it
that's
see.
Liam
Network
policies.
Do
we
look
at
the
other
yeah?
We
looked
at
the
endpoints,
er
I
think
that's
all
I
wanted
to
go
over
today,
I'm
gonna
grab
some
tea,
really
quick.
It's
it's
an
hour
and
a
half
into
our
episode,
so
Friday
afternoons.
A
Let
me
go
and
do
do
the
thing
here
where
I
go
back
to
my
my
face.
So
Friday
afternoons
are
a
great
time
for
us
to
do
this
and
we
think
for
everyone
for
joining.
If
you
want
to
say
your
goodbyes
any
my
lips
are
like
so
dry
cuz
I
just
been
talking
for
an
hour
and
a
half
I
say
your
goodbyes
in
the
chat.
A
Any
more
questions
you
have
for
me
for
hep,
to4,
kubernetes
feel
free
to
add
them
in
the
chat,
and
you
know
I'll
try
to
answer
them
here
and
I
always
like
to
spend
the
last
few
minutes
kind
of
rambling
on
a
little
bit
so
that
you
guys
on
your
end,
won't
get
me
saying
goodbye
and
still
have
an
opportunity
to
say
something
in
the
chat.
I
can
still
respond.
So
I'm
always
like
trying
to
find
filler
here
at
the
end
of
the
episode.
A
It's
like
a
big
family
reunion,
we're
gonna,
go
climb
up,
Mount,
Rainier
and
I'm
taking
a
week
off
work
and
I
literally
think
I
have
plans
to
just
sleep
on
a
glacier
for
like
two
days
with
my
wonderful
partner
and
my
best
friend's
in
the
whole
world
and
I
think
we're
actually
going
to
try
to
bring
Charlie
to
Ashley's
up
to
the
top.
So
that's
going
to
be
really
exciting
as
well,
and
this
is
what
I
do
for
my
vacations
anyway.
Folks
say:
let's
see
what
they
said
so
Thomas
says
you're
awesome.
A
Our
Dean
says
thanks
Chris.
This
was
awesome.
Zolt
says:
thanks
for
the
great
demo,
have
a
great
holiday
pneumatic
says
thanks
great
demo,
Simon
looks
like
folks
really
liked
the
demo
more
great
demo
comments,
Paul
good
to
see
you
Paul
great
episode,
be
safe
on
the
mountain
totally
a
lot
of
says,
bye
and
thanks
daniel
says,
happy
climbing
Miguel
says
take
care
automate
says:
I
only
have
one
complaint
about
hefty:
oh
I
think
they
never
got
you
a
drink
when
you
ask
for
it
during
a
live,
show
it's
okay!
A
So, speaking of keeping cool on a glacier, I totally have a Spotify playlist for when I'm on glaciers, and it's got, like, "Ice Ice Baby", and, like, Led Zeppelin, and it's just all songs about ice and snow. It's really fun to listen to, and it's really hilarious. So yeah. So mozu first says: great demo, I joined late today. And Tobias says: thanks, Kris, enjoy climbing. So again, thanks for joining; this was one of my favorite episodes.
A
To
do
I
had
a
lot
of
fun,
learning
about
see,
Liam
and
exploring
Siena
in
kubernetes
and
the
folks
in
the
sea.
Liam
slack
couldn't
have
been
more
helpful
along
the
way.
I
felt
like
I,
got
a
little
bit
of
the
white
glove
treatment.
So
thank
you
so
much
for
that.
So
update
on
next
week's
TGI,
Kay,
so
I
think
Duffy,
who
will
be
our
third
at
TGI
K
presenter
here
at
hefty.
Oh,
is
going
to
try
his
hand
at
doing
tik
from
linux
next
week.
A
I,
don't
think
he's
going
to
be
here
in
the
hefty
dose
studios,
but
make
sure
you
that
you
join
and
show
our
support
for
Duffy
who's.
A
great
he's
one
of
our
field,
engineers
he's
a
great
great
guy.
Really
smart
I've
learned
a
ton
from
him
about
kubernetes
and
I,
asked
him
questions
regularly
and
he,
like,
even
before
I,
jumped
in
to
see
Liam.
He
was
already
telling
me
all
these
great
things
about
see.
Liam
and
what
he
liked
about
it
so
join
him.
A
Next
week,
Joe
and
I
are
both
going
to
be
out
of
town.
He
will
be
on
the
side
of
Mount.
Rainier
I
will
be
on
the
top
of
Mount
Rainier.
Hopefully,
and
it's
gonna
be
a
great
time,
so
join
Duffy
and
I
think
he's
doing
a
TGI
K
on
Harper
I
want
to
say
anyway
thanks
everyone.
We
will
see
you
all
when
I
get
back
from
vacation
in
two
weeks,
all
right,
bye,.