From YouTube: TGI Kubernetes 075: Troubleshooting Container Networking
Description
Join Duffie Cooley as he explores how to troubleshoot a variety of CNI implementations. In this episode we use kind to explore Cilium, Calico, and Canal.

Yes, indeed: troubleshooting live on YouTube. Hello everybody, welcome to TGIK this week. My name is Duffie and I'll be hosting this episode. It turns out that the problem we had was that we were doing some testing with another prospective member of our cast, and that tied up the encoder and didn't allow me to recreate the session. So now we're back. Welcome back to TGIK number 75. In this episode I'm going to be exploring a number of different ways to troubleshoot and play with CNI implementations, because I had some work this week that covered that, and I thought it was pretty interesting stuff. So welcome, welcome!
Let's take a look at the chat and see how we're doing here, just a second, while I fumble with things and figure out what I'm trying to achieve on the fly. All right, okay, so we've got John Smith signing in, and hello to Christopher and darshanime (you were really close to being first). Rory, hello again, I hope you're having a great time over there on the other side of the pond; I'm actually going to be in the UK for three weeks in June, and I'm looking forward to being over there. We should try and meet. Hello to Joe and to Jay, welcome. Hello, Forrest. Hey Peter, good to see you; I'm glad to see you on this episode, I know this is one that was going to interest you. Hey Krista from Germany. Hello, Joy and Brian. All kinds of wonderful folks joining us for this episode. Matty is joining us again, and he reminded us that Kris has actually done a number of episodes on CNI as well, and those are linked in our notes. Let me put a link up to the HackMD, or George, if you could handle that, that'd be awesome, just put up a link to that. Hello to my friend Joe from Atlanta, and some people from Sweden; hello, Federico from Sweden as well. Quite the audience. I'm just talking fast.
Sorry, this is our HackMD for the day, and it's got a number of interesting things in it. Let's go through and see what we've got here. If you want to keep notes, or refer to notes that are being created in here, please feel free to do so. I'm probably covering some of the stuff that Kris has covered, but in a slightly different way: more of a troubleshooting and understanding way.
I think Kris is kind of the dyed-in-the-wool hacker, so when she gets her hands on a CNI she's going to dig down into the nuts and bolts and figure out how it works from the code perspective, which I really admire about her. I'm more the operator: I'm going to approach it from the perspective of what it is actually doing and how it works operationally. From my own personal perspective, I think it takes a village.
This was actually put out by John Geiger, and in it he's talking about another solution for landing a Vault agent on Kubernetes to renew things. Let's see what he does in here. In this case he's using minikube, and he's bringing up a cluster that way.
He's actually using kubeadm to install things as well; I do this with kind, and I'll talk a little bit about what that looks like. He's also pulling down Vault and running the Vault server, I think externally to the cluster, and configuring the authentication method for Kubernetes. This is actually just enabling Vault to authenticate to Kubernetes, and the way this works is by being able to do a service account lookup with the Kubernetes API.
So when you have a service account token within the cluster, you can use that token to sign in to the Vault API, and once you are actually authenticated by Vault, it provides you a Vault token. That Vault token is how you would actually authenticate to Vault to get your secrets and things. Interesting stuff in there; it also talks about the RBAC role.
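As a rough sketch of that flow, using Vault's Kubernetes auth method (the role, policy, and service account names here are made up for illustration, and production setups usually also pass a token_reviewer_jwt):

```bash
# Enable the Kubernetes auth method on the Vault server
vault auth enable kubernetes

# Tell Vault how to reach the cluster so it can validate service
# account tokens via the TokenReview API
vault write auth/kubernetes/config \
    kubernetes_host="https://10.96.0.1:443" \
    kubernetes_ca_cert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt

# Bind a Vault role to a (hypothetical) service account and namespace
vault write auth/kubernetes/role/demo \
    bound_service_account_names=vault-demo \
    bound_service_account_namespaces=default \
    policies=demo-policy ttl=1h

# From inside a pod: trade the service account JWT for a Vault token
JWT=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
vault write auth/kubernetes/login role=demo jwt="$JWT"
```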
We need this auth delegator piece, which is basically how Vault is able to understand whether the service account itself is valid. Then they get into different use cases, different patterns for using Vault within Kubernetes, like the sidecar model. Actually, I've seen a couple of other ones recently that I thought were interesting; I know that Seth Vargo actually just put out an article recently talking about a model for Vault on Kubernetes as well, which I thought was pretty interesting. So yeah, definitely worth a read.
Most of the Kubernetes clusters that I touch on a regular basis are leveraging Prometheus for information, and Prometheus ties pretty well into Grafana for visualization and things. Okay, so he's actually leveraging the kubernetes-mixin stuff that Frederic put together, which is awesome, and this provides some pretty interesting graphs around the API server, the controller manager and scheduler, and the kubelets themselves, which I think will be pretty useful. They also have some graphs for kube-proxy.
Let me zoom in on them now. So yeah, they're leveraging jsonnet to build these out as dashboards for these things; I bet those are actually pretty useful. I think it'd be a pretty interesting TGIK to get into those things, because I suspect what he's showing... I'm not able to, unfortunately, dig in here a little bit. Let's see if that works.
Okay, so this is talking about rule sync latency for kube-proxy: how long it takes for the rules to actually get synced up. It describes the ones that are being held up, and it describes the kube API request rate and how many instances there are. Oh, so this is actually measuring latency for the application of rules for kube-proxy across the cluster.
This is an interesting graph, especially as you grow a larger cluster, because one of the challenges that we have with kube-proxy and iptables at the moment is that as we grow the number of services, or we start introducing a lot of churn within Kubernetes, we actually start seeing the amount of time it takes for iptables to converge on the entire rule set get a little bit longer.
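If you want to watch that convergence yourself, kube-proxy exports a histogram for exactly this; here's a sketch of the kind of query those dashboards build on (the metric was renamed around Kubernetes 1.14, so the exact name depends on your version):

```
# 99th percentile time for kube-proxy to sync the full iptables rule set
histogram_quantile(0.99,
  sum(rate(kubeproxy_sync_proxy_rules_duration_seconds_bucket[5m])) by (le))
```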
Well, I guess it's probably the speed thing, I suspect, so I'm going to bring myself down to kind of a regular speed of speaking rather than, you know, speaking like a squirrel, and we'll see if that changes anything. Please keep giving me feedback and let me know how it goes. All right, so that is the kube-proxy stuff, and then these other graphs get into the API server and the controller manager, and I suspect they probably provide some pretty interesting graphs around what you can get from them as well.
So this is what's happened since 1.11.9. It looks like we made some changes to Prometheus and some of the other add-on stuff we create; some of these are settings that would affect you if you're making use of the cloud integration provider in Azure. We've got some changes to the GCI. The node problem detector configuration is now coupled with the Kubernetes release; that'll be interesting. The node problem detector is a really interesting project, actually, because it enables you to understand the state of your clusters.
Look, it has restored some of the configuration flags of the kubelet, and you can continue to pass username and password if you're doing that. I've not really run across anybody doing that, but apparently somebody was. The update to cluster autoscaler has been changed, and then this is probably the big one: the kubelet won't evict a static pod with priority system-node-critical upon resource pressure.
So that's probably a driving factor for this release. What that means is, if you're leveraging a kubeadm cluster, a static pod might be your API server or your controller manager, and so, if the kubelet itself is under a lot of pressure for IO or memory or CPU, then what ends up happening is there's going to be an eviction. And it looks like from this issue, just from the title of this issue:
What was happening is that the kubelet was actually evicting pods that were in system-node-critical, presumably because they were static pods and that wasn't being taken into account, or something like that. So, without digging into the issue, that's kind of what I would expect from reading it. The next article we have is "deadlines are horrible"; let's take a look at that.
So this was actually, I think, the result of Tim being very transparent and very awesome, as Tim is. He talks about, basically, how he's been trying to catch up with a bunch of stuff, and there's a ton of PRs in this round for 1.15, and he feels as though he might have missed a couple.
So if you were waiting for Tim to review your PR and it just didn't happen, he was letting you know: hey, I'm sorry about that, but please raise it to me. And it's a little frustrating, so yeah. I think this whole community is built on people: people like you who are listening today, locked into this episode, people who are out there hammering on their keyboards fixing Kubernetes bugs left and right. It takes all of us, and we're all people.
The next article is actually talking about WireGuard for Kubernetes. Now this is an interesting one, and I personally have kind of a lot of opinions on this subject, so let's just talk through that real quick and then we'll talk about these two projects. There are two projects that I know of that implement this: there's the WireGuard one that's actually getting some press in the community right now, delivered by Gravitational, called gravitational wormhole, and then there's an older one that somebody I worked with back at CoreOS, named squat, built, called kilo.
What this enables is the ability to form, effectively, an opportunistic encryption mesh between all of the kubelets in your cluster. What I mean by that is: the way this all stands up, you would set up, effectively, a way to ensure that you're encrypting all traffic back and forth between the nodes on the wire.
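For flavor, here's a minimal sketch of what one node's WireGuard setup looks like under the hood. The interface name, IPs, and endpoint are made up; projects like wormhole and kilo generate all of this for you:

```bash
# Generate a keypair for this node
wg genkey | tee privatekey | wg pubkey > publickey

# Bring up a WireGuard interface and add one peer (another node)
ip link add wg0 type wireguard
ip addr add 10.200.0.1/24 dev wg0
wg set wg0 private-key ./privatekey listen-port 51820
wg set wg0 peer <PEER_PUBLIC_KEY> \
    allowed-ips 10.200.0.2/32,192.168.2.0/24 \
    endpoint 198.51.100.7:51820
ip link set wg0 up
```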
A
So,
if
you're
going
to
deploy
a
kubernetes
cluster
into
an
environment
that
you
don't
necessarily
trust
what
you
might
be
concerned
about,
is
how
do
I
make
sure
that
I
can
encrypt
all
traffic
back
and
forth
between
all
of
the
workloads
without
having
to
understand
things
like
M,
TLS
and
and
that
sort
of
stuff
and
having
to
build
in
my
application?
How
to
solve
those
things
right.
I think the concern is that it's seen as a complexity risk sometimes, and so that cools off people's interest in the project itself. But think about the problems that you're trying to solve: if the problem is that you're going into an environment where you have security concerns about the network, and you need to be able to ensure the traffic is encrypted on the wire, this is a great fit.
The other piece: there are a number of attempts out there to fix the kube-proxy problem, and we'll kind of talk through a couple of those; some address this, some actually don't, and Cilium tries to address it in a similar way. We'll play with that kind of stuff.
All right, so that is what I wanted to say about WireGuard. Very cool project, very exciting stuff. I'm loving seeing that encryption on the wire happen as a lower-level concern; I don't think it necessarily belongs as a sidecar, and so I think it's good to see both of these two models being explored. Very cool stuff.
This is a Spinnaker article: an article that's talking about leveraging Spinnaker to manage the continuous deployment aspect of your application, rather than trying to solve this by, say, scripting it in another CI environment. So that's an interesting piece, definitely worth the read. Some of the other interesting patterns that I've seen in the field recently around this are more toward the GitOps flow, and there are, I think, a few different ways to think about GitOps flows in general. But as a quick overview, what I've seen recently that I thought was interesting is that people are starting to adopt a model wherein they have some agent running on the cluster.
That agent is watching for configuration updates, or updates to a particular GitHub repository, maybe on a persona-specific branch, and when that branch updates, it will pull those configurations into the cluster where that agent resides and apply them to the cluster itself. In this way, if you think about it, it's a little bit more toward the Kubernetes model, in which we declare state and let something reconcile it; there's a toy sketch of that pull loop below.
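Just to make the shape of that concrete, here's a toy, illustrative pull-based agent loop. This is not any particular product; the repository URL, branch, and paths are placeholders:

```bash
#!/usr/bin/env sh
# Naive GitOps pull loop: poll a branch, apply whatever changed.
REPO=https://github.com/example/cluster-config.git   # placeholder
BRANCH=prod-us-east                                  # persona-specific branch
WORKDIR=/var/lib/gitops

git clone --branch "$BRANCH" "$REPO" "$WORKDIR" 2>/dev/null || true
cd "$WORKDIR"
while true; do
  git fetch origin "$BRANCH"
  if [ "$(git rev-parse HEAD)" != "$(git rev-parse "origin/$BRANCH")" ]; then
    git reset --hard "origin/$BRANCH"
    kubectl apply -R -f manifests/   # apply the updated configuration
  fi
  sleep 30
done
```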
Those things become much more challenging, because you have to start thinking about, you know, what does a canary rollout mean when you're doing a GitOps flow in which you have an agent in every one of 20 clusters? If you make a change to the upstream GitHub repository, do you want all 20 of them to apply it? Or do you want to break those clusters up into smaller portions, so that you have availability across particular subsets of those clusters?
So if you feel like this is something that you could use, or something that you could contribute to, please jump in and help us out; it's really great. Obviously, you know, if you speak another language, or if you're trying to troubleshoot something like Kubernetes with us and you can't get any content that relates to you in your language...
I can imagine how that would be very frustrating, so I'm sure you can too. All right, those were all of our articles from this week. Oh, we've got a few late comers: we talked about localization, and we talked about Docker Hub maintenance. Yes, how did I forget that: there was a breach recently for Docker Hub, and so there have been a number of different articles.
How interesting. So this will be down, read-only, for less than eight hours; Docker Hub automated builds will be offline for two hours; and Docker Hub downtime will be 15 minutes. That's what their expectation is. That's actually pretty good, right? Docker is doing a decent job of communicating with their customers.
So tomorrow, May 4th, from 9 a.m. to 7:15 p.m., they're expecting 2 hours of downtime for automated builds; they're expecting less than 8 hours of read-only, which means you could still do pulls and stuff, but you won't be able to push; and they're expecting downtime of 15 minutes. So just be aware: Docker isn't broken, there's actually an announcement for it. So that's good. Let's see what else we've got. Then down here below we have links to the other CVEs, and then down below that I have the links to the stuff that I built.
Before I go too far, I want to remind you about this project. We're going to talk about the project and some of its strengths and weaknesses here real quick, and just give a quick overview of what kind is. So kind is a project that enables you, leveraging Docker, to create Kubernetes clusters, and the way that it does this, if you think about it, all right, it's easier to think about it this way:
The artifacts that they're packaging for kind are for darwin, linux, and windows, so you can grab any of those binaries, just pull them down locally, and make use of this. The prerequisites, of course, are that you do have Docker installed, and then, depending on what you actually want to achieve with kind, there might be other prerequisites as well. Once you have the binary... if you wanted to, you can actually also build this thing from scratch.
So if you go to the docs, they also talk about how, instead of actually grabbing the binary, as long as you have Go and Docker installed, you can do go get sigs.k8s.io/kind and then kind create cluster. When you use kind create cluster straight up, what it'll do is actually use a Docker image for all of those nodes that is hosted on Docker Hub, and it'll pull that down locally and just use those images.
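If you want to follow along, the bootstrap looks roughly like this (a sketch: module-mode flags vary by Go version, and kind get kubeconfig-path is how older kind releases exposed the kubeconfig):

```bash
# Install kind (Go 1.11+, with GOPATH/bin on your PATH)
go get sigs.k8s.io/kind

# Create a cluster using the default node image from Docker Hub
kind create cluster

# Point kubectl at the new cluster
export KUBECONFIG="$(kind get kubeconfig-path)"
kubectl get nodes
```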
But what if you wanted to actually use kind to build your own packaging? Like, you wanted to add things to the underlying base image, or you wanted to use your own binaries, your own version of kubeadm, something that would colorize log output or something. Kind also gives you the ability to do exactly that, leveraging kind itself.
You can build specific base images, and you can build specific node images that contain all of the stuff that you want to be present on those nodes, which is actually pretty cool. And in principle you can build a very simple Kubernetes cluster or a very complex one. So let's talk a little bit more about what I mean by that.
These are the configuration options of kind. In this model, for example, I'm creating a single control-plane node, and then I pass extraMounts to that control-plane node to overwrite the CNI manifest that is going to be shipped with that image. Now, I could just package my new CNI manifest into the base node image, but in my case I wanted to overwrite the default one with my final configuration, and then show how that mechanism works, because it's actually pretty neat. We're going to play with this idea.
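Roughly, a kind config along these lines (a sketch of the v1alpha3-era schema; the host path is a placeholder, and exact field names may differ slightly between kind releases):

```yaml
kind: Cluster
apiVersion: kind.sigs.k8s.io/v1alpha3
networking:
  podSubnet: "192.168.0.0/16"    # the pod CIDR handed to kubeadm
nodes:
- role: control-plane
  extraMounts:
  - hostPath: ./my-cni-manifest.yaml              # placeholder path
    containerPath: /kind/manifests/default-cni.yaml
- role: worker
- role: worker
- role: worker
```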
So now that kind is up, if I do kubectl get nodes -o wide I can see that I have 4 nodes: I have my control-plane node, and I have 3 workers, kind-worker, kind-worker2, and kind-worker3, and these each have an IP address that is part of the docker bridge. If I do docker ps I can see these containers, and notice the image that they're based on; this is what I mean by using kind to create a node image.
So each of these containers that is running on my underlying host is actually represented by one of the nodes in my Kubernetes cluster, and if I jump into the control-plane node with docker exec -ti kind-control-plane bash, inside is the configuration of the node. This is actually where we're specifying all of the configuration that makes this particular thing an instance of a kind node. So this is a control plane manifest.
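For reference, the hops in that workflow (container names are kind's defaults):

```bash
kubectl get nodes -o wide                 # the four nodes and their bridge IPs
docker ps                                 # the same nodes as containers
docker exec -it kind-control-plane bash   # hop into the control-plane "node"
```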
Yes, and we're going to talk about how that works and what to look for, and that kind of stuff, as well. Flannel, being a really early CNI, is configured in such a way (and actually most CNIs still do this) that it relies on the idea that the controller manager is going to allocate to the nodes a particular pod CIDR. It's already there if I do kubectl get node -o yaml.
Okay, so here in this case I can see, just from the basic troubleshooting perspective, that each of these nodes has its own subnet. The control plane has 192.168.0.0/24, kind-worker has 1.0, kind-worker2 has 2.0, and kind-worker3 has 3.0. That's actually pretty convenient.
This is happening because of the way that kubeadm brought up the cluster. When I specified that pod CIDR, that was the available IP space for all of the nodes, and I also configured the controller manager to dole out some of those IP subnets to each node. Let's take a look at how that worked. So if I jump into my control-plane node again and we have a look at our control plane configuration, /etc/kubernetes/manifests/kube-controller-manager.yaml:
If you look at the arguments inside of here, I can see there is a cluster-cidr flag, which is pinned to that same global pod segment we've configured, 192.168.0.0/16, but then I also have a couple of other arguments that have been configured: I have node-cidr-mask-size set to 24, and I have allocate-node-cidrs true.
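A quick way to check those flags on a kind control plane (the manifest path is the standard kubeadm location):

```bash
docker exec kind-control-plane \
  grep -E 'cluster-cidr|node-cidr-mask-size|allocate-node-cidrs' \
  /etc/kubernetes/manifests/kube-controller-manager.yaml
```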
We can see that that actually happened as part of the registration of that node. The controller manager is going to look and see if a new node has been allocated a pod CIDR, and if it hasn't, it will allocate one for it and specify in the spec for that node what the pod CIDR will be. Now, that just gets it to where the node has a pod CIDR; it's basically just an annotation on that node, just an allocation of the pod IPs to that node.
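You can see the result on any node object with a jsonpath query against the node spec:

```bash
kubectl get node kind-worker -o jsonpath='{.spec.podCIDR}'
# 192.168.1.0/24
```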
Let's jump back into our control-plane node and understand how that works. Now we're going to start talking about the base CNI specification, which is the Container Networking Interface; that's what CNI stands for. And for that I want to take you over to this website, containernetworking.
I think we've done a kind of generic talk on CNI as well in a previous TGIK, but I think it's definitely worth revisiting every once in a while: the documentation, understanding what's actually happening with it and what it achieves. Because one of the things I just recently realized is that there are actually a number of different networking plugins.
So my point is, there are definitely a number of plugins that are used by different things. Now, let's talk about how this particular node is configured, and then we'll talk about how that works. So let's take a look at the 10-flannel.conflist file. Now, this configuration is how CNI (a lot of the software built into the kubelet) is actually going to configure CNI for your pods.
So when your pod comes up and it interacts with CNI to get an IP address, the very first thing it's going to try is flannel: hey, I need an IP address. And flannel is going to delegate things like hairpin mode and isDefaultGateway, all that stuff, through its configuration for flannel. But then look at this: they're chained, right? There are two plugins in line here: there's a plugin for flannel, and there's a plugin for portmap.
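That chained configuration looks roughly like this; a sketch modeled on the stock flannel conflist, so the exact delegate options may vary:

```json
{
  "name": "cbr0",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
```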
So let's take a look at what that means. If I go into /opt/cni/bin, you can see in here is the portmap plugin. This portmap plugin is actually going to be the thing that creates hostPort configurations for any pod that requires it. So let's really highlight what I'm talking about here.
Let's actually just create something with hostPort, and then we'll talk through the troubleshooting part of that, which I think will highlight what we're talking about, and then we'll start talking about routing and how all that works. So we're still kind of geeking out about CNI. Let's go ahead and grab this stuff.
So in this configuration, first let me highlight this: this is actually kind of an interesting configuration for a pod, and I think it's important to understand what happens. In this manifest, what's actually happening is I'm creating just a normal pod, an echo server, it's listening on port 80, and I've configured a hostPort for it on port 80. But the interesting thing about this manifest is actually down here, where I set a nodeName of kind-worker.
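A sketch of that manifest; the image is a placeholder stand-in for whatever echo server he's running:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: echo
spec:
  nodeName: kind-worker         # pre-assigns the pod, bypassing the scheduler
  containers:
  - name: echo
    image: hashicorp/http-echo  # placeholder echo server image
    args: ["-listen=:80", "-text=hello"]
    ports:
    - containerPort: 80
      hostPort: 80              # ask CNI (portmap) to expose this on the node
```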
The reason this is interesting is because, if you think about the way Kubernetes works, I could actually turn off the controller manager and the scheduler and still be able to deploy this pod, because in essence I've already satisfied everything that the controller manager would perhaps do, and I've already satisfied everything that the scheduler would have to do. When those two things are watching for new deployments, for example: there's no deployment here, this is a pod, and the scheduler is looking for pods that aren't scheduled to nodes.
The point is that it's not flannel that's configuring this; this is being configured by the portmap plugin inside of CNI. And so, if you're troubleshooting stuff and you realize, you know what, the port maps are not working the way I expect, this is a pretty interesting way to troubleshoot and understand what's happening there. You might want to start by jumping into the CNI directory on a node and trying to understand how things are actually configured. So in our case, we're looking at this configuration.
We know that we're using flannel, which is great, and we also know that we're using portmap to handle hostPort. But what if there was no portmap configuration here? Would flannel still be able to handle hostPort? We're going to talk a little bit more about that when we get to the Cilium conversation. All right, so the next thing I want to show you is a way of thinking about the way networking works in Kubernetes in general.
So if I go back here and look at the output of that command, there you go: if I look at these, I know that I have a CIDR that is going to be completely owned by each of these nodes, and if I jump into my node again and I do ip route, I can see that each of those CIDRs is being routed to some destination.
There's a write-up on this that someone put together; I'm actually going to copy this into my notes here so I don't forget to do it later. In this document he really gets down into the detail of how flannel works and why it works the way it does. All right. And so, as we can see, we're already routing this traffic this way, and then I'll show...
Suffice to say, this traffic is being routed back and forth between all the nodes. In flannel we're using VXLAN as the encapsulation interface, and so there is a configuration inside of the bridge that enables us to understand where the target for the destination is, right? So here's the MAC address for this destination, and then what it doesn't...
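Two iproute2 commands are handy for poking at that VXLAN state on a node (flannel.1 is flannel's default interface name):

```bash
# Show the VXLAN details of flannel's interface (VNI, UDP port, local IP)
ip -d link show flannel.1

# Show the forwarding database: which peer node owns which MAC address
bridge fdb show dev flannel.1
```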
What it doesn't allow me to map right now is the ability for you to understand where the destination actually is. Flannel is a little tricky to understand, because it has to actually route that stuff. But one thing that is interesting about flannel that I do want to point out is how we can actually troubleshoot the networking piece. So, like I said, it's an overlay network, and we're using VXLAN for the encapsulation.
And just that command right there is able to show me that I have IP connectivity to each host. And that's true because, if I do ip addr and I look at the flannel interface, I can see more than the flannel interface: I can see the cni interface has the .1 address of that node's 192.168 subnet, and this is the same for every one of the flannel instances, right?
So every one of my nodes is going to have a cni interface, and each one of them is going to have the first IP. And so, if I can ping between those things, I know that this is going to work; I know that I have good connectivity between all the nodes. Now, moving on up the stack: how would I troubleshoot access, you know, from a pod across to another pod?
So let's take a look at what netshoot is. Netshoot is a tool that was put up by a gentleman named nicolaka on GitHub, and what it does is allow me to create an image that actually has a bunch of tools already deployed into it. So let's take a look at what that image is; you know, you shouldn't deploy stuff you don't understand, because you never know what's going to be in there. So, let's take a look at netshoot.
Netshoot is basically a container that has a bunch of the necessary tooling to be able to help you understand and troubleshoot networking capability within the cluster, which I thought was actually pretty interesting, and there's a bunch of really interesting tooling that's already built in. We have net-tools; we have bird, if you want to take a look at BGP configurations; we have tcpdump, curl, DHCP tooling.
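The quickest way to use it interactively is the one-liner from the netshoot README (kubectl run semantics vary a bit by version):

```bash
# Throwaway troubleshooting pod in the cluster
kubectl run tmp-shell --rm -i --tty --image nicolaka/netshoot -- /bin/bash

# Inside, the usual suspects are available:
ip route
tcpdump -i eth0 -n
```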
So with netshoot, the way the deployment works, before I go too much farther again: here I can see that the deployment is actually configured to use hostNetwork, and that's because I wanted to actually expose the underlying host network to this tooling, so I could troubleshoot with that tooling. So if I do ip route, I can see the same routing that I saw before; I can use ping; I can use tcpdump.
Okay, so what happened there was that when I used the command from the run, it didn't parse the command correctly and put the wrong thing up, which was pretty reasonable. So let's jump into this guy: kubectl exec into the overlay pod with bash, and then ip addr. In here I can see that I have 192.168.2.4/24, and if I do ip route from inside of the overlay pod, I can see that my gateway is 192.168.2.1.
This is the pod reaching the gateway via its own network, and I'm able to reach it and everything. And one thing that's interesting to describe here is that if I look at this IP address for this overlay pod, I can see that it's got a network size of /24. Now, if you get into understanding IP and TCP a little bit, what that means is that there's a broadcast domain that's been defined here, right?
192.168.2.1, our router, and all this stuff is able to actually communicate with each other because they're all part of the same broadcast domain, which means that when those packets are moving back and forth between those two pods, they're never traversing a layer 3 boundary, and so I don't really have a lot of control over what's happening there. Now, 10.244.0.0 is an artifact of the fact that flannel assumes that network and I didn't configure it otherwise.
A
So,
but
my
point
is
that
even
if
we
didn't
have
a
gateway,
we
could
actually
route
this
traffic
back
and
forth
between
these
pods
and,
if
you
think
about
it,
that
means
it's
because
they're
all
two
adjacent
they're
all
in
the
same
network
and
then
their
next
example,
which
will
be
the
cilium
piece
we're
going
to
talk
about
how
this
is
very
different
than
the
way
that
calico
or
canal
or
flannel
work
all
right.
So
that's
what
I
wanted
to
show
you
do
that
stuff.
I'm doing this part, ln -sf from the current working directory to a fixed path for the CNI manifest, because in my config I have to actually specify where the file is located, and it's easier for me to type a fixed path under /tmp than it is to figure out what the right path is wherever you've deployed.
The docs for Canal are here, so this is the documentation for Calico and the documentation for Canal. But one thing I wanted to point out is that in any case, if you're going to start troubleshooting networking, whatever the CNI is that you're using, you can usually tell what it is by looking at kubectl get pods for the kube-system namespace; you'll see some pods running.
Now, an interesting thing that is happening here: you can see that in Canal we have a ready state for each Canal pod of two containers, right? So we have 0 of 2, and let's dig in a little bit more about what's happening there and what that means, watching our control plane kind of come up and stabilize. Here we have a couple of canal pods up, so that's one.
Okay, let's dig into what that means in practicality. So kubectl get nodes: we have the same kind of configuration that we had before, and the host names are all the same. So I'm going to move into netshoot and kubectl apply -f the netshoot deployment, then go into our hostPort configuration and kubectl apply -f that as well. Then kubectl get pods --all-namespaces; well, get pods is really all we care about. So we're bringing up that same configuration that we had in the flannel cluster.
Echo servers are mostly up, netshoot is coming up... all right, there we go, so everything's up and working. Let's jump into one of the workers and see what that configuration that we talked about earlier was. So I'm going to start my troubleshooting by digging into what the configuration of CNI is, specifically. Let's do docker exec -ti kind-worker bash, and then cd /etc/cni/net.d.
So now we have two things inside of this configuration. We have 10-canal.conflist, and in this case we have three things that are chained, which is really interesting, right? Well, actually we have two things chained in beyond the main plugin: we have policy being chained in, we have the plugin for calico, and then we also have portmap. So, our calico configuration by default: this is actually how calico is going to get its configuration from the CNI. We're saying log level is info.
We're going to use kdd, the Kubernetes datastore driver, as the datastore for calico; we're going to talk about what that means here in a second. The node name for this particular one is kind-worker. The IPAM configuration is host-local, and we're going to use usePodCidr, which in this case means: use that pod IP allocation...
...that the node got as part of its configuration. So this is calico doing all this stuff. Now, what's interesting about this is that not all of it applies, but we'll dig into what I mean by that a little bit more. Then we have a policy type of k8s, and then we have this kubernetes kubeconfig that is actually allowing calico to authenticate with Kubernetes to make use of that datastore that we talked about. And then down below, we have the portmap plugin.
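Sketching the shape of that 10-canal.conflist (abbreviated; the values follow what's described above, and the IPAM stanza spelling varies between canal versions):

```json
{
  "name": "k8s-pod-network",
  "plugins": [
    {
      "type": "calico",
      "log_level": "info",
      "datastore_type": "kubernetes",
      "nodename": "kind-worker",
      "ipam": { "type": "host-local", "subnet": "usePodCidr" },
      "policy": { "type": "k8s" },
      "kubernetes": { "kubeconfig": "/etc/cni/net.d/calico-kubeconfig" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
```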
What does all that mean? If I do ip route, I can see the same thing that I saw before, right? I have a flannel.1 interface, and I'm routing that traffic very similarly to the way I did before, only this time I'm using 10.244.0.0 instead of 192.168. But on the actual interfaces that are associated with the pods, I'm getting a different thing: I'm seeing these cali devices, and I'm doing device routing for those interface IPs up to the cali devices.
So if I do docker ps, I can see that I have this netshoot, and then I have that overlay pod that I created. I want to jump into that overlay pod real quick and take a look around and see how that's different from the flannel one. So I need kubectl exec -ti into the overlay pod with bash, then ip route. Now, this is interesting: this looks very different than it did in flannel, and if I do ip addr, again, very different.
Those of you who have dealt with the Linux routing mechanism understand there's something really funky happening here, and it's really worth talking about. What's happened in this case is that we've given this particular pod an IP address with a /32 subnet, and that means that there is no broadcast domain. I'm not adjacent to anything; there's nothing else, effectively, within my network, and so for me to get to anything else on the network...
...I would have to route out to something, and the question is: how does that happen with calico? The way that happens with calico is via the IP route mechanism. With ip route I can see that I do have a default route, a default next hop, which is 169.254.1.1, and it sends traffic out the eth0 interface. But what happens now? Is it actually all working? Can I ping other pods on the same node?
What that means is that from the pod, even though there's no network associated, having proxy ARP enabled on our side of the veths means that when traffic tries to egress that pod, it's going to land on our underlying host, and the other end of the veth, the one located in the host namespace, is going to answer ARP for everything. So if you want to know where the gateway is, it'll tell you where it is.
If you want to know the MAC address of the next guy over, or the route to the next hop, it's going to figure that out from that lower-lying configuration. And so, even though I can't use broadcast (there's no broadcast domain here), I can still ping all of the things within the same scope. But what's interesting is that, since there's no broadcast domain, I have to route everything. That means Calico.
So Canal, I think, is actually really interesting, because if you think about canal versus calico, the way they differentiate is that when you're using straight calico, you're going to be making use of BGP, even if it's just a full-mesh mechanism back and forth between all of the components within your cluster, whereas with canal they're using flannel for that routing mechanism, which means no BGP.
But you are still getting kind of the enhanced experience of being able to use a /32 for each pod IP on each host. Interesting stuff. So troubleshooting is basically the same as it was before: I jump into my netshoot container, kubectl exec -ti into netshoot...
So, because this IP address is exposed on my host, and I can reach back and forth between all those things, this is my way of actually validating that the encapsulation is working, right? If I can ping the IP addresses that are available on the other nodes, then I know that encapsulation is working correctly. That's awesome. If it were not working correctly, then this would fail. And I can ping the IP addresses on the other nodes: 10.244.1.1, 2.1, and 3.1.
And so I can prove that I have pod-to-pod communication, I can prove that I have pod-to-host communication, and I can prove that I have host-to-host communication. All of those things actually allow me to troubleshoot my network. That's kind of the important part of troubleshooting your networking in an overlay scenario: you have to think about it in all three layers. Do you have pod-to-pod working? Great. Do you have pod-to-host and host-to-pod working? Great.
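As a checklist, it's roughly this (all of the addresses are placeholders from the demo cluster):

```bash
# 1. host -> host: node-to-node connectivity (the underlay)
ping -c1 172.17.0.3              # another node's docker-bridge IP

# 2. host -> pod: encapsulation / routing into the pod network
ping -c1 10.244.2.4              # a pod IP that lives on another node

# 3. pod -> pod: from inside a pod, across nodes
kubectl exec tmp-shell -- ping -c1 10.244.2.4
```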
I'm looking at the wrong chat... all right, cool. Yeah, so that is Calico and CNI and Canal. So let's do one on Cilium and kind of look at that real quick, because I think Cilium is actually probably the most interesting of the bunch, and it also highlights kind of what we were talking about before. So let's get into that part of it. So I'm going to do kind delete cluster.
This actually started for me when a customer had created this hostPort deployment and the host ports weren't working, and he wanted to understand why and how to fix it. In this case he was actually deploying Cilium as his CNI. And to be fair, Cilium has since fixed this problem: they've actually managed to merge the ability to provide you a mechanism to configure CNI chaining for Cilium, and I have that documented.
So there's a pull request that fixes this, and you can see it's merged. But it did actually make for a really interesting break with Cilium that I wanted to show and explain through real quick. So even though this has been fixed on the Cilium side, I just wanted to talk through it. So, clear; kubectl get pods --all-namespaces.
All right, hello from Tunisia! Christian waves. We have 10.244.0.0/16 as the pod CIDR, as in the flannel case, and in every case so far, for all of my clusters, I've been using 10.96 as the service CIDR.
If you think about it, there's an analogue there for the way that your developers might try to use Kubernetes, right? And so, Pat McGrath asked an interesting question. He says: any idea why the CNI bin includes a flannel binary? It is historic, in fact, because effectively, since CoreOS created CNI, flannel was actually the default CNI for a long time. It's still there by default, but it is definitely historic; the one that's there, I think, is actually probably not the most recent.
We'll wait for things to come up here; almost there, it looks like. So yeah, in this case, a little bit of history of what I was doing this week: I was working again, looking at this Cilium deployment configuration, and the customer is like, "I can't get hostPort to work," and I said, well, here's what I'm going to do: I'm going to use kind to stand up a cluster with Cilium configured, and we'll walk through it.
Why are you doing that? Hosting etcd inside of Kubernetes has just brought me a lot of pain over the years, and so I was really concerned that that was actually a thing that they were doing. It turns out, though, that the folks at Cilium are really pretty sharp folks. What they end up doing there is actually using it effectively as a cache: they're holding the cache information for this stateful data, so if they lose that cluster, it doesn't cost them anything to recreate it, right?
So if I delete that etcd cluster, the etcd cluster will be recreated by the operator, and the state will be replaced inside of that etcd cluster, and they won't actually have lost any capability during that time, or during that outage, which is actually really cool. And that calmed me right down, because at first I was like, "mm-hmm," and then after that I was like, okay, all right, come on board now. Finally, it makes sense.
So while we're waiting for that stuff to start up, I want to talk about what I mean by "this is kind of interestingly broken." If I go into the Cilium setup and look at the configuration of this one, I can see I'm actually getting quite a bit more creative with the mounts here. So let's talk about what happens here. In the control-plane node I'm actually populating the default CNI configuration with the Cilium configuration, so that it will be applied instead of the default CNI.
Now, the workers are each configured differently. In worker 1 I'm actually populating the file, and because the CNI was pre-installed, all of the portmap configurations and everything are already present. In worker 2 I'm actually overriding the /opt/cni/bin directory with an empty dir, because I want to make it so that there is no portmap binary installed. So even though there may be a configuration applied, there is no portmap installed. We'll talk about how that works, and what happens...
...what the failure case is, and how to troubleshoot it. And then in worker 3 I'm just not populating the 00-cilium portmap chaining configuration; I've not actually copied that in. And so in worker 3 we're expecting the pods will still start, because they don't know that the portmap part isn't working. So once we get everything started up here, we'll talk about how that actually manifests. So, if I do kubectl get pods: netshoot is working.
What's happening there? Okay, so what we saw in the log was actually pretty telling, and that's actually what I wanted to talk about. So let's talk about that. In my case, what happened here is that the pod that was created landed on kind-worker, and you may remember from the config that that worker didn't have the portmap binary on disk, but it did have a configuration that said to farm out the management of hostPort to that binary.
We can see in this manifest that hostPort is configured, set as hostPort 80, so it's actually going to try to configure a host port for this pod, even though there's no particularly good reason to do so; it's still going to configure it that way. And that means that when we go through the CNI logic, the CNI logic is going to come down, it's going to hit the Cilium plugin, and Cilium is going to say, "I'm not handling hostPort," and it's going to chain down to the portmap plugin.
We have all of our pods started, everything's working, and now we're going to dig into the Cilium piece. But here's the piece that's interesting; remember, the echo servers are here. Let's curl each of these and see what we see. First, let's do kubectl get pods -o wide: the pod on kind-worker has .2, kind-worker2 has .5.
Worker 3 has .4. Now, I'm expecting to be able to curl 172.17.0.2 and 172.17.0.5, but I am NOT expecting to be able to curl 172.17.0.4, and here's why that is. If we hop into one of the workers that has everything configured, and we move into /etc/cni/net.d, we can see that there are two files in here, right? And they're lexically ordered; that zero-zero-zero prefix is kind of an important thing.
If I do my curl tests, curl http://172.17.0.2 and 0.5, everything works as we expect, but the curl to 172.17.0.4 doesn't work. So why is that? Connection refused. Let's docker exec into that guy and look around and figure out why that is: docker exec -ti kind-worker3 bash, then ls /etc/cni/net.d. There's no zero-zero-zero conflist here, which means it's not doing anything; there's no way for it to get to portmap.
There's no failure mode, except for the fact that it doesn't work. So the pod started, even though portmap wasn't working and the hostPort could not be satisfied; the pod still started. And that leads to behavior that is hard to troubleshoot or understand for some people, right? So let's talk about what that means.
There's no host port configured here, and there wouldn't be, because it's not actually been created on that host. So the last thing I want to cover before we call it a day is: how can I fix this? If I do docker cp of the CNI conflist and copy that into kind-worker3's /etc/cni/net.d...
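The fix, roughly (the pod and file names are placeholders carried over from the sketches above):

```bash
# Copy a working chained config from a healthy worker to the broken one
docker cp cni-config/00-cilium.conflist \
  kind-worker3:/etc/cni/net.d/00-cilium.conflist

# Recreate the pod so CNI runs again with the new chain, then retest
kubectl delete pod echo-worker3 && kubectl apply -f echo.yaml
curl http://172.17.0.4
```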
That means that if I change the configuration of CNI, the next time CNI tries to exercise that code, it will reload that configuration from disk before going through its work, which is mind-blowing. The vast majority of the time, a process that is running has its configuration loaded into memory, and has no way, or reason, to go back and look at disk for its configuration.
So the other thing, yeah: I wanted to share that with you because I thought that was absolutely fascinating. Anyway, we've been at this for two hours. I hope that was helpful; I know that was a lot of information. I think it's helpful to kind of understand how all this stuff wires up. One more thing before I take off.
If I jump into my overlay pod with bash: all right, looks okay, default route via 192.168.3.1 down eth0. And if I do ip addr... well, hold on, what? So the IP address is 192.168.3.140, but it has a subnet mask of 255.255.255.255, which means that there's no broadcast. So I refresh ARP again, and I ping 192.168...
When I type ping 192.168.3.1, we get a reply, and the fact that I even get an ARP packet back is happening because in BPF there's a program that says: here's the configuration, here's what I'm expecting, here's what I want you to do. Whenever you see a packet egressing that port with that MAC address, with that information, I want you to do this stuff: I want you to reply back, if they ARP for 192.168.3.1, with this MAC address.
That's the MAC address I want you to give them. If they try to ping something outside, I want you to basically take that packet and send it out the default gateway, and route it out externally over the host, just like we do with all of the other networks. And if I ping 192.168-something that is known to me, something that is known within the cluster...
Right: I'm still able to prove that encapsulation works back and forth between other things, and I can still validate pod-to-pod communication. But what's interesting is that all of this packet forwarding and manipulation is happening in kernel space; I'm never actually traversing back down to user space.
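If you want to poke at that from the Cilium side, the agent's CLI can show you what the BPF datapath knows. cilium status and cilium endpoint list are real agent commands; the pod name here is a placeholder:

```bash
# From inside one of the cilium agent pods in kube-system
kubectl -n kube-system exec -it <cilium-pod> -- cilium status
kubectl -n kube-system exec -it <cilium-pod> -- cilium endpoint list
```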
If you read about Cilium, I think you'd find it really fascinating; I found it pretty riveting, the way they do their thing. Anyway, that is our episode for today. Thank you all for logging on and sticking with me through that crazy amount of information. I hope that it was helpful, and you know, hit me up on the Kubernetes Slack or on Twitter. Also, I'm starting a blog.