From YouTube: TGI Kubernetes 123: Grokking Kubernetes: DNS Part 2
Description
Come hang out with Duffie Cooley as he does a bit of hands on hacking of Kubernetes and related topics. Some of this will be Duffie talking about the things he knows. Some of this will be Duffie exploring something new with the audience. Come join the fun, ask questions, comment, and participate in the live chat!
So good to see you all. We've got Martin Bartman saying hello from the Netherlands, and Jeromy Pruitt, who's somebody I've learned some tech stuff from, saying hello. We've got VJ saying hello, Andy checking in, good to see you, Maddy, and Morteza from Tehran. Always, always awesome to see what a global audience we have. You know, I feel like I say that every episode, but it never really gets old for me.

Hello from Portugal, and from Richmond, Virginia. All right, well, let's get into our notes. As always, our notes are up and public at the usual TGIK notes link, so if you want to put a link in there or anything else like that, go ahead and put it in there. Mr. Keith Lee says hello from Ireland; good to see you, Keith, you're definitely in the top five of my favorite Irishmen.
We've got some scheduler changes, and some configuration options removed from the kube-scheduler configuration. The certificate signing request API has been promoted to v1, which is pretty interesting. That certificate API relates to the in-cluster CA that Kubernetes clusters come with at this point, and that internal CA is typically used to handle issuing certificates for things that are scoped within the cluster itself, things like the kubelet's client certificate, but it can be used for any number of other things.
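Since the CSR API is GA now, a v1 object looks roughly like the sketch below. This is a hedged illustration rather than anything shown in the episode: the object name is made up, the request field is a placeholder for a base64-encoded PKCS#10 CSR, and the kubelet client signer is just one of the built-in signers.

```yaml
# Sketch of a certificates.k8s.io/v1 CertificateSigningRequest.
# v1 requires an explicit signerName; the kubelet client signer is shown here.
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: example-kubelet-client-csr        # illustrative name
spec:
  request: <base64-encoded PKCS#10 CSR>   # placeholder, not a real request
  signerName: kubernetes.io/kube-apiserver-client-kubelet
  usages:
    - client auth
```

Once submitted, a request like this would typically be approved with `kubectl certificate approve example-kubelet-client-csr`.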
From the feature perspective, we got a new extension point, PostFilter, in the scheduler framework; lots of work happening on the scheduler lately, which is really exciting. We also have the privileged flag for kubectl run. This one I remember creating some stir on Twitter, and what's interesting is that it basically allows us to define a kubectl run command, pass the privileged flag, and it will bundle in what's necessary to give that pod all of the privileges and capabilities it could possibly have; it's the most privileged set. It makes it so that the pod itself is running as a highly privileged process on the Linux machine that is represented by your kubelet, so it can do things like have direct access to all the devices made available on that node.

It could do things like make a reboot call; you can do all kinds of interesting things when you're in privileged mode. And it's put there, mind you, for convenience. If you're interested in that issue or what that's all about, definitely go check that one out.
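To make that concrete, here's a hedged sketch of what the flag effectively bundles into the generated pod: a container securityContext with privileged set to true. The pod name and image are illustrative, not something from the episode.

```yaml
# Roughly what a privileged kubectl run produces: a pod whose container
# runs with the full privileged capability set on the node.
apiVersion: v1
kind: Pod
metadata:
  name: privileged-shell       # illustrative
spec:
  restartPolicy: Never
  containers:
    - name: shell
      image: alpine            # illustrative image
      command: ["sleep", "3600"]
      securityContext:
        privileged: true       # device access and host-level capabilities
```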
We've got some bugs and regressions that have cropped up and been fixed: the etcd version in the migration script and the etcd image, fixing huge page sizes in describe output, some minor fixes.

This is an exciting one; I've been waiting for this one to happen for a little bit. This adds retries for kubeadm joins. If a kubeadm join doesn't succeed the first time, usually it would just fail and you would manually have to rejoin. This one actually puts in a retry loop, which is pretty exciting.

We are still doing some learning around the newer managed fields capability in Kubernetes 1.18, so we're still seeing some things crop up inside of the managed fields arena. So yeah, keep your eye on those release notes as they come if you're interested in digging into what's happening with 1.19. There was also a CVE, and this one is actually a really interesting one. My friend Rory McCune also explored this one, and he's actually going to be with us.

This month seems to be the month for amazing people like that. Every month is a month for amazing people, but it's been kind of surprising to me how many people I have such admiration or respect for have a birthday in July; it snuck up on me a little bit. We've got Bryan Liles, and there are quite a few others that are all July folks, which was kind of surprising to me. It's always good to see those folks out there celebrating their birthdays and getting noticed and appreciated for the work that they do, even if it's just once a year; it probably should be more often than that. But you know, it's pretty awesome. So, back to the CVE.
The CVE is a really interesting one, because it effectively breaks the way we think about the security boundary of localhost. Typically, when we think about accessing localhost, 127.0.0.1, we think that if we're accessing that IP address, that traffic by law cannot be going to another node. I mean, if you're addressing localhost, shouldn't that be right there on the same node with you, associated with your loopback device?

It turns out you can do a thing where you effectively NAT localhost IP addresses and send that traffic to one of the adjacent nodes, allowing you to access services on another node by accessing the localhost IP. That's a lot to take in, but it is a crazy cool hack, in my humble opinion. It has been fixed, or has been mitigated, I should say. And understand that this particular risk is even more interesting because it's not just the system daemons that Kubernetes places: any system daemon that listens on all interfaces on any node would suddenly be accessible. Actually, any system daemon that listens on localhost on some other node would be accessible from the attacking node configured this way, which is really pretty interesting. So definitely check this out if you're interested in really fun, trippy network hacks and that sort of thing. This is a fascinating discovery and a fascinating hack, so kind of a fun one.
That one was really fun to play with and look into. From the kubeadm maintainer department, Emmanuel Evans talks about how to set up a minimal viable Kubernetes. I think of this as Kubernetes the Hard Way, take two, but I really like the test-driven-development sort of way that it explores this: they take a big-picture view of how it works, and then they break down the different components.

You can see stuff like this, where they take the kubelet command, run it with -h, and see 284 lines of output, just from the help output of the kubelet. That's a lot of data, and you can imagine that parsing that into something that is a reasonable configuration of the kubelet is not trivial. So it's a great reminder of where we've been and how we got to where we are.
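If you want to reproduce that little exercise, it's just the help output piped through a line count; 284 was the number the article saw, and it will vary by kubelet version.

```sh
# Count the kubelet's help output; the article counted 284 lines on its version.
kubelet --help | wc -l
```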
We've got Steve saying hello to everybody; good to see you all. I should actually shout out the Atlanta meetup. Somebody want to give me a link to that Atlanta meetup again? I think there's another great set of talks coming up, and the Atlanta Kubernetes meetup has gone to kind of a virtual state, where they're actually accepting talks from people all over the place and presenting them virtually.

That's a tremendous way of handling this coronavirus sort of situation, and if you're interested in checking out the Atlanta Kubernetes meetup, you should definitely do so. I'm sure they aren't the only ones doing it, but it is really great to see that happening.

I know of a number of people who are going to be talking there real soon, and it seems like a really exciting way of staying in touch with the Kubernetes community, which is greater than Atlanta, greater than California, greater than everybody; it's all of you, and it's everywhere. It's amazing. So, Porter is a new one on me. I haven't actually explored this one; this is from KubeSphere.
More interesting networking stuff, and this is actually neat. ECMP is really a cool way to handle the Kubernetes load balancer implementation, and the reason I see that as so cool is because of the way ECMP works. ECMP stands for equal-cost multi-path: if you're on a network and you're routing back and forth between different components, this is about distributing new connections across the existing routes. ECMP gives us that capability, and it also gives us the ability, leveraging a technology like BGP, to grow or shrink the set of healthy endpoints that can attract a particular type of traffic, or be the next hop for that traffic. So if you're interested in that kind of technology, in networking and how all of these pieces work, this is probably a pretty decent write-up to get started with.
I definitely also recommend looking into MetalLB by Dave Anderson. He has done an amazing job of describing the thought behind the design, what he did there, and what has happened inside of that space. Both of these are really great projects, and actually I think there's a third that I'm familiar with, put out by a good friend, Dan Finneran, called kube-vip, which is yet another implementation of this.
It's kind of interesting why people build these things. If you think about it, it says it right there in the tagline: Porter is a bare-metal load balancer for Kubernetes. The reason that's important is that for a Service of type LoadBalancer to work within Kubernetes, you usually have to have some external entity that you're interacting with to configure and make that load balancer available. In AWS, if you have the AWS cloud provider integration in place, when you create a LoadBalancer Service you're going to get an ELB or an NLB, depending on how you configure it. If you're in Azure you're going to get a load balancer, same in GCP; any of these cloud providers actually support it, and the same thing with vSphere. But bare metal typically does not. So where bare metal fits in, the question is: how can we provide that same capability?
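The object all of these projects (Porter, MetalLB, kube-vip) exist to satisfy is just an ordinary Service of type LoadBalancer; on bare metal, without one of them, it sits in a pending state forever. A minimal sketch, with illustrative names:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web                 # illustrative
spec:
  type: LoadBalancer        # needs a cloud provider or a bare-metal LB to satisfy it
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```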
It's like following the principle of least privilege; it's very, very important stuff. Oh, and this was in collaboration with the Flux team, very cool. Yeah, that's right, I did do a blog post about this on mauilion, although that's a little out of date; I need to kind of reassess it. I haven't dug into what the difference between Porter and MetalLB is, but they do look like they try to scratch the same itch. I know that MetalLB is a little more flexible than that; it's not just about BGP and ECMP with MetalLB.

You can also implement it as layer 2, which is actually very exciting: deciding which interface will ARP for the shared IP address, rather than attracting that traffic down to any number of nodes using a routing mechanism. So, very different implementations, but definitely check those out. In my blog post I used the L2 mechanism, because I was doing it inside of kind, so kind of a fun one.
Thanks to you, I didn't realize that this was in collaboration with the Flux team, so there you go: Michael Bridgen and Stefan Prodan from Weaveworks. All three of them went down a journey to figure out how to lock Flux down, so exciting stuff. It's good to see those sorts of improvements; community-driven improvements are almost always exciting to me. "You love to see it," as they've been saying lately. You love to see it.

This is another one by Steve, and I think this is actually one I've seen before. In this case, it's about how Dex brought a cluster to its knees, and if you're interested in this one, definitely check it out. And if you have articles that you've written, or anything else that you've put out there in the open source community, share them.
It highlights the issue in a way that is actually specific to some of the other tooling that happens in this space. In this case, Flux reconciles the state of the whole cluster, all of the resources within the cluster, and because of that it's basically making a bunch of calls against etcd to handle all of the tokens that have ever been generated by Dex. Because these are stored as resources inside of Kubernetes, you can access them via the Kubernetes API, and so that basically increased the amount of load on the backend storage, on etcd, and it did nothing to decrease the real problem that was causing it to fall over. So, very exciting stuff if you're using Dex. This is definitely one of the challenges of Dex, and I remember this one.
I'm not sure if this has actually been addressed. If you have an issue that you're tracking with Dex, throw up a link; it'd be really great to see this associated with a particular issue inside of the Dex IDP repository. My only feedback on this one is that I like using etcd as a Dex backend, but I'm not sure Helm charts are the right way to do it.

I mean, I know that none of that is persistent data, but at the same time, I'm worried that a Helm chart doesn't really have any reconciliation to ensure that that etcd cluster stays healthy over time, and so I'm a little worried that maybe that will cause you trouble. You can't just add another etcd node: there are commands you have to run to join that new etcd node to the cluster, and commands you have to run to remove an old etcd node from the cluster.
Cool. All right, so I have a cluster up. That's good, and now we're going to play with modifying the configuration of CoreDNS directly. We're going to introduce a bunch of different ideas; I want to talk about them real quick, and then we're going to get back into the technical stuff.
A resolver is typically going to look at files first, your /etc/hosts file, and try to resolve things there, and then it will interact with libresolv and figure out whether it can resolve it, after files, with a resolver; for that it takes the configuration from /etc/resolv.conf. That's the way it goes about using libresolv to figure out what IP address is associated with a particular name.

Okay, so you can't, for example, have a resolver that is a local DNS server that resolves specific hosts, put that resolver into your /etc/resolv.conf, and expect that queries trying to resolve a hostname to an IP address will just try each resolver until they get an answer. That's typically not how it works. If one resolver comes back with an NXDOMAIN, we stop asking, because we assume that each of the resolvers in your resolv.conf has a shared view of the world.

That's why, and here's how we're going to get into this, this is where we introduce the idea of stub DNS domains and modified upstreams and those sorts of things, and we're going to play with this idea of split horizon and how it works. So, back to screen and face, let's get into it. I hope that was helpful and that you all understand what I was talking about.
Exactly, yeah. So, moving on here: is the order important? It depends, it really depends on the implementation on your host. If you're using systemd-resolved, it actually tries to be a little smarter about it, and it will try to remove resolvers that aren't responding, but it can really only understand that based on whether the resolver is responsive to that consistent set of queries or not. So yeah, it really depends on your particular resolver; not every resolver does it the same way.

Okay, so the docs link in our notes today, the one about customizing the DNS service, is where this comes from, and we're going to play with it a little bit. To set things up, what I've done is I've created another zone inside of my Route 53 area in AWS, and I have not associated it with the root domain itself.
There's nothing there right now, and that's because I haven't made it public; I haven't exposed that particular set of name records to anything else in the system, I've just defined it inside of AWS. So I've created a new domain for TGIK, and AWS gave me some name servers that are going to announce, or could announce, the configuration that is made up of this zone file.

That behavior has kind of changed over time, right? glibc and musl handle this differently from each other, and that's what I mean: it really depends on your implementation, both of the resolver and of the host. If that's a thing you want to dig into, it's definitely a behavior that one should understand and experiment with. So here's our domain, the new TGIK domain. Let's go ahead and play with this just a little bit before we get started here. Nope!
Which is in line with the configuration that we saw over here, specifically this line right here, which is a really fascinating record that you can do in most DNS implementations: what's called a fall-through, a wildcard hostname. What I've done is I've said anything not more specifically defined resolves to 4.4.4.4, and anything more specifically defined resolves to whatever it is.

The maddie record is 4.4.4.4 because of the way wildcard DNS works: I've got a wildcard record in DNS, and anything I don't more specifically configure will be sent to that resolution. Kind of neat. I set this up because I wanted to play with stub routing inside of the CoreDNS configuration, and we're going to play with that a little bit more and see how it might work, and why it's kind of an interesting use case within Kubernetes. This is the configuration of CoreDNS.
Let's take a look at our ConfigMaps in the kube-system namespace, and take a look at coredns. Here we have the default configuration of CoreDNS inside of the cluster: there's a Prometheus port that it's exposing on 9153, and we're also exposing port 53. And this piece right here, I think we talked about it a little bit last time, but this piece basically configures CoreDNS to forward to the resolvers in the file /etc/resolv.conf.
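For reference, the default Corefile in that ConfigMap looks roughly like this on a recent kubeadm or kind cluster; your cluster may differ slightly, but the forward line is the piece being discussed.

```
.:53 {
    errors
    health {
        lameduck 5s
    }
    ready
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
        ttl 30
    }
    prometheus :9153
    forward . /etc/resolv.conf   # upstreams come from the pod's resolv.conf
    cache 30
    loop
    reload
    loadbalance
}
```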
So however /etc/resolv.conf is configured, I want my CoreDNS pod to use that file to figure out who the next upstream is to go query. When I'm inside of a pod and make a query of my DNS server within the cluster, which, as you recall, is usually the service CIDR plus 10, in my case 10.96.0.10, that query will come to the CoreDNS pod, and the CoreDNS pod will determine: hey, do I have an answer for that? Is that answer part of the cluster.local domain? Is it associated with the kubernetes plugin within CoreDNS? Are you asking me what kubernetes.default.svc.cluster.local is, or is it something I don't know about at all? And if it's something I don't know about at all, then I have to go ask my upstream resolvers to give me an answer. In this configuration the upstream resolvers are all known inside of /etc/resolv.conf, and this is the default configuration.
Right, because the DNS policy here uses the Default DNS policy rather than the actual default. I'm going to make this point again: it's a terrible name for a policy. The DNS policy value Default is not the actual default for dnsPolicy; the default DNS policy is ClusterFirst, which basically uses the configuration presented by the kubelet. But if you configure the DNS policy to be the keyword Default instead of ClusterFirst, then what you get is a view of the resolv.conf as the kubelet sees it.
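A quick hedged illustration of that naming gotcha, with placeholder pod and image names: ClusterFirst, the actual default, points pods at the cluster DNS service, while the confusingly named Default inherits the node's resolver configuration as the kubelet sees it.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: node-resolver-view      # illustrative
spec:
  dnsPolicy: Default            # "Default" = use the node's resolv.conf, NOT the cluster DNS
  containers:
    - name: shell
      image: alpine
      command: ["sleep", "3600"]
```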
So typically the kubelet will be smart enough to make sure that the resolv.conf that the CoreDNS pod sees is one that includes the name servers that the node itself was handed, even if you're using systemd, which is really cool. Even if you're using systemd-resolved locally, the kubelet is actually smart enough to go: oh wait, I can't hand you a resolv.conf that includes 127.0.0.1, you're not in the host network namespace; I need to make sure that I hand you a resolv.conf that makes sense to you. Kubeadm handles this, and I think the kubelet is actually pretty smart about it. There are flags on the kubelet where, if you want to customize what that resolv.conf is and point it at a different file, you can do that too.
All right, moving on here. Let's go ahead and play with this configuration. First we're going to play with it a little bit by changing the upstream resolvers: instead of using resolv.conf, we're going to change the configuration to use 8.8.8.8.
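The change itself is just an edit to the forward line in the coredns ConfigMap; 8.8.8.8 is the example used on stream, and your upstream may differ.

```sh
# Open the CoreDNS ConfigMap for editing...
kubectl -n kube-system edit configmap coredns
# ...and change the forward plugin line from
#     forward . /etc/resolv.conf
# to
#     forward . 8.8.8.8
# With the reload plugin enabled (it is in the default Corefile),
# CoreDNS picks the change up after a short delay.
```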
Let's go ahead and start up our example pod again, like we did before, and we'll do an apk update. Looks like DNS is still resolving. Again, if I cat /etc/resolv.conf, I can see the DNS configured the same as before, using 10.96.0.10; that works the way it did before. And if we do dig, well, apk add bind-tools first...
It's only going to hold a record for 30 seconds, but we are seeing that the TTL clocks are different, and they're different because we have multiple resolvers behind that service. We covered this a little bit in that first episode: because we have multiple CoreDNS pods, the counter here that describes how long this record is valid for will be represented differently depending on which of the CoreDNS pods we've reached. So we talked about that a little bit in the previous episode.
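You can see the per-pod caches by repeating the same query and watching the TTL column; the record name here is just an example.

```sh
# Repeat a lookup against the cluster DNS service; with multiple CoreDNS pods
# each keeping its own cache, the remaining TTL jumps around between answers.
for i in 1 2 3; do
  dig +noall +answer kubernetes.default.svc.cluster.local @10.96.0.10
  sleep 1
done
```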
You know, designed for that particular purpose, and that allows us to further ensure that we're designing these things for the sort of resilience that is necessary. One of the use cases that comes up pretty frequently, and I'm going to call it out now because we'll probably talk about it a couple more times, is things like this. Let me show you this one; it was actually kind of a mind-blowing thing for me when I first discovered it: the DNS query limit.

I need more water here, hold on. This doesn't seem like a low number, 1,024 packets per second, but this limit means that a given EC2 instance, over a given ENI, can make 1,024 packets per second of DNS queries, and after that the DNS servers reject the traffic that exceeds the limit, and that can cause problems. That's a good point.

This is a limit in AWS, and I imagine that other clouds have similar limits, but let's just think about it a little bit from the perspective of a Kubernetes cluster and how it operates, and piece together how this might be a problem. Within a Kubernetes cluster we have a couple of control plane nodes, and then we have maybe a hundred or so worker nodes, and by default, the way this configuration works, even if you didn't give it a short name, it will try the search path first. All of that traffic for cluster-internal names dies right there at CoreDNS; it doesn't get forwarded to the upstream. But any other traffic that comes through, anything that might be considered a short name, would go against the upstream eventually, and that throttling causes all kinds of wacky things inside of your DNS configuration. So say we have two CoreDNS pods and they happen to land on the same node...
So we have this forward line here, and that forward line basically gives us the ability to set our upstream resolvers. It says: forward anything not resolved by a plugin elsewhere in the configuration to whatever the upstream name servers are. In this case it's pointing at a specific upstream; in many cases it'll point at whatever the node has.

So, for example, in my configuration... sorry, Marco, I mean, answering your question, and it's a great question, let's just investigate that real quick and then we'll come back to this part. The question is: are there anti-affinity configurations in the default deployment of CoreDNS? I believe there is some effort of that; I think it's preferred rather than required. Let's look: kubectl get deployment... gosh.

And now, if you think about it, this is a little bit of a race, and this is one of the challenges that we have with this configuration. Say we're bringing up a cluster: we're going to start with the three control plane nodes, and we're going to bring up the worker nodes after that. When we get to the stage where we deploy CoreDNS, that deployment will land on whatever nodes are registered with the Kubernetes cluster at that time.
All right, so there's that. Now back to our configuration. I want to play with this idea, because this will be a fun one to dig into also. We've got our forward configuration; now we're going to play with this idea of stub routing. Earlier I said one does not simply throw another resolver into their resolv.conf if they want to pick up things like these other hostnames. So, for example, if I jump into my configuration and do docker run... actually, I think the reason I don't use the DNS utilities image is that I'd have to remember that path, and here I only have to remember the word bash, which makes it much easier for me when I'm playing around with these sorts of things.

So if I do dig test dot the TGIK domain, I don't see anything resolving, and that's fine; we know that that's not going to resolve. There is a resolver for the root domain, because it is a domain I own, but I don't have anything informing the root domain how to delegate a query for the TGIK subdomain down to that other set of resolvers. So let's go ahead and do the thing I tell people never to do, and explore why it's such a fascinating topic. Oh, shoot.
I'm using aws-vault to handle my credential stuff, which is one of my favorite tools for this sort of thing. I'm probably not going to explore it in this episode, but if you're interested in understanding more about it, definitely go check it out; very, very cool stuff. So here's our name server, our resolver, and we know that it works because we were using our dig commands against it earlier. I'm going to go ahead and copy that one and come over here to my resolv.conf.

All right, so I've set my resolvers; now I have two different resolvers in that resolv.conf. I've got my 10.96.0.10, which is my DNS for the cluster, and then I also have this other name server, which is the resolver for that hosted zone, and I know that they both work. But if I do dig test dot the TGIK domain, I don't get any result; in fact, I got an NXDOMAIN. But I know that one of my name servers in here knows how to resolve that. I can query that one directly, I can do the same dig and just point at that resolver, and that resolver has a record. So why am I getting an NXDOMAIN? This is exactly the challenge we're talking about: you can't just add another name server that knows things the other name server doesn't know to your resolv.conf and expect things to work normally, because they won't. You'll get weird behavior: you might get lucky and it might try to resolve that record against the correct name server, and then you'd be good for, in this case, 300 seconds, maybe.
Waiting for things to converge here... there we go, now we're all converged. So now our CoreDNS pods are all converged, they know this record, and if I look up test, I don't get an NXDOMAIN like I did before. Now each of the CoreDNS pods has been configured with that upstream resolver for that domain, and we're good to go. I can do the same lookup I did before, oh, sorry, my bad, and I can still see the resolution happening dynamically, quickly.

It's holding that record for 30 seconds, and that way we know it's CoreDNS doing it. And this is actually a really fun part: we've configured our system in such a way that, if you're using the default configuration of the DNS policy for your pod, you can now rely on that subdomain to resolve records that are known only to it. We basically added a stub DNS domain, and now we've got a reasonable implementation here, in such a way that we can be relatively assured...
...that if we wanted to handle a specific case, and in my case say my enterprise domain was the TGIK domain and I wanted to make sure that I could resolve names for it, that's what I'm doing in this configuration here. Oh, that's fixed. So what I'm doing in this configuration is saying that if the query is headed for the TGIK domain, if it matches the subdomain for that particular record, then when the query comes in I want to make sure that I forward that request to this name server, or to one of the name servers in this list; you can specify multiple name servers, you don't have to specify only one.

The killer part about that is that now I'm making three decisions. My first decision is: is this record something that is associated with the kubernetes plugin? Is this kubernetes.default.svc.cluster.local, or some-service.default.svc.cluster.local? If it is, answer it. If I don't know the answer, then I'm going to go ahead and forward it to 1.1.1.1, and I'm going to cache the result of that for 30 seconds. And then down below I have specified a subdomain: I've said, but if the query is going to anything under the TGIK domain, then go ahead and forward that request differently, handle that traffic differently.
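Put together, the extra server block in the Corefile looks roughly like this. The domain and the name-server IPs are placeholders standing in for the TGIK domain and its Route 53 name servers used on stream, so treat this as a hedged sketch of the technique rather than the exact config.

```
# Stub domain: send anything under this zone to its own resolvers,
# instead of the default upstream in the main server block.
tgik.example.org:53 {
    errors
    cache 30
    forward . 205.251.192.10 205.251.194.20   # placeholder name-server IPs
}
```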
This is not going to overload CoreDNS. Yeah, split DNS, configured for security, maybe. I don't necessarily need people to know about test dot the TGIK domain; it's not a public record. It's usually only used for my own internal infrastructure world, not for my public-facing stuff, so I don't necessarily want it specified globally.

But the way this is configured, I can make it work for the case where I have a particular domain with host records that are infrastructure-specific, and many enterprises you'll find have this sort of a domain; it'll be something like dev dot my-enterprise dot com, and maybe that's not a public-facing domain, for good reasons. Exposing it basically just slightly increases your attack surface; nothing super serious, but in some ways it could be an information leak.
What else can we do? So, Maddy, the answer to your question is that that Prometheus port is exposed at kind of a system level: the port is on the pod itself, exposing metrics. I guess your question is: can we determine whether a query is going to that stub DNS resolver? Is that where you're trying to head?

I mean, I don't think you need another Prometheus configuration, because you're telling Prometheus to instrument the CoreDNS application, not necessarily that particular subdomain. But let's take a look at what the metrics look like, and then we'll know if I'm talking about the right thing here or not.

Does the CoreDNS pod sync the cache, or is each one managing its own cache? Each one manages its own cache. "I've always added one prometheus line first to get metrics for that." So yeah, that might be right, you might be totally right about that, and I'm kind of surprised; I thought it was actually being instrumented individually. So if we do make that change... that's a very good point, Maddy.
Let's go ahead; we talked about that one. All right, so let's explore this one, and then we're going to get into service discovery things, we're going to get into ExternalName, and then we're going to do node-local DNS. We've got a little bit left to cover in this episode before we're done, so let's keep moving. So there are other things you can configure inside of... what is this? Why is there a note there?

All right, there are other things you can configure via the API inside of Kubernetes: you can configure the resolv.conf specifically for a pod, you can add things to the hosts file for a pod, and you can also do things like configure an ExternalName.

Let's talk about services and names and how those things work, and then we'll get into the discovery piece a little bit, so let's beat that up a little. Kubernetes DNS-based service discovery is an interesting implementation, and we've explored it a bit within the configuration and in the previous episode, but let's go ahead and bring this up again.
So
remember
that
I
think
we
talked
a
little
bit
about
service
discovery
last
time,
but
I
want
to
show
you
a
trick
that
I
was
kind
of
mind-blowing
by
when
we
introduced
core
DNS
is
the
default
cluster
DNS?
We
we
also
as
part
of
the
kubernetes
DNS
plug-in
piece,
enabled
this
really
wild
wildcard
DNS
thing,
which
is
really
fun
so
anything
dot
anything
dot,
cluster
local.
A
There
we
go
all
right,
so
what
this
does,
which
is
truly
wild,
is
that
you
can
use,
dig
or
other
tools
like
it
to
discover
services
with
them.
I
thought
it
was,
maybe
it's
any
any,
rather
than
anything
there
we
go,
so
you
can
use,
dig
or
other
tools
that
can
actually
look
up
post
names
and
stuff
to
discover
services
that
are
exposed
to
cluster
wide
from
anywhere
inside
the
cluster,
using
kind
of
an
interesting
incantation
of
any
any
or
any
star.
A
Within
the
cluster
okay,
so
if
I
define
a
new
service
within
the
cluster,
this
will
now
be
discoverable
from
any
pod
within
the
cluster
using
this
mechanism-
and
this
is
part
of
that
kubernetes
plugin-
piece-
that
for
that
core
DNS
configures
for
the
cluster
allows
us
to
do
service
discovery,
and
this
is
a
really
neat
one
like
what
it
does
is.
It
allows
us
to
effectively
enumerate
all
the
services
that
are
exposed
within
the
cluster
across
the
entire
cluster
right,
and
so,
if
I
were
to
do
cube
kettle,
create
namespace
test
cube
kettle,
create
service.
B
A
Now, if we do kubectl get svc -o wide, we can see these are the services that are known about cluster-wide. We've got that kubernetes default service defined inside the default namespace, we've got test1 exposing our test service within the default namespace, and we've got test1 exposing the test deployment inside of the test namespace. And if we jump back into our pod, like we had before, we see those things pop up, and I think this really helps us understand how we can build on that DNS service discovery within the cluster, how we can build on, or even discover, the different services that are available to us within the cluster.

So I just created two new ones. There's test1.test.svc.cluster.local: the service is named test1, the namespace it's in is test, it is a service record, and cluster.local is the default domain (you can configure that when you bring up the cluster). We have our kube-dns service that we know about, we have the kubernetes service that we know about, and then I also created a test1 deployment and service in the default namespace. We can see all of those things, and we can discover them using this really clever SRV trick. So, just to highlight...
This isn't quite the same as listing endpoints, because what's returned here is actually just the service IP address, rather than the pod IP addresses themselves. Endpoints are representative of the actual backing pods; this is more the equivalent of kubectl get service across all namespaces in the cluster.

I think I remember there being a way to turn it off; I don't remember exactly what that trick was, but from a security perspective it kind of bothered me too. It definitely highlights yet another one of those assumptions that makes it very clear that Kubernetes is not a hard multi-tenancy focused solution; at best it's used for soft multi-tenancy.

So we talked about service discovery things, and we talked about the important assumption about resolv.conf earlier, when we were talking about how one does not simply add another resolver to resolv.conf. Now we're going to look at these two pieces, which are both pretty neat.
Down below here, there are a couple of other ways that you can configure the cluster, or configure your pods. What they're highlighting, which is interesting, is the DNS policies that we talked about. Default, which is not the default, allows you to see the resolver of the underlying node. ClusterFirst is the default for the cluster, and it allows you to see things which are known about within the cluster. ClusterFirstWithHostNet, if you're using hostNetwork: true inside of your pod spec, allows us to configure that resolv.conf so that it still uses the 10.96.0.10 resolver even in host networking, which is pretty slick. And then None is the one we're going to explore next: if we configure None, then we can actually define ourselves what we want that to look like. So let's go ahead and play with that idea.
All right, so the manifest that we just applied inside of our test cluster is this one, and what it's doing is setting the DNS policy that we talked about before to None, and then passing a DNS configuration. Now, I've talked about this lots and lots of times, but just to highlight: if you want to explore what that pod spec might look like, you can always use kubectl explain pod.spec.
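Here's the shape of that manifest; it mirrors the dnsConfig example in the upstream Kubernetes docs, which is what's being walked through here.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-example
spec:
  containers:
    - name: test
      image: nginx
  dnsPolicy: "None"                  # ignore the kubelet-provided resolv.conf entirely
  dnsConfig:
    nameservers:
      - 1.2.3.4                      # deliberately bogus in the docs example
    searches:
      - ns1.svc.cluster-domain.example
      - my.dns.search.suffix
    options:
      - name: ndots
        value: "2"
      - name: edns0
```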
Right, so if you wanted to explore what you could do inside of this particular space, and what these entries are for and what they do, you can actually use kubectl explain to explore that stuff. You can specify a list of name servers, you can specify options, and you can specify different entries to append to the search path, and this is represented as a list, even though it says string; it's the whole list.

So in our example configuration we've specified two searches, ns1.svc.cluster-domain.example and my.dns.search.suffix, and we've configured ndots; if you wanted to customize the configuration of ndots, this is one way you could do it. And we specified the name servers and all of that stuff. So inside of the configuration of our pod, now that we've brought it up with that configuration, if we look at our resolv.conf, we can see exactly what we would expect.
That list is represented in order, so ns1.svc.cluster-domain.example and so on; we've got our options configuration, and we've also got our name server. Now, obviously, DNS will be super broken here: if I were to do ping google.com or curl google.com, it would not work at all, because there is no name server at 1.2.3.4, as far as I'm aware, and it seems pretty unlikely at the moment.

So this is one of the ways that we can modify this. Now, this isn't only for a bare pod; I just brought up a pod, but you can absolutely include this change in a Deployment, a DaemonSet, or any other primitive within Kubernetes that creates containers, because we're defining it right here in the pod spec. So you could do this with a Deployment, a DaemonSet, a StatefulSet, any of those things.
Yeah, so in our configuration, what we saw is that we're actually specifying a couple of host names: we're mapping foo.local and bar.local to 127.0.0.1, so anything that matches those particular host names is going to go there, and we're also mapping foo.remote and bar.remote to 10.1.2.3, and we're configuring that in the /etc/hosts file inside the container.
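This is the manifest being described, essentially the hostAliases sample from the upstream docs.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-pod
spec:
  restartPolicy: Never
  hostAliases:
    - ip: "127.0.0.1"
      hostnames:
        - "foo.local"
        - "bar.local"
    - ip: "10.1.2.3"
      hostnames:
        - "foo.remote"
        - "bar.remote"
  containers:
    - name: cat-hosts
      image: busybox:1.28
      command: ["cat", "/etc/hosts"]   # prints the injected hosts entries
```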
In the example they basically just ran a cat /etc/hosts to prove that it worked, and we can see that same output here. So any query to foo.local or foo.remote will be resolved to the IP address that we've configured here. Now, what's neat about this, and one thing I wanted to call out as a quick funny aside, is that this is not a new trick.

This is an example of a hosts file that I know has been in existence forever. In this particular example, basically what they're doing is determining the name records of things that should not be resolved within their domain, and then configuring those names to resolve against localhost, against 127.0.0.1. This is kind of an extreme example of that: maybe I don't want media.fastclick.net, I don't want my browser traffic to go to that; I want to dead-end that traffic, and the way that I can do that is by sending that traffic to localhost, where I don't have anything running on that port, and it will just die, basically breaking the webpage loading, but breaking it in a very specific way. Yeah, Pi-hole does this; gosh, there's a million examples of this. Pi-hole...
...does this kind of more in the way that CoreDNS does: you have a DNS resolver, you're using that DNS resolver, and you can programmatically configure it to say, I don't want to send any of my traffic to these domains that are known, based on our community, as domains that I don't want to go and interact with. So it's the same trick. That's what I was calling out here: this is a much slighter example of a technique that's been in use forever for doing things like limiting access to things that you don't necessarily want to allow your computer or your application software to go and interact with, so kind of fun. You can do negative and you can also do positive. In this case we're doing positive: we're saying, if you are trying to use this local hostname, then map it here; if you're using the remote hostname, then map it there. And in the other example we're doing negative: I want to throw this traffic away.
So what I've done is I've just used the kubectl create service command to create a service of type ExternalName, and this is part of, again, that kubernetes plugin that is running inside of our CoreDNS pods within the cluster. What this does is it creates a CNAME, so let's go ahead and jump into our pod again.
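The command is a one-liner; the service name and external host here are illustrative.

```sh
# Create an ExternalName service, which CoreDNS serves as a CNAME inside the cluster.
kubectl create service externalname my-google --external-name www.google.com

# From a pod, my-google.default.svc.cluster.local now resolves as a CNAME to www.google.com.
```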
So we saw that second result because there is no way that Google, on its HTTPS-protected website, is actually including this name as a subject alternative name. So instead, when I sent that request, the SAN, the subject alternative name that I tried to query that particular website with, didn't match the certificate, and that's why we got that extra line, and it's also why we got that output saying we failed to verify the legitimacy of that connection.

It would still break, because there's still no entry for my name in their SAN list. So even if I added the host header, it would still fail. I could fool some parts of the system, but at the end of the day, what shows up over there is going to be the important part. All right, so we talked about ExternalName; it's basically a way of creating CNAMEs that you can reference internally.

Say I need to use some local cache, and I'm going to call that local cache X, and I'm going to define an ExternalName for X so that I can always refer to that local cache without having to more specifically define it within a particular region. Then, in my configuration management system, I can say every one of my regions will have X defined, pointing at whatever the local value of X is, and that way I can use a short name to resolve it. I can assume that the connectivity part of it will be resolved. Now, there's no guarantee that the local cache will be in any way usable across these different regions, that's a different problem, but from a name resolution place, this is what the ExternalName service use case is, and it works pretty well.

All right, the last thing we're going to do is explore node-local DNS. Let's go ahead and do that, so: kind delete cluster, and create cluster.
The reason I wanted to recreate it is that I wanted to have more nodes than one: I'm going to bring up my control plane here, and we're going to have multiple worker nodes. That's the important part of this particular test, because it's the node-local DNS cache, and we're going to play with how it's configured, how it works, and all that good stuff.

So we talked about the problem of Route 53 limiting the number of DNS queries we can get away with per second, that kind of thing, and this is another way of resolving that. We could approach that problem in a variety of ways: we could increase the number of CoreDNS pods, we could configure anti-affinity; we could do a number of things that would actually allow us to improve the resilience of DNS within our cluster. But one of the other things we could do, just like we do at the host level, is put a caching DNS server on each of the nodes. Now, in a past life I actually helped to implement this for one of our customers, before this feature was in existence.
this
is
how
we
can
turn
this
feature
on
this.
Is
your
gonna
be
enable
using
the
following
steps
prepare
manifest
similar
to
the
sample
node
local
DNS,
cama
and
save
it
as
X?
So
let's
go
ahead
and
look
at
the
sample
raw,
so
we
have
a
service
account
being
defined.
It's
being
defined
inside
of
the
cube
system,
namespace.
A
A
All
right,
so
this
looks
very
similar
to
our
configuration
before
right.
This
is
the
node
local
DNS
config
looks
very
similar
to
our
configuration
configuring.
A
successor,
denial
cache
denial
for
five
seconds,
success
for
30,
setting
the
peer
local
DNS
and
DNS
server,
forcing
TCP
exposing
Prometheus
on
port
nine,
two
five
three,
instead
of
nine
one
five
three,
and
that
means
that
it
will
be
exposing
that
port
on
the
each
host
or
that,
where
that
pot
IP
will
be
located,
configuring.
B
A
A
A
A
B
B
B
A
A
A
A
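The enablement steps amount to substituting a few placeholders in that sample manifest and applying it. This is a hedged sketch following the upstream docs, using the common defaults (169.254.20.10 and cluster.local) rather than necessarily what was used on stream.

```sh
# Fetch the cluster DNS service IP, then fill in the sample nodelocaldns.yaml placeholders.
kubedns=$(kubectl get svc kube-dns -n kube-system -o jsonpath='{.spec.clusterIP}')
domain=cluster.local
localdns=169.254.20.10

sed -i "s/__PILLAR__LOCAL__DNS__/$localdns/g; \
        s/__PILLAR__DNS__DOMAIN__/$domain/g; \
        s/__PILLAR__DNS__SERVER__/$kubedns/g" nodelocaldns.yaml

kubectl apply -f nodelocaldns.yaml
```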
I'm not sure of a way to flush the cache without deleting the pods; I think deleting the pods might be the way to do it, I'm not sure if there's another way. The good news is the cache is only 30 seconds, so it shouldn't hurt you too badly, hopefully, but that's the way it's configured. All right, so kubectl get...

So it looks like that's all it is, just a health check for whether it's healthy or not; that makes sense. In fact, I remember now seeing it in the deployment; that's actually how it determines that that particular pod is in a healthy state. Neat. So then, let's take a look at our routing table.

To get the response again... and now the interesting thing is that we have a cache locally that we're resolving against. You can see that we're getting the same response, that there's no difference, unlike what we saw before; we're not getting a different response from each query, where our queries were being spread across different DNS pods. What I'm looking at to determine that is this chunk right here, the TTLs on the responses, compared to before we deployed node-local DNS. What we were seeing before was a difference in the results, because the DNS Service would forward us to one of the CoreDNS pods, and every time we made a request we would see the response from one of the healthy CoreDNS pods, and the TTL clock would be different between them, so we would see a different result. In this case, that's not what we're seeing.

So now what we're looking into is that kube-dns DNS-TCP piece, and where it's going to forward that traffic, because what I was expecting to see was that it would be somehow hijacked there, but instead, what I see inside of the iptables configuration is a different thing.
What these rules are saying, and this is a somewhat non-intuitive thing about DNS handling in general, is: if the destination is 10.96.0.10, or the destination is the link-local node-local DNS IP, then handle it at the local port 53 (or the local port 8080 for the health check), and don't forward it to another host; on output, kick this to the daemon listening locally on port 53. So if I run ss -ln again and grep for 53, we can see that we are listening on port 53 right on that IP address, on the link-local address, so depending on which rule a packet matches, it's going to be handed to the daemon listening on port 53. In our output above we're forwarding to the local port 53 using that same IP address, and we're telling it to just bind to our local port. And if we look at the configuration of that from the host with netstat...
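If you want to poke at the same plumbing on a node yourself, the checks are roughly the ones below. The IPs are the common defaults; substitute your cluster DNS service IP and whatever link-local address your manifest configured.

```sh
# The NOTRACK rules node-local DNS installs for the cluster DNS IP and its link-local IP:
iptables -t raw -S | grep -E '10\.96\.0\.10|169\.254\.20\.10'

# And the listener it binds on port 53 (plus 8080 for its health check):
ss -lntup | grep -E ':53 |:8080 '
```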
Forward. There's just a great little diagram here: what's happening is that your pods make a DNS query, and when we see it hit the OUTPUT chain, we bind it to our node-local DNS cache and give you a response if we have it. If it's in the cache, you get the response much faster, and that DNS query never hits the wire.

If we don't know it, then we upgrade the connection to TCP and send that TCP connection to kube-dns, the CoreDNS pods that we know about, and ask them. And in this way we can still handle that earlier manipulation: the configuration for resolving the TGIK domain that I defined at the cluster level would still be honored by that local cache, which is really cool.
This is kind of the best of all worlds. It gives us the ability to modify an existing cluster without any modification to the pods, and we greatly improve the resilience of our DNS queries because we're landing them locally; if a record isn't available locally, then it falls back to the rule above, where the query goes up to the cluster DNS.

I wanted to do a little more chaos testing here to dig into this, but it's really, really neat. Yeah, exactly, Maddy; this is a killer, killer feature for greatly improving the resilience of DNS on Kubernetes. If you're using Kubernetes, you should really consider using this; it's just an incredible tool. Anyway, that's my time today. I hope you all have a wonderful weekend; I know that I'm going to have a great weekend. I'm going to be hanging out with my kid and spending some time with my wife, and I'm planning on taking time off next week. Thank you, thank you, thank you; great to see you all, and I look forward to the next one. The next one, I think, might be Joe, so tune in next week to see Mr. Joe Beda. I'll talk to y'all later.