From YouTube: TGI Kubernetes 122: Grokking Kubernetes: DNS
Description
Come hang out with Duffie Cooley as he does a bit of hands on hacking of Kubernetes and related topics. Some of this will be Duffie talking about the things he knows. Some of this will be Duffie exploring something new with the audience. Come join the fun, ask questions, comment, and participate in the live chat!
Good afternoon, everybody, and welcome to episode number 122. This is going to be a continuation of the Grokking Kubernetes series, and we're going to be exploring DNS, which is one of my favorite topics. There's always fun stuff to do with DNS. Just to kick it off right, I'll share this graphic, which is one of my favorite graphics related to the topic: "It's not DNS. There's no way it's DNS. It was DNS." It's a DNS haiku, and I think you'll find that it's pretty relevant to the topic at hand.
I think I might just have to get into some of the chat. We've got someone saying hello from Jakarta, and a hello from Zug, Switzerland. Checking in from Richard in Virginia, good to see you. Joy and Liam from the UK, good to see you. Tim checking in from Dublin; that's not too far from me. It's close to a town whose name I've never understood. It's called Pleasanton, and Pleasanton is hot, so I don't know how that's pleasant, to me. Mr. Jeromy Pruitt checking in, good to see you, sir. And a hello from Mountains, India; I didn't know there was an area called Mountains. I kind of like that, though.
Yeah, I know, but you've got to show the chat some love when you can, right? It helps me make a lot of really good improvements on things. Sabine saying hello from Istanbul, good to see you too, Sabine, and Igor from Moldova, Martine from the Netherlands, good to see you, Martine, and AJ from San Jose.
Mr. Steve checking in, good to see you, sir. Genic from Livermore, and Phil from NYC, and Mike, one of my co-workers, checking in. Hello! We got Jasper from Tapper, er, Tampa, Florida; I don't know why I said it that way. And Dylan from Williamsburg, Virginia; Peter from Kenya; Jean from Sudbury, Massachusetts; Morteza from Tehran; Roy from Toronto; and Alberto from Colombia. Awesome, good to see you all. Great to see you again.
A
So
if
you
do
want
to
have
a
link
that
you
want
to
share
some
other
content,
go
ahead
and
put
it
in
T,
gik,
dot,
io,
/
notes
and
that's
where
you'll
find
it
here.
We
use
hack
MD
for
this
incredible
service,
all
right
so
this
week
in
K,
its
core
119
is
still
in
beta.
No
changes
in
the
last
2
weeks,
which
is
a
good
sign.
Release
team
has
lots
of
point
releases
stuff
happening
this
week.
Mostly having to do with hyperkube and kube-proxy failures. You may remember that there was a kube-proxy thing around being able to detect the version of iptables available on the underlying host, and I think this is all related to that. So if you're interested in understanding a little bit more about those point releases, you can check them out. But speaking of CoreOS love: hyperkube is deprecated, and it will not be patched or maintained going forward from 1.19.
So if you're using hyperkube in your infrastructure, it might be time to rethink the plan and look at some of the other tooling. Whoops, oh dang, I just closed my HackMD notes. Anybody remember the URL? tgik.io... there we go. And I spelled notes right, you know, because that's how that works.
There we go, all right. Also, there's a great website called lwkd.info, and it's managed by a few friends, including Josh Berkus, whose idea it was and who does an incredible job of maintaining some interesting notes about things that are happening in the space. And what I noticed was that in 1.19 we're actually removing kubectl export from the kubectl command line. It's going away, and I'm kind of sad to see that go.
It was a really super handy thing for me to use when trying to take a manifest from an existing cluster and move it to another one, without having to actually go through and remove a bunch of the boilerplate stuff manually. But it's been deprecated now for a bit, and it's mostly part of the move towards server-side apply, which kind of changes the way those things work. So kubectl export will be gone in the newer versions of kubectl.
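For anyone who never used it, kubectl export stripped the cluster-generated fields out of a retrieved manifest. Here is a rough sketch of that cleanup done by hand with standard tools; the manifest is a made-up example, the field list is illustrative rather than exhaustive, and a real manifest deserves a YAML-aware tool like yq rather than grep:

```shell
# A sample manifest as it might come back from `kubectl get pod mypod -o yaml`,
# with cluster-generated fields mixed in. (Hypothetical object, for illustration.)
cat > /tmp/manifest.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: mypod
  uid: 6e2c9f1a-0000-0000-0000-000000000000
  resourceVersion: "123456"
  creationTimestamp: "2020-06-19T00:00:00Z"
  selfLink: /api/v1/namespaces/default/pods/mypod
spec:
  containers:
  - name: app
    image: nginx
EOF

# Drop the fields the cluster owns, roughly what --export used to automate.
grep -vE '^[[:space:]]*(uid|resourceVersion|creationTimestamp|selfLink):' /tmp/manifest.yaml
```

The last command prints the manifest without the uid, resourceVersion, creationTimestamp, and selfLink lines, leaving something you could apply to a different cluster.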
Other exciting news from the Kubernetes ecosystem. Let me pull back up my chat here, make sure I'm not missing anything. Harbor has graduated from the CNCF, so that's pretty exciting. Let's take a look at that one. Harbor, which is a project that was donated to the CNCF by VMware and some other folks, just recently hit Harbor 2.0, and it's an incredible registry.
If you're looking for a registry to host your images locally or in your cloud environment, definitely check out Harbor. It's a thing that you can install and deploy directly on top of Kubernetes if you want to, or you can deploy it in a variety of other ways. So it just recently graduated; that's super exciting. If you're interested in playing with Harbor, definitely check it out. I know that we've done a few episodes on Harbor as well, so definitely check those out. What else? Dex, which is another project I worked on at CoreOS.
It's now a CNCF Sandbox project. It just got accepted into the CNCF and it's now being onboarded, so that's exciting news. Dex, if you're unfamiliar, acts kind of like a middleware between your source of user authentication, which could be LDAP or OIDC or a variety of other technologies, and Kubernetes, because Kubernetes itself can only interact with... let me just pull this up real quick. My brain is not really with me 100% today.
...whatever store you want to refer to for authentication. And so the benefit of that would be, if you had an internal LDAP or Active Directory or something inside of your corporation, and you wanted to be able to reference users from that directory, Dex can act like the glue in between there, right? So the call would come in to authenticate to Dex, you would pass your credential to it, and it would then challenge the credential against LDAP or whatever it is
that you have configured as a backend. Once Dex was satisfied with your authentication to that store, then what you get back is a token, an OIDC token or a JWT, that the API server can use to authenticate you into Kubernetes. Now, this is just authentication. This is authn, not authz.
So this would handle the ability to actually teach Kubernetes about users that are stored inside of Active Directory or OIDC or other stores of information, but it won't handle things like granting you access to stuff. It just authenticates you. So, interesting tooling, and it's great to see it get donated to the CNCF, because it means the project's moving forward, and we'd like to see some more adoption. I know that we could probably use a few more contributors on it.
So if you want to check this out, you can check it out here. My friend Stephen is one of the maintainers. There's lots of good stuff happening, and it does a pretty good job of documenting itself, so definitely check that out. This was the authentication tool that we built for Tectonic back in the day.
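To make that flow concrete, here is roughly what the two ends look like once Dex is in place. This is a sketch with invented names and URLs; the flag names and the oidc auth-provider fields are the standard Kubernetes OIDC ones, but every value here is an assumption:

```yaml
# API server side (flags, abbreviated):
#   --oidc-issuer-url=https://dex.example.com
#   --oidc-client-id=kubernetes
#   --oidc-username-claim=email
#
# kubeconfig user entry carrying the token Dex returned:
users:
- name: duffie
  user:
    auth-provider:
      name: oidc
      config:
        idp-issuer-url: https://dex.example.com
        client-id: kubernetes
        id-token: <JWT returned by Dex>
        refresh-token: <refresh token>
```

On each request, kubectl presents the id-token, and the API server validates it against the issuer and maps the claim to a username; authorization is still whatever RBAC you have bound to that user.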
A
Another
interesting
project
that
just
popped
up
a
friend
of
mine
was
telling
me
about
it.
Oh
thank
you
very
much.
Whoever
that
is
well
pointed
it
out
and
I
think
this
is
actually
kind
of
an
interesting
project.
It's
written
behind
John
Reese,
who
seems
you
working
on
it
pretty
directly,
and
this
projects
called
constraint
and
I
think
it
could
be
another
interesting
episode
to
catch.
It's pretty early in its lifecycle, so I haven't committed to doing an episode on it, but it looks pretty interesting. This tool basically exists to provide tooling that allows you to generate constraint templates that you could use not only for something like the OPA project, but also for things like conftest, and that, I think, is probably one of the more interesting implementations for that sort of thing. The killer
part of that would be: if you went down that path, you'd be able to actually use conftest to validate the configuration of things ahead of time, before those things got inserted into your cluster, right? So say you wrote a constraint template that said no pods can run without having some resource definition; like, you have to at least provide some resource definition before the pod will be allowed into the cluster.
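As a sketch of what such a policy looks like in the Gatekeeper ConstraintTemplate shape: the Rego and all of the names here are illustrative, not taken from Konstraint's docs (Konstraint's job, roughly, is to generate templates like this from your Rego so the same policy can also run under conftest):

```yaml
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: requireresources
spec:
  crd:
    spec:
      names:
        kind: RequireResources
  targets:
  - target: admission.k8s.gatekeeper.sh
    rego: |
      package requireresources

      # Reject any pod whose containers carry no resource limits.
      violation[{"msg": msg}] {
        container := input.review.object.spec.containers[_]
        not container.resources.limits
        msg := sprintf("container %v has no resource limits", [container.name])
      }
```

The same violation rule, fed a plain manifest instead of an admission review, is what a conftest run in CI would evaluate.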
So I like that. I like that built-in approach. I like the idea of being able to actually validate both in flight, using an admission controller, and also ahead of time, using perhaps a step in your CI/CD flow. I think that's pretty killer functionality. So if you're interested in playing with this, definitely check it out. I think John was looking for some people to try it out and see what they think. When I chatted with him earlier,
he said it was ready for people to play with and see what they think. But yeah, check it out, pretty cool stuff. Coming on back to the chat now.
Let's see how we're doing. We've got a hello from the chat, we got Moses, good to see you, and Sebastian from Hungary, and Jukka from Helsinki, and Miran from Turkey, good to see you. Peter, what's up, good to see you, Peter. Alex Barnes, good to see you. We got Philippe from Paris, and we got Maddy.
Oh, Maddy, I met Maddy; he attended a meetup. That one did look pretty good. It was a virtual one, if I remember correctly. And that's a good question, Alex; you should stay on him for that. It would be great to have him present there. Daniel from Warsaw, good to see you, Daniel, and Marcin from Krakow, Poland, and Robson from Fortaleza, Brazil. Wow.
And Mo is asking: Kubernetes 1.18 supports dry run on both the client and the server, so do we still need a project like Konstraint?
I believe that we do, because really this is about evaluating resources before applying those resources, right? Providing some ability to define business policy that may be beyond the things that we can do with, like, pod security policies, those sorts of things, and being able to enforce those policies both in flight, using an admission controller, and also before applying those resources, like inside of CI/CD, as a test. So that's why I
think it's still important. Pedro's saying hello from Scotland, good to see you, Pedro. And Jeremy's saying hey there, assuming I'm not out because of the nice fog here. Actually, I'm just inside because, you know, I like it inside my house as well as outside, but I should get back out in the garden. Maybe next week I'll do it from the garden. It's a good point.
All right, definitely check out Konstraint. This next piece: the folks at Flant have an article... no, the folks at learnk8s have put up a visualization on how to quarantine a pod in Kubernetes. So if you've ever wondered how that works, it gets into some of the pieces that are necessary. It doesn't quite get all of it, but I think it does a good job.
So say your goal is to take a pod out of service, so that it's no longer receiving traffic and it's no longer part of the deployment object itself, and the deployment would see: oh, I'm short one, I have to make a new one. This is how you can do that. You've got this pod, it's malfunctioning, and you want to move it out of the service. One way to do that is just to handle that label change, right?
Once you've changed that label, the pod basically becomes isolated, and now the deployment object will realize that there are only two matching pods remaining, and it will set up to create a third one, or a fourth one in this case. Now, what's interesting is the owner references are still there, right? So when it came time to delete, you would still see pod one go away. But yeah, cool stuff. So if you're interested in understanding how that works, definitely check it out.
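The mechanics hinge entirely on labels. A sketch with invented names: the deployment's selector matches app: web, so overwriting that label on the sick pod, for example with `kubectl label pod web-abc123 app=web-quarantine --overwrite`, removes it from both the service's endpoints and the ReplicaSet's replica count, while the pod itself keeps running for you to debug.

```yaml
# Before: the pod matches the service and deployment selector.
metadata:
  name: web-abc123
  labels:
    app: web

# After relabeling: no selector matches it any more, so the deployment
# spins up a replacement and the service stops sending it traffic.
metadata:
  name: web-abc123
  labels:
    app: web-quarantine
```

As noted above, the ownerReferences on the pod are untouched by this, which is why a later cascade delete still takes the quarantined pod with it.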
Oh, this one is from you?
Definitely pretty cool, yeah. If you're interested in understanding how to provide configuration to things within your cluster, and if you haven't already explored this, I think this would be a great entry point for people: understanding how ConfigMaps work and some of the concerns with them. Because they definitely call out some of the challenges. Like, if you're going to use a ConfigMap, it basically places that file, or environment variables, inside of the runtime of your container. But if you want to change those things dynamically, how do you change them, right?
If you change the ConfigMap, your environment variables aren't going to dynamically get updated. If you change your ConfigMap, a file that is based on that ConfigMap can be dynamically updated, but that doesn't mean that your application is watching for that dynamic update. And so these are just a few of the things that you might concern yourself with when considering how ConfigMaps work. It's great.
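The two consumption modes being described look like this in a pod spec (a minimal sketch, names invented). The environment variable is captured once at container start; the mounted file does get refreshed by the kubelet when the ConfigMap changes, which is exactly why the application still has to watch the file to notice:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
  - name: app
    image: nginx
    env:
    - name: LOG_LEVEL            # fixed at startup; editing the ConfigMap won't change it
      valueFrom:
        configMapKeyRef:
          name: app-config
          key: logLevel
    volumeMounts:
    - name: config
      mountPath: /etc/app        # file contents do get updated in place, after a delay
  volumes:
  - name: config
    configMap:
      name: app-config
```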
It's a great entry point for ConfigMaps inside of Kubernetes. My buddy Alex Ellis wrote an article about troubleshooting apps on Kubernetes, and it's always a good thing to revisit. I think a lot of folks are pretty decent at it, but it's always good to kind of get back to basics for those sorts of things, right? So we have kubectl get events, which is a great way to actually understand what the system,
what Kubernetes as a system itself, is doing. We have kubectl describe, which is really an important one. Describe is neat because it tries to give you a human-consumable output of the configuration of the object, and it also pins down, at the bottom of that object, the events that are related to it, so you're getting both. And then you have logs.
And, oh Alex, I'm surprised. Okay, well, there's also logs -p, which means logs previous. So there's two things: kubectl get logs, or kubectl logs on an object, will allow you to see the logs for the current object. But if you're having a problem where, like, the last one died and you want to know what the logs for that last one were, then you can do kubectl logs -p for previous, and it will show you the previous logs. Pretty cool stuff.
In the weird science category, this one actually reminds me of another one I've seen, which was an implementation of Docker in bash called bocker. So when I saw this, I thought of that one, but this is actually pretty cool. This is an implementation of Docker written in Go, right? And so if this is the sort of neat science stuff that's interesting to you, definitely check it out. This is an implementation of all of the pieces
it takes to create a container, inside of Go. Pretty cool; I dig that a lot. I mean, that's such a neat idea, and they get into what the pieces are, how they work, what namespaces are. They get into layered file systems and how the images and stuff work. So yeah, if you've ever wondered a little bit, or want to learn a little bit more, about the underlying implementation of containers, this would probably be a killer article to dig into that stuff: using unshare, sewing stuff into namespaces.
Very, very cool. So if you're interested in understanding kind of that underlying implementation a bit more, definitely check that out. That's pretty cool. Alex is saying hello from Northern California; hello back from Northern California, good to see you. Roy is saying plus one for kail. Do you mean k-rail, or kail? Okay, okay, it's the tail one, yeah.
What were the others? I think, you know, there's so many projects, but I know what you mean. Mister Willy saying hello, good to see you, Willy. And Bogdan mentioning kail, and AJ: eventrouter is a good one. Yeah, eventrouter was basically a tool to take events and throw them toward a logging solution, exactly, yeah.
But yeah, we talked a little bit about events, some of the churn that events represent inside of the Kubernetes distributed system, in the etcd episode, so if you're interested in that, check that out. All right, in the other weird science category: I saw this article and I have to admit, I was a little snarky, perhaps, on Twitter. But that's what Twitter's for, right? And this article says Bayer Crop Science sees the future with 15,000-node GKE clusters, and on the one hand, I'm like, wow.
15,000 nodes is a very large cluster, and in my opinion, for tons of reasons, this is like a crazy anti-pattern. Right, like one of those reasons might be: this is one huge failure domain. You have one control plane for fifteen thousand nodes. That's nuts. Like, what happens if the control plane goes away? Now, fundamentally, for fifteen thousand nodes, the kubelet, well, it's pretty autonomous. It's going to keep things going while the control plane is gone, but at the same time, a lot of that stuff is tunable. Like, it's crazy, I
think you're right, taints are the most underrated concept in Kubernetes. So I saw this and I was like, you know, I understand Kubernetes well enough at this point that I understand the scope of work necessary to accomplish this, and at the same time I'm like, but you need to kind of have a real conversation about why we're building fifteen-thousand-node clusters, because there's probably a better strategy here, in my opinion. But I'd love to understand the use case.
That's just my point. Anyway, enough of that. Now we're back to: it's not DNS, there's no way it's DNS, it was DNS. So let's get into our checklist. We have a lot to cover this time, and so I wanted to kind of get into it pretty early on and see what we can get done in about an hour, hour and a half. All right. So what's DNS?
DNS is this, right: there are going to be IP addresses, or network addresses, for the things that we actually use, you know, for those services and stuff, and nobody wants to remember something like 2607:f8b0:4005:808::200e. They really don't want to remember that; that's an IPv6 address. But they probably also don't want to remember a bunch of IPv4 addresses either, and so we created this abstraction.
The Domain Name System, right. DNS gives us the ability to give a thing a short name, rather than having it have a super long name. We can use a short name for that thing, and that way we can look those things up and we can reference them. Now, like all abstractions, there are problems, right?
DNS is one of the first distributed systems that I ever worked with, and I call it a distributed system because, if you look at how it works underneath the covers, there are usually a number of domain name servers, and places where caching works, across that whole system before you actually get your result. So when I do host google.com... actually, let's just break that up a bit: cat /etc/resolv.conf.
I resolve host names against localhost, and that's because I actually have a caching DNS resolver installed on my laptop. And this means that I don't have to go and look up every result everywhere; I can hit that cache and get a cached result for any host name that I look up. So if I do host google.com again, this result is cached. I don't actually have to go out and look it up again.
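What a caching resolver is doing can be sketched in a few lines of shell. This is a toy, not systemd-resolved: the upstream lookup is a stub that returns a made-up address, and the only point is that a repeat query for the same name never goes upstream.

```shell
#!/bin/sh
# Toy DNS cache: the first lookup for a name is a "miss" and goes upstream,
# repeat lookups are served from a local file-backed cache.
CACHE=/tmp/dnscache.$$

upstream_lookup() {
  # Stand-in for a real recursive query; no network involved here.
  echo "142.250.72.14"
}

resolve() {
  name=$1
  hit=$(grep "^$name " "$CACHE" 2>/dev/null | awk '{print $2}')
  if [ -n "$hit" ]; then
    echo "cached: $hit"
  else
    ip=$(upstream_lookup "$name")
    echo "$name $ip" >> "$CACHE"
    echo "miss: $ip"
  fi
}

resolve google.com   # first time: goes "upstream"
resolve google.com   # second time: answered from the cache
```

The real thing also honors TTLs and negative answers, but the hit-or-go-upstream shape is the same.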
So when I make a query for host google.com, the request goes to a cache, that local 127.0.0.1, the systemd-resolved daemon, and it says: I have an answer for that, and it provides it to me. But if it doesn't have an answer for that, then it goes and asks these guys up here for an answer to that same question, and on up the stack it goes, right? If the upstream doesn't know the answer, then it needs to go and figure out who the authoritative domain name server is.
Did that work? Okay, so test.metal.kates.work has the address 1.1.1.1. But how did it determine what the IP address associated with that record is, right? That's not known by my local cache, per se. So there's actually a whole series of queries that happen when we try to determine what the answer to this is. So if we do dig test.metal.kates.work, we can see the result of this particular query.
So if I do dig test.metal.kates.work, and I add an @ symbol and paste this in, then I get the same result that I got before, but this time I've queried that particular server directly, right? This is actually how I can query a specific name server, or even the authoritative name server, directly for the result of an answer.
So this is a way to understand, kind of, how DNS works functionally, from a high level. There are a variety of different records that we create inside of DNS, but the important ones for this particular episode are going to be things like NS records, which represent the authoritative server, or how we actually find the host, or the information, for a particular record.
Another thing we're going to use is A records, which describe a mapping between a hostname and an IP address, or a set of IP addresses. And then the last one we're going to use is a CNAME record. I don't think I have an example of that one right now, but we're going to get into it. CNAME stands for canonical name, and what it does is it actually allows us to just map one hostname to another, rather than having to map back to an IP address.
We can actually just say: test.metal.kates.work, whenever somebody queries me for that, I want you to respond as though they had queried for this other name, a canonical name. And we'll dig into that in a bit. Oh, interesting, a DNS link in the chat that I hadn't seen. Let's go check that one out.
Here's the result: cached, cached, cached, right? We got these cached, cached, cached resolves. That's pretty slick; I kind of dig how that's been done. So it's a good way of actually understanding how resolution works. Our host name, directly, was test.metal.kates.work. Then we had to figure out who the authoritative entry point for that was. That sent us all the way back to the TLD, which was the .work TLD, and then we went up to my domain on the .work TLD, kates.work.
So when I create a container by default, running on Linux, I get an /etc/resolv.conf file, and we can see that my resolv.conf looks very different than it does on the underlying host. And this is an implementation detail, not of Kubernetes, but of Docker itself, right? Docker realized that my name server was 127.0.0.1, and that I had a local caching systemd-resolved process handling that stuff, and it was like: yeah,
if I point you at localhost, that'll be the localhost inside the container with you, and nothing will resolve. So instead, I'm going to modify your resolv.conf such that you actually get working resolvers, and I'm going to do that by pointing at the upstream resolvers, the ones that were there before the modification. And the benefit here is that we get working resolvers, right? I can do host google.com, or I can do apk add.
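The decision Docker is making can be sketched like this. This is not Docker's actual code, just the shape of it: copy the host's resolv.conf, drop any loopback nameserver that would be unreachable from the container's network namespace, and fall back to a public resolver if nothing survives. The file contents here are hypothetical.

```shell
#!/bin/sh
# Hypothetical host resolv.conf with a local stub resolver on loopback.
cat > /tmp/host-resolv.conf <<'EOF'
nameserver 127.0.0.53
nameserver 192.168.1.1
search example.internal
EOF

# Keep everything except loopback nameservers, which the container can't reach.
grep -v '^nameserver 127\.' /tmp/host-resolv.conf > /tmp/container-resolv.conf

# If no nameserver survived, fall back to a public default.
grep -q '^nameserver' /tmp/container-resolv.conf || \
  echo 'nameserver 8.8.8.8' >> /tmp/container-resolv.conf

cat /tmp/container-resolv.conf
```

Here the 192.168.1.1 upstream survives, so the container resolves through it directly and simply loses the benefit of the host's local cache.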
So you do dig test.metal.kates.work, just like we did before, and we get a result. We do host google.com, we get results. Those things all work for us. Now, if you're inside of a container, inside of an operating system that doesn't already have these tools, like dig and host and those sorts of things, usually they're part of a package of software that can be referred to as bind-tools or bind-utils, and that's what I had to install here.
I think that's finally been addressed, but it was a problem for a bit. One of the other things that's interesting about the musl implementation is that the way it conforms to resolving things is a bit different, in that Alpine Linux will basically make a query every single time for every record, whereas glibc and such will actually cache some of those things, and will handle these things in kind of a serial fashion.
So if you're interested in understanding a little bit more about how musl does its thing versus how glibc does its thing, definitely check this stuff out. They call out a couple of different, really good, salient points. Having a caching name server is a great way to reduce the number of DNS queries that hit the wire. Having a DNS server
that is local to you, right, like if you're actually using your ISP's, or one that's local inside of your data center, is always going to be kind of lower latency, and if it's caching, even better. Maybe, like, with a bigger cache, it can actually handle offloading those DNS queries. But if you're going out somewhere far away to cache those things, it means that you're going to, like, 8.8.8.8, for example.
So these things all kind of factor into how we think about this, but it is good to know, and it is actually a really interesting point, because it's come up a few times: how does an Alpine Linux container handle a lookup, versus how do glibc, or some of the more standard libc libraries, handle those sorts of things? They do handle them differently, and mostly the real salient difference
is that you get more queries on the wire when you use musl than you do when you use glibc, because of the way it serializes the requests and what it does in the meantime, effectively. So check this out, good Kubernetes stuff. Other stuff happens on the host too, which is interesting. So I want to take a look at these real quick inside of our Docker container.
Yeah, well, you know what, let's do this. This is actually more relevant in here anyway. Create...
The TTL is basically the time to live for a record, right. And so when we did dig test.metal.kates.work, we saw this value here. This is the number that tells us the TTL, the time to live, for this record. At the end of this expiration, whoever is caching this particular result will have to go up and validate against its next upstream again, and get back a TTL. Now we're back at 299, 297, 295, and it'll keep counting down.
So when I create this record inside of AWS Route 53, or anywhere else, and I specify a TTL of 300 seconds, I'm saying: nowhere in the world should somebody assume that test.metal.kates.work is going to point to the same IP address for more than 300 seconds. They can assume that I won't move it for 300 seconds, and anything further than that, they have to recheck with me and find out if that's changed. And you're right, 300 seconds is pretty short-lived.
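The countdown behavior is simple arithmetic: the record's TTL (300 here) is fixed by the zone owner, and each cache hands out whatever is left of its own local countdown. A sketch:

```shell
#!/bin/sh
# Remaining TTL a cache will report: the record TTL minus the seconds since
# it cached the answer, never below zero; at zero it must re-query upstream.
remaining_ttl() {
  record_ttl=$1
  elapsed=$2
  left=$((record_ttl - elapsed))
  [ "$left" -lt 0 ] && left=0
  echo "$left"
}

remaining_ttl 300 1     # freshly cached: prints 299
remaining_ttl 300 153   # mid-life: prints 147, like the in-cluster resolver below
remaining_ttl 300 400   # expired: prints 0, time to ask upstream again
```

Two caches that fetched the same record at different moments will report different remainders, which is exactly what the two terminals show next.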
So up here, inside the container, we do apk add bind-tools, and then we do dig test.metal.kates.work, and down here we do dig test.metal.kates.work. Down here we have a hundred and forty-seven seconds, and up here we have two hundred and ninety-nine. And so what I wanted to point out here is that the TTL countdown is locally relevant. It's not a global value that everybody tracks.
The thing that everybody tracks is the TTL value that comes back, 300 seconds. So when you, as a DNS server, or inside of your Linux kernel, resolve that name, you're going to honor that result for that many seconds. That's it, not more than that, only those 300 seconds, right? But what I was trying to point out here is that these values would be different, even though we're still talking about the same Linux kernel; we're still talking about all that stuff.
The way that a gethostbyname lookup happens in both of these two places is different. There are two different network namespaces, two different sets of libraries. One of them has a caching name server; the other one doesn't, right? And so in my interaction with this particular server, I'm seeing 300 seconds; in my interaction with my local caching name server, I have one hundred and forty-seven seconds.
So if I do a refresh, as I bounce through the different backends, we can see the TTL changes in ways that don't make a ton of sense, right? Because we're actually getting a different resolver each time, and some of those resolvers are at 129 seconds and some of those resolvers are at 273 seconds, and you can kind of see how that's working. Whereas, like, with my local cache, I'm not going to see that kind of change. I'm going to see it change once per second, right, and that's because the thing that's actually caching that resolution is systemd-resolved.
So, good point, Bogdan. All right, moving on here. Other things to know about the host. Sorry, okay: how does your system resolve these things? Like, what configuration do you have on the host that allows you to modify the way these things are resolved, right? So we already talked about cat /etc/resolv.conf. Actually, I'm going to jump into my cluster.
So now we're in the same bash container, but we're inside a Kubernetes cluster. We do cat /etc/resolv.conf, and we can see that our resolver is different. It's no longer the resolver that I had on my underlying host; it's a resolver that points to a service IP address, 10.96.0.10. So whatever your service CIDR is for your cluster, it's always going to,
well, it's usually always going to be .10 inside the cluster. But we see a few other things that are interesting about this resolv.conf that are different than the one I had on my local machine, right? Like, in my local one, I had a search line, and I had a couple of different name servers. Inside the cluster, what I see is I have one name server, pointed at a service that can be backed by multiple things.
I have this thing called options ndots:5, and we'll talk about that here in just a bit. But also look at my search line. The search line is also different, right? My search line includes default.svc.cluster.local, svc.cluster.local, and cluster.local, and then also whatever my upstream search line was, right?
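Put together, the in-cluster resolv.conf being read here typically looks like the following. The service IP and cluster domain shown are the common defaults; yours may differ:

```
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
```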
One of the questions that people ask me sometimes is: can I use a hostname inside of a NetworkPolicy object, right? Can I say I want to allow a pod access to a particular URL, or to a particular host name, rather than: I want to allow access from a pod to a specific IP address? The reason I'm calling this out is because, if we look at the result here, we can see that google.com is resolving to at least two different IP addresses, although it's not doing it anymore.
So when we're looking up these records, there are actually two different IP addresses that could be google.com, right, which is pretty interesting. That means that google.com has two valid IP addresses, and this is going to be true of tons of stuff; it's not just Google. Because it has multiple A records,
we're randomly going to be handed one of them as a result. And that's the thing you have to consider when you're thinking about whether it's possible, or whether it's even a reasonable design, to implement a thing that would allow you to define network policy based on hostname. Somebody still has to resolve that, and hopefully that someone is the entity that's going to forward your request to whatever IP address was resolved. So you have to be careful.
A
All right, now, other stuff that's happening inside of resolv.conf. We have these search fields, and the reason these search fields exist is that they allow us to look things up by short name. So if we do host kubernetes, we can see I just typed in kubernetes and it resolved to kubernetes.default.svc.cluster.local and gave me an IP address for it. So even though I only said kubernetes, I was able to determine that there is actually a thing called kubernetes.
A
It just has a whole bunch of other things after that name. I was able to resolve that and determine an IP address for that record. But how does that work? What are some of the things that we can use to configure that? So inside of that resolv.conf, when we looked at the search path: basically we're configuring the resolver, the local resolver, whether it's musl or glibc, to query whatever this record is against each of these search domains. And the difference between glibc and musl...
A
...is that musl queries all of them at once and tries to determine the answer from one, and glibc will take this one and see if it resolves, and then check the next one and see if it resolves, and then the next, and then the next. And so the volume of DNS traffic resulting from that is very different. This is actually especially true of Alpine, which uses musl.
A
So that's what that search path is doing: it's allowing us to resolve those host names. Basically, if we provide a name, host fred, we can see that fred doesn't return anything. There's no fred anywhere in these: there's no fred.default.svc.cluster.local, there's no fred.svc.cluster.local, nor is there one just inside of the cluster.local domain, or at the top level.
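A tiny sketch of the expansion a glibc resolver performs for a short name like fred under that search path (the search domains here are the typical in-cluster ones; your cluster may append its own upstream domain as well):

```shell
# Expand a short name through the cluster search path the way glibc does:
# try each search domain in order, and the absolute name last. With
# options ndots:5, any name with fewer than 5 dots goes through the
# search list first, so a nonexistent short name costs several queries.
expand_query() {
  name=$1
  for domain in default.svc.cluster.local svc.cluster.local cluster.local; do
    echo "query: ${name}.${domain}."
  done
  echo "query: ${name}."   # finally, try the name as given (absolute)
}
expand_query fred
```

Each printed line corresponds to one DNS query that may hit the server before the lookup finally fails, which is why missing short names are comparatively expensive.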
A
Now, before we move off of this topic, I want to talk about this particular search path in particular. This one here says default.svc.cluster.local, and this is happening because the pod that I deployed is in the default namespace. And so, by default, some of the magic of service discovery is that any service defined in the same namespace that you are also located in, you can address by name. Let's just play with that idea.
A
So if I have two deployments in the same namespace and I'm just trying to reference both services within that namespace, I can just use that short name to get there, and that's fundamentally how it works. If we're using glibc to look things up, glibc is going to honor the search path inside of /etc/resolv.conf, and it's gonna see if it can find a record for us.
A
Exactly; Ori has been bitten by this one himself. I myself have been bitten by this. I'm like, I know that I have the record, because I can do host, and host works. How come dig doesn't work? It's because dig by default doesn't actually use your search paths at all. But if you want to turn that on, then you can just specify that by adding +search, and it will give you the response. So, interesting challenge. All right, other things I wanted to show you about DNS. Host is better than ever? That's true, yeah.
A
It's actually a little closer to how your application would use it, too, because host is going to use the glibc implementation. Okay, now let's just quickly talk about nsswitch.conf and gai.conf. These are two things which I don't want to miss. So we talked about how search paths work, we talked about /etc/resolv.conf. There are two other super important files, actually three other super important files, but let's take them in order.
A
Not every container actually has this, but this file, nsswitch.conf, /etc/nsswitch.conf, informs the system in what order to look things up. So when I go looking up a hostname with this configuration, it will go and search files first, and we'll talk about what files means here in a second, and then it will search the DNS records.
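For illustration, the line in /etc/nsswitch.conf that drives this ordering looks something like this on a typical glibc system:

```text
hosts: files dns
```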
A
There we go, all right, that's what I was trying to get at. For some reason host isn't actually honoring the nsswitch, but applications like ping would, right? So curl and ping and applications that actually do that sort of lookup are going to honor the nsswitch, and if we were to use strace or something, we could actually even see it hitting that call. So in this case we see the result of bash resolving to an IP address if we do ping bash.
A
We do ping echo, right? There's no echo entry in the hosts file, but echo resolves to a service IP: echo resolved, using our search path just like we talked about before, to echo.default.svc.cluster.local. That resolved to this cluster IP, and it's not pingable, because it's a service, but hey, DNS, that's what we're working on. Okay. So, because of the nsswitch, whenever an application that honors glibc is doing a lookup for a hostname...
A
...it's gonna look at files first, so /etc/hosts, and then it's gonna look at DNS, so that's gonna be your resolv.conf, and it's gonna take it in that order. If you have a local entry defined, if you have your hostname defined inside of /etc/hosts, then that's gonna take precedence over what's inside of DNS. So if I do host google.com, I can see this resolving to all of that stuff, and then I echo a new line onto the end of /etc/hosts.
A
Right, so I just added a new record for google.com. Now if I do ping google.com, it's resolving to 1.1.1.1, even though that's not the authoritative record. I've configured, in the nsswitch, that I want you to trust the file before you trust the DNS server. This is kind of an interesting thing, especially from a security perspective; things get kind of interesting there when you can get away with stuff like that. All right.
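Sketching that demo, the line appended to /etc/hosts was something along these lines (1.1.1.1 standing in for whatever address you want the name pinned to):

```text
# /etc/hosts is consulted before DNS because of "hosts: files dns"
1.1.1.1   google.com
```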
There's one other file inside of a glibc system that's important, and that is this one.
A
gai.conf. And this one is a hard-learned lesson that I've learned quite a few times in my career, but I'm gonna share it with you so you can just learn it from me, which would be awesome. So GAI stands for getaddrinfo, and it configures the ordering of the way that results are evaluated when you do a lookup.
A
So even though I could resolve an IPv6 address, I couldn't reach it, and so that was breaking DNS for the world. And so I ended up using gai.conf to basically fix that temporarily, basically setting the precedence of my v6 results lower, so that I would always resolve IPv4 first, and then the problem went away until we were able to actually resolve the problem with IPv6. But gai.conf and nsswitch.conf: we had those two things. That's what I wanted to show you.
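As a sketch (check the gai.conf man page on your distribution for the exact semantics), the classic change is to add a precedence line for the IPv4-mapped prefix so getaddrinfo sorts IPv4 answers first:

```text
# /etc/gai.conf: prefer IPv4 results from getaddrinfo()
precedence ::ffff:0:0/96  100
```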
A
Boom, boom. The TLS connection: I want to talk on this really briefly; I don't want to spend a lot of time on it. I know that we've talked about it before. Yeah, right, the joys of dual-stack; it never really gets old. So I'm gonna use a tool called mkcert, and if you haven't heard about mkcert, it is awesome. It's a tool that you can use to generate certificates locally.
A
Now, whenever we make a request, whenever we're interacting with a TLS-secured endpoint, we have to actually make sure that the hostname or the IP address, or however we're identifying that host, is encoded into the certificate. And this is super important, because if we get it wrong, then what happens is that we're not going to be able to actually authenticate the endpoint, because there's no subject alternative name that matches the record that we're hitting.
A
For example, here I'm using 127.0.0.1 as the hostname to communicate with my API server, and I'm also using this other port. But that means that the serving certificate in front of my API server has to include that IP address, and this is actually why I'm getting into it, because, in my opinion, this is all related to DNS. So if I docker exec into my node...
A
...and we look at the subject alternative names, we can see that the IP address 127.0.0.1 is in there, and so that's why we can communicate with it. I can actually change that name inside of my kubeconfig to any of these records and it would work, as long as I can reach it; like, obviously localhost would work in this case, because it's still referencing 127.0.0.1.
A
As long as I've got a record somewhere that points localhost to 127.0.0.1, we're good to go. I could also use this IP address, or this one. And if we think about it, this is kind of how other things are interacting with the kubernetes service inside of kubernetes as well: we're allowing you to communicate with kubernetes by the short name. You can also use a slightly longer name; you can use kubernetes.default, and you can also use kubernetes.default.svc.
A
The reason this is so important is because when you're actually establishing connectivity with that other thing, typically speaking, inside of certs is where we canonicalize what the valid host names for that thing are, right? Certificates force us to get it right. If we don't get the certificate right, then we won't be able to terminate connections on that other thing in a state of trust; that connection will fail, and we won't be able to communicate with it.
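You can poke at the same mechanics without a cluster. This sketch generates a throwaway self-signed certificate carrying the kinds of subject alternative names an API server serving certificate would (the names here are illustrative, not taken from any real cluster), then prints the SAN section; it assumes OpenSSL 1.1.1 or newer for the -addext and -ext flags:

```shell
# Create a throwaway key + cert whose SANs include both DNS names and an IP,
# roughly what kubeadm bakes into the API server's serving certificate.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/demo.key -out /tmp/demo.crt \
  -subj "/CN=kube-apiserver" \
  -addext "subjectAltName=DNS:kubernetes,DNS:kubernetes.default,DNS:kubernetes.default.svc,DNS:localhost,IP:127.0.0.1"

# Print just the subject alternative name extension.
openssl x509 -in /tmp/demo.crt -noout -ext subjectAltName
```

Any name or address you plan to put in your kubeconfig's server field has to appear in that list, or the TLS handshake will fail verification.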
A
...or IP addresses, or both, that you're going to use to communicate with that secure endpoint, and so you have to get that right or it will be a rough time. All right, cool. This is all going a little slower than I thought it would, but you know, we're gonna get through it. I hope you all are finding this stuff interesting. I think DNS is, like, one of my favorite subjects, because there's always so much to cover. All right.
A
And it's called kube-dns, and there's our cluster IP. That's one of the interesting things about services: you can define your own cluster IP, and that's why it's always the same IP; it'll always be .10 of your service CIDR. And these are the ports that it is exposing: it's exposing 53 for UDP and 53 for TCP, which are the ports for both of those protocols, and then we're also exposing this other port, 9153, for TCP. We'll talk about all three here in a second. Oh, sorry, yes, you reminded me.
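A trimmed sketch of what that Service looks like (the clusterIP shown assumes the common 10.96.0.0/12 service CIDR):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
spec:
  clusterIP: 10.96.0.10   # pinned, rather than allocated, so it never moves
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
```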
A
So, because in some cases the fully qualified name for any service is gonna have five different dots in it before it's resolved. Sorry, what I mean is it can have five dots because it can be exactly service.namespace.svc.cluster.local. That's right, you're right, it's rare, but this is definitely still a good thing worth digging into, and I think this is actually a thing that...
A
...was pointed out earlier: can you actually include a trailing dot at the end of your hostname lookup? Somebody was pointing this out, talking about how to speed up the resolution of these things. If I rely on the short name kubernetes, that's going to incur a bunch of traffic, and if I'm watching the wire for all of that traffic, we're gonna see a lookup for kubernetes.default.svc.cluster.local, a lookup for kubernetes.svc.cluster.local, a lookup for kubernetes.cluster.local, and so on.
A
And that means that we can actually incur quite a lot of traffic, and so this is kind of where those things sort of come in. If you want to understand a little bit more about that, it also points out some of the interesting things you can do with dnsConfig: inside of the pod template you can actually specify dnsConfig and modify things like ndots, or your search paths, or things like that, for a particular pod or deployment. All right, that's what that is. Now back to our cluster.
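A sketch of what that looks like in a pod spec (the field names are the real pod-spec ones; the values are illustrative):

```yaml
spec:
  dnsPolicy: ClusterFirst
  dnsConfig:
    options:
    - name: ndots
      value: "1"    # lookups with at least one dot now skip the search list
    searches:
    - default.svc.cluster.local
```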
A
And by default, for quite a while now, most of the implementations of kubernetes out there have shifted to CoreDNS, I think since about 1.16 or something like that, and I remember putting up a link to the change that made that particular move. Let me see if I can find it... I can't find it, all right. Well, it was a while ago when we actually moved CoreDNS to general availability. Originally we used a different thing called kube-dns.
A
So, by default, this is a pretty standard deployment of kubernetes DNS. CoreDNS is made as a Deployment; it's deployed onto whatever hosts you have available. Sometimes it's not deployed in a balanced way; in this case I'm actually just running on a single-node cluster, so it's deployed only on that one node, and even though I have two pods resolving things, that just means, kinda, that I have two places where I can cache the result of a thing. That's the way DNS works inside of kubernetes.
A
This configuration has a few different things about it which I want to talk through real quick. It's a pretty simple configuration, but let's talk through it all. So we're exposing port 53 for UDP and TCP. So if your application looks up DNS names over TCP, that'll work, and so will the more standard UDP. There are limits and requests configured, and there is a configuration file.
A
So this line right here actually configures CoreDNS to use the underlying host's resolv.conf. dnsPolicy is a way of configuring how the resolv.conf will be presented to your pod, and this is actually kind of an interesting one. It's interesting also because it's sort of a terrible name: Default is not the default DNS policy. If we do kubectl explain pod.spec.dnsPolicy, we can see the different options. There are a few different options for the DNS policy, and it defaults to ClusterFirst.
A
This is one of the things that drove me absolutely crazy about kubernetes at first. So it defaults to ClusterFirst, and that means that we see a resolv.conf just like we saw inside of our bash container, where we use the resolver 10.96.0.10. But we have a few other ways to configure that resolv.conf inside of kubernetes.
A
We can configure it with Default, and when we configure it with Default, we present to the pod the same resolv.conf that the underlying host sees: wherever the kubelet's running, whatever the kubelet sees as that resolv.conf, that's what we're going to get inside the pod.
A
And the reason that's important is because this is actually, by default inside of kubernetes, the way that CoreDNS sets its upstream resolvers. Because we have the dnsPolicy for the CoreDNS deployment using the Default configuration, that means that CoreDNS is automatically configured to leverage the DNS resolvers of the underlying host to interact with its upstreams.
A
There's also ClusterFirstWithHostNet. Actually, sorry: it allows you to use the resolvers that are expressed by the ClusterFirst policy while the pod is using host networking, so let's talk about that here just for a second. A question from Bogdan: is dnsPolicy managed by the kubelet? So I guess it depends on what you mean by managed. The kubelet is responsible for determining what to put inside of your resolv.conf; the kubelet is responsible for that, right.
A
All right, so kubectl get pods, and we have a couple of different pods running. We do kubectl exec -ti into the default bash pod, cat /etc/resolv.conf; that's our Default. Now, because we have set the dnsPolicy to Default, we see a very similar configuration to what the underlying kubelet sees. The underlying kubelet sees this configuration, and we can see that because if we do, actually, docker exec -ti kind-control-plane...
A
Now this was super interesting, because it looks just like Default, or rather it looks just like ClusterFirst; I mean, there's no real difference. We still see options ndots, the name server is 10.96.0.10, and we see the search path all configured. So what's different about this, then? The difference is this: I'm seeing all the IP addresses from the underlying host. That means that I'm actually using the host network. So I have a daemon that perhaps needs to run on the underlying host, in the host's network.
A
Maybe it's doing Wireshark or something else, and I needed it to actually be exposed to the underlying host. This gives me a way of exposing that underlying host network and still allowing it to resolve services that are running inside the cluster. So if I were to do apk add bind-tools, and I did host echo, for example, I'd still be able to resolve it, even though I'm on the underlying host's network. Now, what's interesting about this is if I do the same thing from inside of the control plane...
A
There needs to be a configuration that allows you to do that, because, by default, your host can't use CoreDNS as its resolver; it can't use the kubernetes DNS resolver as its resolver, because, if you think about it, there's a chicken-and-egg problem here. I need to be able to resolve host names for things that I care about, like docker.io, to pull images and those sorts of things.
A
I need to be able to interact with those things long before I get to a place where I have kube-dns deployed, and this is that chicken-and-egg problem: your host's DNS resolution needs to be working before you have kubernetes deployed. And so for you to actually interact with, or be able to resolve, hostnames...
A
...you need to be able to make that work before you have kube-dns. And then, if you want to expose a way for those system processes that run on your host to resolve in-cluster names, you can actually use something like ClusterFirstWithHostNet, which relies on kube-dns being up, but you have to be careful with it.
A
This line right here, which is I think one of the more important ones, tells it to forward any requests to our /etc/resolv.conf, and we just figured out, in what we were talking about just now, how we populate that /etc/resolv.conf: we're populating it by using DNS policy Default. And that means that if we don't have a cached response, then we're going to send that query up to the resolvers configured in resolv.conf. And then here's that piece from earlier, remember?
A
We talked about the fact that DNS caching inside of CoreDNS is always hard-set at 30 seconds. That's this line right here: it's telling it to cache the result for 30 seconds, no matter what. It doesn't matter how long the TTL is for the upstream record; CoreDNS will only hold a valid record, any record, for 30 seconds. And there's much more of this configuration file to explore, but we're not going to explore it this time; we're gonna explore it next time. Well, let's see.
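Putting those pieces together, the relevant stanzas of a stock Corefile look roughly like this (trimmed; a real deployment carries a few more plugins and options):

```text
.:53 {
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    prometheus :9153           # the 9153 metrics port from the Service
    forward . /etc/resolv.conf # send everything else to the host's resolvers
    cache 30                   # hold every answer for 30 seconds
}
```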
A
...how many cache misses, what the cache size is, the DNS request duration in seconds; it's a histogram, so we can determine, over time, whether we're seeing DNS requests go up or down, for the UDP protocol or for the TCP protocol, getting information about different zones, the global zone; there's also a zone for...
A
I think I need just a few more words to understand what you mean, Mahon. Are you saying that you want to see what's in the cache? You want to understand what's held as a record in CoreDNS, which I guess is the same thing; is that what you're asking? Okay, so I don't believe that is exposed, and if you think about it, we couldn't really expose that as a metric. I'm not aware of a way of determining what's inside of CoreDNS's cache.
A
Again, 30 seconds, right, configured inside of the zone for the cluster, and this is the kubernetes plugin. And so what's actually happening here is that this record isn't ever placed inside of a flat file anywhere. This record represents a cached result of a programmatic response coming from CoreDNS as a daemon. This is like an application result, not something recorded in a flat file. I don't ever have to write down all of the services inside of my entire cluster and what IP addresses they're associated with.
A
Instead, the kubernetes plugin takes that call. It says: oh, you're looking for echo.default.svc.cluster.local. It then evaluates the services that it has cached, and it says: oh, there's a new service that's been defined, it's called echo.default.svc.cluster.local, and here's the cluster IP associated with that service. And it caches that for 30 seconds, and it sees if it continues to exist. If it stopped existing, then the kubernetes plugin would stop reporting on it.
A
I haven't actually explored the CoreDNS cache prefetch feature, which is an interesting point; basically it gives you the ability to populate what's inside the cache a little more aggressively. I haven't explored it too much, and I haven't seen too many others exploring it much either. Again, it's 30 seconds, so you would have to think about the tuning a little bit more, I think, to really get some real value out of that one. But it is an interesting point.
A
All right, my friends, it's 2:50; I'm gonna call it a day here, and I hope you all have a wonderful weekend. I really enjoyed digging into DNS with y'all; it's always a fun thing. I'm probably gonna do another episode on DNS, 'cause there's a ton more that I wanted to cover, and I know DNS is kind of a strong topic. I was just kind of surprised by how long it took for me to get through the content that I had already, but we'll come back and beat up on it.
A
Some of the other things I wanted to cover were things like configuring CoreDNS specifically; you can actually specify the upstreams, and there are some good examples of why you would do that, and there's even a way to configure what's called stub DNS, or maybe we'd say split-horizon DNS, inside of CoreDNS, and we'll talk about what that is next time we talk.
A
We talked about why that is called that, and that's the service discovery piece. We're gonna talk about service type ExternalName, which is a DNS trick, and we're also probably going to talk about services with clusterIP: None, which are also referred to as headless services, which is also kind of a DNS trick. So we're talking about both of those...
A
...next time we get together. And we're also going to explore NodeLocal DNSCache, which is really cool, because it means you have a caching name server on the node that caches all of the responses for all of the pods that are local to your node. Super cool stuff, and I can't wait to dig into it with you next time. But that is my week. I hope you all have a wonderful time. Thank you, thank you so much, and we'll see you next time.