Description
https://coredns.io/
https://github.com/coredns/coredns
In this webinar, you’ll learn how CoreDNS is designed, and how the integration with Kubernetes works. You’ll find out how and why to use CoreDNS in place of the default kube-dns in Kubernetes deployments.
Join us for KubeCon + CloudNativeCon in Barcelona May 20 - 23, Shanghai June 24 - 26, and San Diego November 18 - 21! Learn more at https://kubecon.io. The conference features presentations from developers and end users of Kubernetes, Prometheus, Envoy and all of the other CNCF-hosted projects.
A
All right, there's two minutes; I'm going to get cracking. Welcome, everybody, to this webinar. The sound, I think, seems fine. Today we've got John Belamaric, who is going to be giving us an introduction to CoreDNS. In case you haven't been on one of these webinars before, we encourage you to ask questions throughout the presentation. Since you are not able to speak, please put your questions into either the Q&A or the chat box at the bottom, and I will find opportune moments to interrupt and ask those questions.
B
Great, well, hello everyone, and thank you for attending today. I'm John Belamaric; I maintain CoreDNS and I'm with Infoblox. I'll talk to you today about CoreDNS, giving an introduction to what it is and some of its more interesting and unique features, and then go into a demo of bringing CoreDNS up as the in-cluster DNS in a Kubernetes cluster.
B
So to start: what is CoreDNS? CoreDNS is a cloud-native, authoritative DNS server. It's essentially the successor to SkyDNS for dynamic, DNS-based service discovery; it's intended to be a better SkyDNS than SkyDNS. In fact, it was started by the same author as SkyDNS, Miek Gieben, who came from Google. He still leads the project, and we have a number of people from Infoblox as well as other people in the community involved.
B
The
really
key
thing
about
core
DNS
is
this:
as
an
extensible
middle,
we're
very
flexible
request
pipeline,
so
it
becomes
very
easy
to
add
additional
functionality
to
it
and
we'll
talk
more
about
that
in
a
bit.
We
are
an
inception
project
with
the
CMC
up
we
joined
in
in
March,
and
we've
been,
you
know,
really
rayful
to
the
scenes
you
have
for
giving
us
opportunities
like
this,
to
talk
to
you
as
well
as
time
at
conferences
as
well
and
access
to
to
some
CI
resources
and,
in
fact,
Linux
Foundation,
which
published
EMC
iPad
apart.
B
Why is CoreDNS different from SkyDNS? The SkyDNS team started something to allow dynamic service discovery backed by etcd, but that's really still a very narrow use case. What we wanted to do was create a DNS server that could be used for ordinary DNS as well as for service discovery, and that would allow us to back different kinds of data stores and apply different kinds of manipulations to the request as it goes through. So it's designed from the start based upon the Caddy web server.
B
This has let us create a lot of unique features that would have been more difficult to add to something like BIND. In particular, we've added functionality to encrypt DNS over TLS, which is fairly standard, but we've also got our own invention: an integration with gRPC. So we can essentially tunnel DNS through gRPC over a TLS connection, which is more secure.
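As a rough illustration, serving a zone over TLS or over gRPC in CoreDNS is configured per server block. The sketch below uses hypothetical certificate and zone-file paths, and the exact directives may differ by CoreDNS version:

```corefile
# Serve this zone over TLS (DNS-over-TLS); cert/key paths are illustrative.
tls://example.org {
    tls /etc/coredns/cert.pem /etc/coredns/key.pem
    file /etc/coredns/db.example.org
}

# Serve the same zone tunneled through gRPC over a TLS connection.
grpc://example.org {
    tls /etc/coredns/cert.pem /etc/coredns/key.pem
    file /etc/coredns/db.example.org
}
```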
B
We
are
working
on
integrating
with
some
external
policy
servers
and,
of
course,
we'll
talk
more
about
later
here,
as
we
have
integrated
directly
with
kubernetes
to
be
the
in
cluster
DNS.
So
essentially,
here
what
we're
doing
is
showing
how
you
can
use
the
traditional
mechanism
we
used
in
sky
DNS
with
NCD,
or
you
can
alternately
directly
talk
to
the
community
api
and
in
fact
you
can
do
this
both
at
the
same
time.
B
For
different
zones
and
sort
of
make
a
more
flexible
configuration
that
way,
in
addition
to
fit
the
youth
in
cloud
native
stacks,
it
is
a
full-fledged,
authoritative,
dns
server
that
supports
all
kinds
of
other
traditional
DNS
use
cases,
so
a
major
architecture
and
how
that's
really
one
of
the
key
features
of
core
DNS.
This
is
a
diagram
showing
how
the
requests
are
processed
when
they
come
in
to
core
DNS.
B
In the case of example.io, we'll pull it from a file and we'll log it; for example.net, we don't bother logging it; and for everything else, we actually run it through the kubernetes middleware, because maybe this is in-cluster and we want to see it resolved by the in-cluster DNS. That leads us to this middleware we're talking about. Like I said, there are essentially two categories of middleware: request manipulators and backends. Backends enable you to source data from different types of repositories.
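The pipeline just described might look roughly like this in a Corefile; the zone names come from the talk, but the file paths are hypothetical:

```corefile
example.io {
    file /etc/coredns/db.example.io    # serve this zone from a zone file
    log                                # and log queries for it
}
example.net {
    file /etc/coredns/db.example.net   # served from a file, not logged
}
. {
    kubernetes cluster.local           # everything else: in-cluster resolution
}
```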
B
So the most typical backend, of course, is file: an ordinary zone file, the same as you would have used with BIND and could still use with BIND. We have another one, auto, which works really well in combination with git-sync. Essentially, you have a directory that we monitor, and whenever you make a DNS record change, you can commit it to GitHub; git-sync will automatically pull it down and populate that directory, and we will begin serving that DNS.
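A minimal sketch of the auto backend watching a git-synced directory; the directory path is hypothetical, and the exact option syntax may vary by version:

```corefile
. {
    auto {
        directory /etc/coredns/zones   # zone files dropped here (e.g. by git-sync) are picked up automatically
    }
}
```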
B
So those are some of the kinds of backends; of course there are etcd and kubernetes, and a number of other ones, added as needed by the community. On the request-manipulation side, this is where it gets interesting. We've got a lot of standard DNS features, things like caching, both positive and negative caching, and we support DNSSEC. But we've also built in some functionality that's really geared for cloud-native environments, such as distributed tracing, which enables us to trace requests.
B
When we have all these different kinds of middleware, the DNS processing becomes a little more complicated, so distributed tracing allows us to actually trace DNS requests throughout our set of applications; you'll see a slide later where we make use of this. There's the integration with Prometheus for running in a Kubernetes or other cloud-native environment, there's the health check, and there's proxying, which allows us to serve different zones locally and then send unknown queries out to a different server.

B
I mentioned the external policy engine. This is a use case where we take advantage of all this middleware. It's an Infoblox SaaS offering, and today the way we do it is with a specialized Unbound. The functionality is essentially blocking queries for, say, command-and-control domains for botnets or something like that. So a customer in a branch office, on-prem, makes a DNS request.
B
That goes out to a cloud where we determine whether or not it's a bad domain. If it's a bad domain, we refuse or redirect; if it's a good domain, we simply resolve it. That's today's solution. We've been investing in CoreDNS partly because we want to use it in this SaaS offering, and so we're looking, over the next couple of months, to roll that out using CoreDNS.
B
Here's where we use some of this really interesting middleware. In the current system, we use Unbound with something called response policy zones, which are great but take a lot of memory; and whenever any of these policies change, which can happen every few minutes, it requires Unbound to reload all of those response policy zones, which can take a minute or two. So instead, we're replacing all of that with an open-source policy engine called Themis that we're building, though we could also use something like Open Policy Agent, which is another open-source policy engine. We have CoreDNS on-prem; the client contacts it using ordinary DNS, and then it takes that request, wraps it up in TLS using our gRPC proxy, and sends it up to a CoreDNS in the cloud. Actually, I skipped a step there.
B
The on-prem CoreDNS will also append certain data to the request that identifies the site and the customer. Then the CoreDNS in the cloud knows how to unpack the gRPC request; it extracts that extra data that was appended and sends it, along with the query name, the source IP, and things like that, to the policy engine, which can then make the decision. At that point, the CoreDNS policy middleware will decide whether to pass the request on upstream to the recursive resolver, or to deny, refuse, or redirect it.
B
So by doing this with CoreDNS, we've eliminated the need to reload, we've enabled ourselves to use stock Unbound instead of a specialized one that supports RPZ, and we allow more arbitrary policies to be implemented in this external policy engine, in a way that we couldn't do before.
B
With CoreDNS, we've spent the last few months building out the Kubernetes integration, the kubernetes middleware, so that it can be a drop-in replacement for the existing kube-dns. So why would we do that? Well, we see a number of issues with the existing kube-dns. One is the lack of flexibility: you don't have access to all these same DNS features that are already there in CoreDNS, because kube-dns is specialized only for Kubernetes. CoreDNS also has built in a lot of the things that kube-dns today needs to run extra sidecar processes for. So we have a single process, a single executable, that handles health, handles caching, and handles the Kubernetes integration; there aren't three different pieces running at the same time, which is what you have with kube-dns. We also saw some issues, including some things that were done that weren't ideal.
B
So, for example, one of the issues is that the pod-by-IP lookups are simply an echo back of whatever you send. The intent of that feature, pod IP lookups, is for wildcard TLS certificates, but by echoing back anything that's sent, you're kind of breaking the trust of the certificate for the purpose of validating identity. I'll show that a little bit later. We've also made it a lot easier, with those existing features within CoreDNS, to do customized DNS entries within a Kubernetes environment.

B
There's also a use of CoreDNS in Kubernetes that's actually not done by our core team, by which I mean Miek and me and a few other people who are the core team on CoreDNS, but by another set of open-source developers who are really just users of CoreDNS. It's not so much a change to CoreDNS as a use of it: CoreDNS is really the only on-prem alternative for a federated DNS provider within Kubernetes Federation.
B
In Kubernetes Federation, there's a method by which it will configure DNS entries to help the different clusters find each other, essentially. Today, that's generally done with cloud DNS providers like Route 53, Google Cloud DNS, or Azure, which was just recently added. But for on-prem DNS services, where you're not using a cloud service, you can use CoreDNS as that provider. So that's actually a completely distinct use of CoreDNS within the Kubernetes ecosystem; it's not in-cluster DNS, it's completely different.
B
We have also heard from some companies about the need for doing federated DNS, meaning multi-cluster DNS, without necessarily running a Federation control plane. They're looking for solutions, and we have a proposal we're reviewing with the community on how to do federated DNS, with or without a control plane, by using CoreDNS instances in each cluster that communicate with one another. So that's something we're looking at. Now I want to drill more into the Kubernetes case, so I'm actually going to show a demo here.
B
What we have here is a Minikube that I started up on my laptop. It's running the standard kube-dns. I just want to show how it works, how you replace the standard kube-dns with CoreDNS. We have a GitHub repository in our CoreDNS organization called deployment, and it's got some directories: one teaches you how you deploy CoreDNS under systemd, and here's the one I need, the Kubernetes deployment.
B
I'll show you that now. What we give it is the service CIDR, so that we can do reverse lookups on services, the cluster domain, and the base file to use. In Kubernetes 1.6 we have role-based access control, so we need to add a slightly different file than we had previously for older versions of Kubernetes.
B
Okay, all that does is spit out the YAML here; I'll talk about that a little bit. In order to get CoreDNS running in-cluster, it needs access to certain resources in the cluster via the Kubernetes API, so we need to set up a service account. This is the role-based access control I was talking about: it gives CoreDNS access to those resources. That's what the first few resources do.
B
Then we set up a ConfigMap, which represents the configuration of CoreDNS. In this ConfigMap, we're going to catch errors and log them, we're going to log ordinary queries, we want to support the Kubernetes health checks, and we're going to use the kubernetes middleware to handle the cluster.local domain and do reverse lookups. Everything else is just set to the defaults of the kubernetes middleware. In this particular example, anything that doesn't match cluster.local will pass through to this proxy middleware, which will use the name servers configured in the CoreDNS pod's /etc/resolv.conf to do the upstream lookups, and we're going to cache results for 30 seconds. So this essentially more or less duplicates the standard kube-dns functionality. You could also enable the prometheus middleware; that's not enabled in this one, but if you're running Prometheus you can enable that as well.
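Reconstructed from the description, the ConfigMap's Corefile would look something like this; the service CIDR shown for reverse lookups is a placeholder:

```corefile
.:53 {
    errors                     # catch errors and log them
    log                        # log ordinary queries
    health                     # answer Kubernetes health checks
    kubernetes cluster.local 10.0.0.0/24   # cluster domain plus reverse zone for the service CIDR
    proxy . /etc/resolv.conf   # forward everything else to the pod's configured upstreams
    cache 30                   # positive/negative cache for 30 seconds
}
```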
B
One thing we do in this particular deployment file (you can customize these and do whatever you want with them, and we'll do that for our demo) is make the assumption that you're replacing an existing kube-dns. The script actually looks up your DNS cluster IP, sticks it in here, and keeps the service name kube-dns. The reason we do that is so that existing pods, whose resolvers already point at that cluster IP, keep working without being reconfigured.
B
As an example, one thing I want to show is how we can use what's essentially the functional equivalent of what's called a stub domain. Say you have your cluster running, and you want your services to be able to access certain things within an internal domain, and you want everything else that isn't within the cluster to get resolved by some common DNS server. That sort of conditional forwarding can be done; in the latest version of kube-dns, it can do it too.
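A sketch of such conditional forwarding in a Corefile; the internal domain and server addresses are made up for illustration:

```corefile
corp.internal {
    proxy . 10.1.1.53      # stub domain: send corp.internal queries to an internal DNS server
}
cluster.local {
    kubernetes cluster.local   # in-cluster names handled by the kubernetes middleware
}
. {
    proxy . 8.8.8.8        # everything else goes to a common upstream resolver
}
```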
B
This is running Minikube, as I said. One of the things about Minikube that's a little bit odd is that it runs something called the addon manager. The addon manager will actually undo the changes we make to certain services, so I'm just going to turn off kube-dns; otherwise, when we replace the kube-dns service with our own configuration, the addon manager will come along later and undo our changes. So now I'm going to apply our custom CoreDNS.
B
That just launched a pod; it's an Alpine-based pod with some basic DNS utilities installed, as well as curl and things like that, so I use this to show the testing. I guess the first and easiest thing to show is that it does in fact work: it does resolve cluster names, and this is our CoreDNS now. We can see we resolved that to the cluster IP.
B
And in the logs, we can see that it gets the request as well. Here's an interesting thing. The way Kubernetes works, each pod that gets launched has its own resolv.conf. If you're familiar with the way DNS works, there's something called a search path, or a search list; essentially, when you send in a request that's got fewer than a certain number of dots, the resolver will go through that search path.
B
So
what
what
happens-
and
this
is
just
standard-
DNS
resolvers-
the
way
they
work,
so
you
can
see
that
what
actually
gets
requested
is
this
set
of
this
mean
which
it
has
close
to
that
local.
So
initially
the
kubernetes
middle
and
we'll
pick
that
up
and
it'll
say:
oh
well,
that's
not
a
valid
service
name
anywhere,
so
I'm
just
going
to
return
it
and
then
it
sort
of
iterates
through
this
search
path,
and
this
is
a
standards
to
an
extended
kubernetes
behavior.
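For reference, a pod's resolv.conf typically looks like this (values are illustrative); the `search` list plus the `ndots` option is what drives the behavior described:

```text
nameserver 10.0.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
```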
B
We talked about the rewrite middleware. Essentially, what this does is: whenever a query comes in for a specific domain, it changes the name internally, within CoreDNS, for the rest of the middleware chain. The idea of why you might want to use this in a Kubernetes environment is this: say you have some service running TLS that externally is known as www.coredns.io; the certificate being served up is signed for that domain name, www.coredns.io. So things that are outside your cluster can use that name, they can resolve it, and the handshake works. Things inside your cluster could use that name too, but the traffic would actually hairpin out, assuming you're running that service in your cluster: it would hairpin out and come back through the outside world, which isn't very efficient. Rather than doing that, it would be nice to be able to just use the internal service name.
B
The problem is that if you use the internal service name, the certificate isn't signed with that name, so you're going to get a hostname mismatch and your TLS handshake is going to fail. So we can kind of trick the resolver here by taking in that external name, the www.coredns.io one, and just switching it internally to a different one. So when I run this, what we should see is the internal address come back, which is what this is.
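A hedged sketch of that trick with the rewrite middleware; the external and internal names are stand-ins:

```corefile
. {
    # Queries for the external name are answered with the in-cluster service,
    # so in-cluster clients avoid the hairpin while the TLS hostname still matches.
    rewrite name www.coredns.io coredns.default.svc.cluster.local
    kubernetes cluster.local
    proxy . /etc/resolv.conf
}
```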
B
So, in order to avoid that workaround being a security issue, we introduced this concept of pods verified. What pods verified does is actually watch the pods from the Kubernetes API, not just the services and endpoints. Typically, kube-dns, and CoreDNS in its default configuration, only watch services and endpoints; you don't really want to load the pods, because that uses some extra memory. But if you're using these wildcard certificates and using this feature, it's definitely more secure.
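Enabling this looks roughly like the following; the extra memory cost comes from CoreDNS now watching pods through the API:

```corefile
. {
    kubernetes cluster.local {
        pods verified   # answer pod-by-IP queries only for pods that actually exist in that namespace
    }
}
```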
B
Okay, it resolves properly. Essentially, what we're doing is keeping the same DNS schema, the way kube-dns works with these pod addresses, but we're validating, when you request a pod by IP address in a particular namespace, that there actually is a pod with that IP running in that namespace. So if I were to change this to a different namespace, it will return a negative answer.
B
That's a great question. Right now it's going to depend on some of the features you have enabled, but we're actually still evaluating that. One of the things we get with the CNCF is access to a cluster in a bare-metal cloud, and so what we're working on now is both automated and manual evaluations of questions like that. We're going to take that bare-metal cloud, build out Kubernetes environments of different sizes, and monitor and measure the different configurations.
A
That answers the question, then; do feel free to type back in the chat if it doesn't. The second question is: is there a recommended setup for using CoreDNS external to Kubernetes, but still able to look at Kubernetes services, without requiring Federation, just separate instances both backed by Kubernetes?
B
So, if I understand the question, it's running a CoreDNS outside of the cluster but having it look up cluster services. You can actually do that. The issue is that for cluster services, the IP address that gets returned is usually the cluster IP. But if you have routable pod IPs, for example if you're using a VLAN or something that's routable for your pod CIDR, you could use SRV records. SRV records and headless services work a little differently: rather than returning the cluster IP, which is the Kubernetes internal load balancer, the SRV records or headless services return the actual pod IPs. So if you have routable pods and you're using headless services, you certainly could either run your CoreDNS in-cluster, like we've shown here, and just give it a NodePort or LoadBalancer type of service, or you can run CoreDNS external to Kubernetes; you simply have to give it, in your configuration, the Kubernetes API endpoint and the connectivity and credentials and things like that. It is designed to be able to run externally.
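A sketch of an out-of-cluster CoreDNS pointed at the API server; the endpoint URL and credential paths are placeholders:

```corefile
cluster.local {
    kubernetes cluster.local {
        endpoint https://k8s-api.example.com:6443   # reach the API from outside the cluster
        tls cert.pem key.pem ca.pem                 # client certificate, key, and CA for the API
    }
}
```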
B
One thing I was going to mention: we also have an experimental feature that's a little bit related to this. One of the issues people see with kube-dns (I talked a little bit about that search path) is that when you query for something outside the cluster, say google.com, you'll see that you get a bunch of queries, a sort of amplification of the queries that are hitting the server.
B
So the client goes back and forth, back and forth, until finally it resolves to something. We have an experimental feature that has some bizarre edge cases but might be useful to people in some situations. We will actually take that first query, recognize that it's coming from a particular IP address belonging to a pod, and therefore a particular namespace, recognize that the trailing piece is one of these search-path suffixes, and just resolve the query directly and return you the result. It reduces the number of queries the client has to do over and over again, so it improves latency as well as reducing the load on the server. It is experimental; there are some edge cases where things go a little bit weird, which wouldn't apply to most people's systems, but that's why we're keeping it experimental.
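In later CoreDNS releases, this experimental search-path shortcut became the autopath middleware; a sketch follows, though the syntax shown here is from later releases and may differ from the version demoed in the talk:

```corefile
. {
    kubernetes cluster.local {
        pods verified          # pod watching is needed to map source IPs to namespaces
    }
    autopath @kubernetes       # answer search-path queries server-side in one round trip
}
```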
B
All right, so a little bit about our future plans. One is that we want to do some things on the basic DNS functionality, not particularly the Kubernetes side; we have Kubernetes things planned too, but some of the basic things first. One is zero-touch DNSSEC. If you've used something like Caddy, it integrates with certificate-management services like Let's Encrypt; essentially, it automatically sets up TLS for a website or web server. We want to do the same thing for DNS.
B
We want to be able to have zero-touch DNSSEC setup. We're working on dnstap support; dnstap is a sort of passive capture of the DNS queries that are going through, for various kinds of analysis. Thanks to the CNCF again, we have a Google Summer of Code student working on that who is really doing a great job; it's just about wrapped up. One thing you don't get with DNS-based service discovery, and something you do get out of some other service registry and discovery services, is push. Say you're using a headless service in Kubernetes, and the client is aware of all the different IP addresses that a service resolves to. When that changes, a pod goes away or a new one comes in, whatever it may be, the client isn't informed; with DNS, the client has to go ask again.
B
So today, in the Kubernetes use case, Kubernetes itself is the registry and we just do the discovery side. But in other use cases, where you're running etcd, or maybe running etcd in combination with Kubernetes in order to handle some of the other records that you'd otherwise put in files because you want them to be dynamic, we'd like to be able to register services through an API or through DNS, and then, depending on what backend you're using, dynamically write them to that backend.
B
I mentioned this earlier, but: multi-cluster service discovery without the Federation control plane. Federation is really cool, but we've heard from people who don't necessarily want to run the Federation control plane; it's a step too far for them right now. But they have multiple clusters, and they'd like to be able to resolve services between multiple clusters automatically, so that's one thing we're looking at solutions to. We're also talking about building out the policy middleware, as well as integrating it with the Open Policy Agent. Today, the policy middleware is out-of-tree middleware.
B
It's
a
separate
repository
and
we'd
like
to
build
that
in
as
well
as
make
some
policy
simple
policies
directly
configurable
in
the
core
file.
That
is
the
configuration
file
accordion
as
itself,
while
still
enabling
complex
policies
that
live
essentially
in
a
separate
micro
service
that
evaluates
the
policy
in
whatever
the
community
needs.
I
mean
this
was
the
core
thing
about
core
DNS.
Is
that
it's
extensible?
It's
easy
to
add
new
features,
it's
easy
to
add
backends.
B
That's our intention, and we've been working with the community on that. This sort of major change will take a bit of time. We have to answer questions like the one Justin had about how we perform in clusters at different scales and different levels of load, so we're working on evaluating those things, and we've had several meetings with the folks currently maintaining kube-dns, and they're open to it.
B
kube-dns is based on SkyDNS, which, as I said, is the predecessor to CoreDNS; it shares a lot of the same code. The main DNS protocol code is the same; what's different is the way we interact with that code, the way we present it and configure it, which is simply a lot more extensible. So they're definitely open to it, but, as I said, we're working on getting all our ducks in a row to make that viable.
A
Thanks, John. So, if there are no other questions or comments, I think we'll call it a day. I want to thank everyone who showed up to listen in, and of course a big thank you to you, John, for taking the time to present this introduction to CoreDNS. If you do need to get in touch with John, there are many ways shown there, and if that doesn't work out, you can always contact the CNCF and we'll put you in touch somehow.