From YouTube: OpenShift Commons Gathering Santa Clara 2019
Splunk on Kubernetes: Connectors, Operators and Beyond
Matthew Modestino (Splunk)
Just because I say I think something might happen doesn't mean it's going to happen, and hopefully it doesn't get me arrested or fined by the SEC. Okay, I don't have money like Elon to be making jokes like that.
Okay, so if you don't know who Splunk is, again, I'm not going to bore you with the details, but we're your friendly neighbor. We've got a nice big building down here; Santana Row is actually the first time I've been there, so it's nice to come down and join you guys. I'm from north of the wall in Canada, so it's actually kind of nice to escape that right now. I come down here quite often, but usually to HQ in San Francisco, which is also where we were kind of born.
So, a couple of stats up there. I'm sure you know us; we do a lot of things. People like to say we do logging, but we're a data platform. We do many, many things, and actually, as a user myself in the telco space, one of the things that endeared me to Splunk the most was knowing there was always going to be change, there was always going to be another thing to integrate with. This talk is kind of covering that, because Kubernetes is just another one of those things, and there will always be another one, right? That's some of the power of the Splunk platform.
Really, when you boil it down, all we're trying to do is make data accessible. A lot of tools out there do this, and do a lot of things, but at the end of the day what Splunk really tries to excel at is this: everyone in the org can make better decisions if you give them data. So what about Kubernetes? With Kubernetes, we've had a couple of different themes that we get pulled into, whether it's with customers, with partners, or just the greater marketplace.
The major themes that we're looking at or playing in right now fall into three major categories. The first block, Splunk Connect for Kubernetes, or "getting data in" as we call it, is the only actual GA product I'm covering today. This is something that's available right now: an open source project on GitHub that allows you to deploy to your Kubernetes clusters and get data out, and again, the data means logging, metrics, and metadata.
The second theme: you're using a lot of rack space in your data center, or you want to move things around and be more elastic. Running Splunk itself inside Kubernetes is another topic that we talk to customers about a lot. That's going to be a fun one, and I'll save some time for it, because we're a monolith. We're the thing you're not supposed to put in Docker.
We're the thing that's going to break all the rules of the church of Docker, so I'm here to repent for a couple of things, and we'll show you a little bit more about that. And then the Splunk operator: it doesn't exist, and it may never exist, but I want to tell you about some of the work we've done to start investigating and poking around in there, because again, we're a software company as well, right?
We look around at technology just like our customers do, and it's actually one of the fun things about working at Splunk: you get to know so much of other people's technology as you go. Let's jump into the first piece, because again, this is something you can try right away. This is Splunk Connect for Kubernetes; version 1.1 was just released. Actually, if I can escape here, let's see, as I mess around with Diane's laptop and Diane's email, yeah.
So if you just Google "Splunk Connect for Kubernetes" you'll find it on GitHub. Again, it is a completely Splunk-built, Splunk-supported project; just because it's on GitHub doesn't mean it's something we just threw out there. It's something that we're hoping allows us to get much better feedback on products and loop it back in, and that's what this 1.1 release is.
It's the same Fluentd you would get with an EFK stack; we've taken it in, kind of stripped it down, and built our own plugins to put inside of it. One of the newer ones is actually a DaemonSet as well, for metric collection. We rebuilt that from scratch, obviously, with Heapster being deprecated recently. So this is a Splunk-built integration that'll allow you to collect metrics from your cluster.
I'll show you a little bit more about that later, but again: supported, fully open source, and we're really just trying to support the CNCF projects. Fluentd does it well, so why not, right? So that's collecting data from the cluster. It is interesting, and there were learnings we had when customers would bring us in, because this started around 2017.
A customer brings you in, you give them their swag, they're happy to see you because they want another t-shirt, and then they say, "How are you going to monitor Kubernetes?" and you're kind of like, "What do you mean?" Really, that's the life of a Splunker, and it's always what I liked about it: you get thrown into new technologies and have to figure out how to get the data out. This was the answer to that. Now, running it in OpenShift and Kubernetes has been interesting.
We do all the development against open source Kubernetes, but it's been nice to know that we're only a couple of steps away from getting it to run fine on OpenShift, and I think Matt and Paul hit it best: when you go to run your product on OpenShift, you really do get to see, how securely did I actually build this?
What kind of permissions do I need to look at to be able to run this in a more locked-down environment? So, getting it to run on OpenShift: we do have customers, and OpenShift customers were really the first ones banging the drum to get in there. I don't know, maybe it's just that their enterprise logging strategy is Splunk and they have this new enterprise strategy with OpenShift.
They want to marry the two, right? So early last year, with the first release, we were working with large banks, large Splunk customers, and OpenShift was a common platform, and still is in the bulk of the calls I'm doing today with Kubernetes customers. As for getting it to run in Kubernetes: it needs root. It just does.
Again, good old Docker; I think we'll get into that in a minute. But at the end of the day, everything that we built inside the Splunk Connect for Kubernetes package works perfectly fine inside OpenShift, with a couple of tweaks. So generally, what I'll do with a customer is we'll sit down and we'll either helm template, that is, just burn the manifests down and then use those to deploy using oc apply, or we'll, you know, put Helm in the cluster.
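The helm-template-then-apply flow he describes can be sketched roughly as follows. These are illustrative commands, not from the talk: the chart archive name and values file are assumptions, so check the Splunk Connect for Kubernetes README for the actual artifact names for your version.

```shell
# Render the chart to plain manifests locally (no Tiller in the cluster),
# then hand the rendered YAML to the OpenShift CLI.
helm template splunk-connect-for-kubernetes-1.1.0.tgz \
  -f my-values.yaml > splunk-connect-manifests.yaml
oc apply -f splunk-connect-manifests.yaml
```

The appeal of this route is that the cluster never needs Helm's server-side component; the chart is just a manifest generator.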
With OpenShift customers it's been about 50/50. We assumed, again, that people wouldn't be running Helm, but I'm kind of surprised: generally I go in and they're like, "No, no, we've got Helm," and usually I'm like, "Okay, good, here you go," and that makes it a little bit easier. But what I'm highlighting here is for the actual logging pods and the metrics pods in Splunk Connect for Kubernetes.
There are just two pieces of information that give it the keys to the castle, and so the only real changes to the actual manifests, beyond adding some Red Hat-specific API endpoints to some of the data collection, are these kinds of updates to the manifest here. So again, that'll get you going on OpenShift for collecting data.
Again, some of the other useful things we found are required when you're deploying something like this: you can't just put "privileged" in the actual manifest; you then have to go give the service account the privilege to have that privilege. It's great security practice. So again, I just try to keep a collection of commands that are helpful for OpenShift customers when they go to deploy Splunk Connect for Kubernetes.
So again, I think what that does is highlight some of the deep-down OpenShift security mechanisms that are there, and it's good to get to know them, especially as you start to get deeper with OpenShift: to understand how you control that access, who you give it to, how you give it to them, and so on.
The other one is that OpenShift ships with default node selectors on the namespaces. Again, as part of learning OpenShift as someone who had played with this in open source Kubernetes, this was another thing you could bang your head on a little bit. I've had a couple of customers call me up, like, "Why are these pods not scheduling everywhere?"
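One way around the default node selector, so a DaemonSet can actually land on every node, is a per-namespace annotation. A sketch, where the namespace name `splunk-connect` is an assumption:

```yaml
# An empty openshift.io/node-selector annotation overrides the
# cluster-wide default node selector for this namespace, letting
# pods here schedule on any node.
apiVersion: v1
kind: Namespace
metadata:
  name: splunk-connect
  annotations:
    openshift.io/node-selector: ""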
"I want them to go everywhere." And again, we learn OpenShift together, and that's been the trend, right? The customers and all of us are learning as we go, which is fun, and these are just some of the items we've taken away. And then again, Helm: everyone has their feelings on how to run it and all that. I think locally you don't even have to run Tiller in the cluster; we're just using it to avoid the fat-fingering of hand-coding
the YAML, and that's a great little trick. I don't know if a lot of people use it, but I tend to use it now that we use the Helm chart as well. So once you run it, it looks something like this. This is a three-node cluster, just a simple development cluster, and you'll see the logging pods will run on each of the nodes, as well as the metrics pods, and then there's a metrics aggregator deployment and an objects deployment.
The aggregator is actually just talking to the API and getting summary metrics to feed into Splunk, and the Kubernetes objects pod is scraping the API for metadata. This comes into how we've been able to improve the experience for some customers over just generic Fluentd plugins. You might be thinking, well, why would I use this one over the other one? It's that ability to separate the metadata collection and not have to do it for every single log line.
It's a big dynamic shift versus what it used to be, where someone would come kiss the Splunk admin's ring, fill in a form saying "my logs live in this directory," and then it would take two weeks before, finally, some logs would show up. Now we're in there with the devs in an hour: they drop it in the cluster and they have everything from their cluster flowing in.
So they hyperventilate; that little guy there, the Splunk admin, is hyperventilating a little bit, and we just teach them how to filter and how to monitor their ingestion, right? So, some of the observations that we get: again, we have some optimized log formatting versus this idea that crappy logs inside of a JSON payload are better logs. They're still just crappy logs.
And when indexing, you'll see that your log message is a key inside of a JSON payload, and trying to work with that is a little bit gnarly, right? So for when we index it, we've actually built a couple of plugins; one of the cooler ones built for the connector is a jq plugin. You can actually use that crazy power of jq to unwrap that Docker JSON log, and not only does that make the actual TCO and footprint of that data much smaller.
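Conceptually, the unwrap takes each Docker json-file log line, which buries the real message under a `log` key next to `stream` and `time`, and keeps just the message while the wrapper fields become metadata. A minimal sketch of the idea in plain Python, standing in for the actual Fluentd jq plugin:

```python
import json

def unwrap_docker_log(raw_line):
    """Split one Docker json-file log line into the message itself
    and the wrapper fields (stream, time) kept as metadata."""
    record = json.loads(raw_line)
    message = record.pop("log", "").rstrip("\n")
    return message, record

# Example: a typical line from the container's *-json.log file.
line = '{"log":"payment service started\\n","stream":"stdout","time":"2019-01-01T00:00:00Z"}'
msg, meta = unwrap_docker_log(line)
print(msg)             # payment service started
print(meta["stream"])  # stdout
```

The point is that only `msg` needs to be indexed as the event body; the metadata can ride along separately instead of bloating every log line.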
You know, a 100-line log is still just a 100-line log, but we're able to keep all that great metadata and use Splunk tricks, the actual Splunk platform, to index those important fields as what we call index-time fields, which don't count against license and don't bloat your logs.
That means making sure that logs for this team go to that place, logs for this app team go there, and so on. And then container certification with Red Hat is something that we're looking at; it's taken a while, but we're trying our best to get in there so that we can get into the container marketplace. So again, covering the last two: these are the last two boxes that I was telling you about, and these are more the adventures that we're on right now. So, Splunk running in Docker.
A
Maybe
if
there's
any
purists
in
the
room,
so
over
in
October
2018,
we
released
Splunk
official
docker
images.
Those
images
again
are
a
lift
and
shift.
They
are
not
a
refactor
Splunk
in
any
way,
they're
things
that
you
know
that's
something
that
is
on.
You
know
on
the
horizon
kind
of
thing
for
Splunk,
not
something
that's
available.
Today,
however,
there's
still
benefits
of
running
inside
of
a
containerized
environment.
For
our
users,
you
know
easier
test,
faster,
deploy,
more
reliable
deploys,
and
so
this
is
available.
The Splunk image is available there, and if you've ever wanted to try it, it's a great way to get started. It actually uses Ansible as the entry point; that's usually where I duck, though we're in a Red Hat room, so I shouldn't be that scared. That can be a little bit of a controversial thing, but for me as a practitioner, I couldn't care less: it works really well, because when you bring up a monolith inside of a container, the orchestration at the app layer, like Matt and Paul said, there's a lot of work to do there.
So what I've done is, there are a couple of blogs out there, if you just Google "Splunk Enterprise on Kubernetes," where we took an early look, plus some test scenarios that are available in our docker-splunk GitHub repo. Those will let you try running Splunk in Kubernetes and see some of the decisions we made, and running it in OpenShift, again, was much of the same.
The last thing is operators, and again, this is just something that we're experimenting with, so I want to be very clear about that. This will just be a nice little note of us learning from all of you, or anybody that's going to go try operators. We had a little, I guess you could call it, POC internally, and where we started was real simple, you know: learning the SDK, taking a poke around, looking at what's involved.
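"Taking a poke around" with the SDK starts with scaffolding, which looks roughly like the following. Everything here is hypothetical: the domain, group, and `SplunkEnterprise` kind are placeholders, and the exact subcommands and flags differ between operator-sdk releases, so treat this as a sketch rather than a recipe.

```shell
# Scaffold an operator project, then a custom resource type to drive it.
# (Recent operator-sdk releases use init/create api; older ones used
# "operator-sdk new" instead.)
operator-sdk init --domain example.com --repo example.com/splunk-operator
operator-sdk create api --group enterprise --version v1alpha1 --kind SplunkEnterprise
```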
What's this going to mean for us? What will it give us? The what and the why, again: the reasons are all the same across all these tools. It's to make it simpler for our users to get to what they're supposed to be doing, which is working with the data, right? So much of these toolsets, and so much of the actual logging and monitoring, is spent in the weeds with care and feeding, and at the end of the day you've just got to get the insight out. Nobody's going to
ask you how you got it if you have the right answer on that 3:00 a.m. bridge, right? So those are the things. Again, encoding best practices: the horrors I have seen, the horrors I have seen, when you leave customers with a big shotgun pointing at their feet. And Splunk can be like that sometimes; I mean, we know it the most, right? So this presents an interesting opportunity for us to really encode that knowledge in there, so that you don't need three people with a funny hat
A
You
know
to
run
your
foot
to
run
your
deployment,
and
so
some
of
our
some
of
our
team
that
were
here
and
some
of
the
people
that
work
at
Splunk
had
a
hack
at
it
and
we
were
able
to
get
a
pretty
nice
little
start
on
it,
where
we
were
able
to
create
an
operator
pod
that
would
instantiate
Splunk
instances
and
some
auxilary
software
that
we
use.
You
could
imagine
you
know
where
this
can
be
really
helpful
right.
You
know,
syslog
gets
used
a
lot
and
we
rely
on
a
lot
of
open
source
projects.
Finally, really, as we look across the Kubernetes landscape, whether it be collecting the data, working with Kubernetes tools to make our software easier or better, or just playing in the ecosystem at all, the major items we're looking at are repeatable and declarative deployments.
That would make a lot of Splunk users' lives so much better than the kind of bespoke voodoo in the data center by some specialist that you have to call every time. And then obviously there's the immense power of the API in Kubernetes and OpenShift, allowing the management of our software to be much, much easier. And then again, all of this is about value.
It's very, you know, it does a lot of things, and so again, I'm sure they'd even tell you they do it because of the answers that come out of the pipe on the other end. So as much as we can make that administrative life much, much easier, we think it'll open us up to a new kind of customer: developers, people that are not going to spend their time fiddling away with deploying tools and so on.
I will make sure Diane gets a copy, and there are a lot of great links in there, a lot of cool stuff to try. What we're here to do, again, is to learn and be part of the community. We're not, you know, telling you that we have the way forward for you, or teaching you how to do it. We actually want to work with you and understand how we should do it, how we should go about this. Do you like these ways?
Would you use them? We're happy to hear about it, or you can join us in some of the GitHub repos. The major one you want to check out is our docker-splunk repo. Again, I apologize for the computer; I had a couple of things I wanted to show you a little bit beyond that, but if you check out the repos, you can find me there, and yeah, check it out. Hopefully it'll make it a little easier for you to try the future. So thanks for your time, everyone.