Description
Guest Speaker: Peter Brachwitz
The Elasticsearch Operator, or ECK, is built and maintained by the creators of Elasticsearch, but it does a lot more than manage Elasticsearch. The operator can help you automate the deployment, management, and upgrade of Elasticsearch, Kibana, Beats, and Enterprise Search. Since making the operator generally available in January 2020, we have consistently added value for our customers by releasing new features and partnerships, most recently the Red Hat certification.
In this briefing, learn how the ECK operator can help you automate Day 2 operations if you are running the Elastic Stack and solutions on Kubernetes.
A: Hello, everybody, and welcome to another OpenShift Commons briefing. As we like to do on Mondays, we bring in other projects and products and have people give us an intro, tutorial, and overview of what they're doing. We've all probably heard of the ELK stack, or Elastic, and all of those things, but I thought it was high time we got an update on what was going on in Elastic Cloud, especially since they've now built an operator. So we have Peter Brachwitz, who is here with us from Vienna. We're going to have him talk and give us an intro tutorial to Elastic Cloud on Kubernetes. We'll have some demo, and then there's time at the end for live Q&A. So wherever you are, on Facebook, Twitch, YouTube, or here in BlueJeans, please just ask your questions in the chat and we will relay them to Peter and have a conversation at the end. So welcome, everybody, and Peter, go for it. Take it away, introduce yourself, and let's hear all about Elastic Cloud.
B: Let me see if I can get to my slides. So, Elastic Cloud on Kubernetes is the official way to run the Elastic Stack on Kubernetes, and maybe it's worth taking a step back and reminding ourselves what we mean by the Elastic Stack. As you might know, at the heart of the Elastic Stack is Elasticsearch, a distributed data store and search and analytics engine.

B: And of course we have a sort of window into the stack through Kibana, which is an extensible visualization and UI application, and that enables the three solutions we currently have. Enterprise Search gives teams the ability to unify data from multiple data sources into a unified search experience, or allows them to add a search box to their application that's powered by Elasticsearch. Elastic Observability, which is what we're looking at more closely today, also during the demo later, gives you the ability to observe your infrastructure through logs, metrics, application performance monitoring data, and uptime data, and to alert on it. And finally Elastic Security, which is a SIEM solution and also has an endpoint component. All of these also have small binaries or processes behind them, which is something to keep in mind, because what we're interested in is actually orchestrating all of this later on Kubernetes.
B: So in order to get data into Elasticsearch we have these data shippers, Beats and Logstash. Beats comes in many flavors: Metricbeat for metric data, Filebeat for files, and so forth.

B: But now, coming back to the question: how do you run all this? Of course, the easiest way is to use our hosted SaaS offering called Elastic Cloud. It's the easiest way because all you have to do is say:
B: I want to run this product and that product, tell us how many resources you want to use, and we take care of the rest. Now it could be that, for one reason or another (regulatory, legal, or some other requirement), you're not able to run on a public cloud, or you have the requirement to run on your own infrastructure. This is where Elastic Cloud Enterprise comes in, as an on-premise version of Elastic Cloud.

B: And finally, and this is what we're focusing on today, Elastic Cloud on Kubernetes, which aims to give you a similar experience as Elastic Cloud Enterprise, but on Kubernetes. Today, Elastic Cloud on Kubernetes is just an operator, a Kubernetes operator. It's available on OperatorHub.io, and with the latest release it's also going to be a Red Hat certified OpenShift operator.
B: Now, why do we need an operator to run Elasticsearch, or the Elastic Stack, on Kubernetes? Well, you could say: can I not just create the necessary Kubernetes objects myself to run these applications? And the answer is yes, maybe, for simple use cases. But as soon as we think beyond a simple proof-of-concept deployment, it becomes clear that you need more power and more capabilities to run this. Let me illustrate this by going back to that slide.
B: Elasticsearch is also the most interesting application from an orchestrator point of view, which is what I'm mostly focusing on today. Kubernetes, of course, has support for stateful workloads in the form of stateful sets, but there are other things to take into consideration. We need to start thinking about storage to persist the state, so we need to think about volume management. We need to think about choosing a storage provisioner that offers good enough performance for an application like Elasticsearch. And then, and this is the other focus of this presentation, you need to think about what happens after you've initially deployed this cluster. What if your requirements change? You need to scale up or down, or you want to change the architecture of your cluster because you have a new use case you want to incorporate. Then, I think, it becomes clear that we need to take into consideration that not all Elasticsearch clusters are uniform. A simple cluster like the one I illustrate here might be something you start out with.
B: I don't know how much you know about Elasticsearch, but in Elasticsearch we have this concept of node roles that basically determine what an Elasticsearch node does in the cluster. There's the master role, responsible for cluster state, reaching consensus, cluster membership, and things like that; the data role, to store data on the node; ingest, to run ingest pipelines; and ml, to run machine learning jobs. In a simple topology like this one, it's all collocated on the same physical node; the nodes play all the roles together. But as your use of a cluster grows, what people start to do is factor these out into different tiers. Then you suddenly have a cluster where not all nodes are uniform anymore; they have different roles. One thing people tend to do is pull out the master nodes into a dedicated set of nodes, just to isolate them from any spikes in traffic and make sure that your cluster state stays available at all times.

B: There are more advanced use cases, of course, where you then pull out every kind of node into a separate tier. If you have a lot of time series data, it might even make sense to differentiate the data tier itself into what's called a hot-warm-cold architecture, where data starts out on a so-called hot node, which is very powerful hardware and contains data that you search a lot, for example log data of the last couple of days.
B: The main purpose of this is not to explain the different kinds of Elasticsearch topologies or cluster architectures to you. The one thing to take away from this is that not all Elasticsearch nodes are uniform. If we come back to what I've said about the need to manage Kubernetes resources, and we think about how stateful sets work in Kubernetes, we realize that we cannot create such an architecture with a single stateful set, because a stateful set has one template and can create only one uniform type of node.
B: So, just a quick reminder; I'm sure everyone knows how operators work, but just to remind ourselves. The main feature we are using here is the extensibility of the Kubernetes API server through custom resource definitions, which allow us to introduce our own types into the Kubernetes API. On the right-hand side you should see an example of how an Elasticsearch cluster spec looks, based on our CRD. You see what I said earlier about different tiers of nodes; we call them node sets here, and I have defined two sets of nodes: three master nodes and two data nodes. The second pillar of an operator is then an actual process running in your Kubernetes cluster, of course.
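A minimal sketch of such a manifest, with two node sets as described; the cluster name, version, and role settings here are illustrative, not taken verbatim from the slide:

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elasticsearch
spec:
  version: 7.10.0
  nodeSets:
  - name: master          # dedicated master tier
    count: 3
    config:
      node.master: true
      node.data: false
  - name: data            # dedicated data tier
    count: 2
    config:
      node.master: false
      node.data: true
```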
B: The operator has access to the Kubernetes API and watches as users make changes to these custom resources, as they create and spec out Elasticsearch clusters, for example. It responds to these events and starts a process that we call reconciliation, where it compares what the user has specified ("I want to have a cluster with these and these nodes") with what it sees in the Kubernetes cluster at this time, and tries to work towards the desired state the user has expressed. That's maybe a little bit abstract.

B: And actually that's not all; there are a lot of supporting actors here. There are config maps that we mount into the pod for configuration files, secrets for TLS data or user password information, and the so-called keystore, which is the way in Elasticsearch to specify sensitive configuration information that shouldn't be in a clear-text file.
B: Do we have the stateful set that we expect to have? If not, create it. Do we have all the supporting Kubernetes objects? If not, create them. And we go beyond that as well, by interacting directly with the Elasticsearch cluster that we create. On the one hand, of course, that's in order to apply configuration settings; but on the other hand, and here we come back to this idea of Day 2 operations where your cluster topology changes over time, we need to take special care when we enact these configuration changes. Consider, for example, a scale-down: you may have over-provisioned your cluster, or you're migrating from one topology to another, so you're scaling down one type of node and scaling up another type. We take extra care to migrate the data away from a node before we decommission it. That's not to say that Elasticsearch doesn't have its own recovery mechanisms to deal with failures.
B: Rather, this is a planned configuration change, so we want to have a smooth transition, and we don't want to rely on the failure or failover mechanisms in Elasticsearch. Similarly for rolling upgrades. A rolling upgrade is when we apply a new configuration to all the nodes in the cluster, but we do it one node at a time. Again, we interact with Elasticsearch here and basically tell it: hold on a second, we're going to take down this node, no need to panic, no need to recover the data.
B: Now, I showed this very simple manifest earlier that you can use to spec out your cluster. For advanced users we also give a lot of power by exposing the pod template, which the underlying stateful set uses, directly through our custom resource definition. This allows you, for example, to add additional metadata to Elasticsearch nodes, to use node affinity or pod anti-affinity, or to tweak the JVM. Elasticsearch is a JVM-based application and you can tweak it here with environment variables, like in this example.
B: Coming back to this idea that we're using stateful sets under the hood to orchestrate the different tiers of nodes we have in an Elasticsearch cluster: that not only means that every pod has a stable network identity, it also means that it has a stable association with a persistent volume. That makes a lot of sense, of course; we're talking about a stateful application, after all. But we need to give users the ability to influence how these volumes are created.

B: This is why we're exposing the volume claim template in the manifest as well, where you can tweak, for example, how big the volume should be, in this case 100 gigabytes, and, most importantly, which storage class should be used to create it. That has, of course, a lot of impact on the performance of that volume, and you need to make an educated decision, based on your use case, about what kind of storage class you use.
B: So, to summarize: Elastic Cloud on Kubernetes, as it is now, is a Kubernetes operator that allows you to deploy Elasticsearch, Kibana, APM Server, Beats, and Enterprise Search on Kubernetes. And when I say on Kubernetes, I mean vanilla Kubernetes, OpenShift, and the hosted Kubernetes offerings from the major cloud providers. I'm going to show how the interaction model works in the demo, and we already talked a little bit about the support for moving from one topology to another, rolling out changes, and version upgrades.

B: The operator itself is open code, so you can take a look on our GitHub. It's github.com/elastic/cloud-on-k8s, where you can see what we do under the hood, where you can open issues, where you can open pull requests if you want to, and get in touch with us as well.
B: We just released version 1.3 of Elastic Cloud on Kubernetes, which is going to be the first release that's a certified operator. It has a new feature to allow volume expansion, which I'm going to demo. For people who are invested in the Helm universe, we also offer the operator as a Helm chart now, and we have fixed some issues around IPv6.
A: So far you're hitting it out of the park, Peter, so keep going, and we'll get to the questions at the end.
B: Yeah, we need some elevator music there. I've taken the liberty to create a project ahead of time here. It's called elastic-monitoring, because we're focusing on the observability use case. We see what I mentioned earlier about custom resource definitions; each of them is an API extension, basically, and they're listed here as provided APIs. We can now go ahead and maybe start creating an Elasticsearch cluster.
B: We can do this with a different spec that I've prepared ahead of time, which has the latest version of Elasticsearch. We're starting out, as I said, with a very simple topology: just one group of nodes, three nodes, eight gigs of RAM, and I'm giving half of the RAM to the JVM's heap space, which is recommended if you have this kind of mixed-role setup. The other thing we also covered in the presentation earlier is that I'm specking out a volume claim template here for 50 gigs. The last setting at the bottom is just because I haven't tweaked the kernel on this dev environment, so I'm turning off a memory-mapping feature in Elasticsearch which, on production environments, gives you extra performance, but we don't need that for the demo. I'm hitting create, and what we can see now is that the operator kicks into life and starts deploying this. We can actually use the OpenShift console to see the stateful set. As I said, we have just one set of nodes, uniform at this point, and we see the pods coming up now.
B: The next thing we want to do is put a Kibana in front of that, so that we actually have something visual to see; I don't have to demo everything with APIs. For that, I'm going to switch to the command line, so that you can also see how the interaction works with command-line tooling. Let me fire up my editor here. Actually, let me deploy it first and then show you what I'm doing afterwards, to save some time.
B: Basically, we say: I want to automatically connect this to an Elasticsearch cluster called elasticsearch, which, if you go back to the UI for a second, is the Elasticsearch cluster we created just moments ago. What the operator then starts doing is setting up certificates, setting up a user with minimal privileges for this, and making sure that the association works between these two applications.
B: The only other thing I'm doing here is using the OpenShift service serving certificate feature to get a trusted TLS certificate for Kibana, and then I'm using that in the spec later on. Now, in order to access Kibana from outside the cluster, we also need an OpenShift route. I've prepared that as well; I'm using a Let's Encrypt certificate here and pointing it at the service that Kibana has. Let me actually show this briefly to you.
B: If we go back to our UI, we can check if Kibana is up. Kibana, as I mentioned, is basically a stateless application, so we're just using a deployment here to roll it out into the Kubernetes cluster. If you click through to that, we see the Kibana pod is up and running. We can also monitor the health of these applications that we've just deployed using the standard command line.
B: Actually, this doesn't sound right... but yeah, that's more like it, I'm seeing a login window now. So how do we log in? There's a built-in user called elastic that I'm going to use, and what we're doing is exposing the password for this user through a Kubernetes secret, named after the cluster, elasticsearch, and the elastic user. Then I can just use a little bit of command line to get at it. Actually, there's a newline in there, so we need some built-in shell functions to strip that off, and on macOS there's a function to copy stuff from the command line into your pasteboard; I'm using that as well. That should give me the password. For production environments, what you're probably going to do is not do that all the time, but instead configure some form of single sign-on. But for the demo, I think that's maybe too much for now.
B: So what we see now is Kibana, and I said we're going to look at observability a little bit today during this demo. We see there is nothing there; there's no data in there. So how do we get data in? Remember what I said earlier about the Elastic Stack: Beats is the way to go to ship data into Elasticsearch.
B: A good starting point is our documentation page for Elastic Cloud on Kubernetes. It's on the elastic.co website, under /guide, and you just click through to Elastic Cloud on Kubernetes. There we have a section for each individual application that we support, and as we're interested in Beats now, there is, in the Beats chapter, a subsection with configuration examples. That's a good starting point, because these Beats configurations are usually a lot of YAML.
B: This is the third custom resource I'm introducing now; we had Elasticsearch and Kibana, and this is Beats. Same principle: a name, a namespace, a version. The type is metricbeat, because we want to get the metrics from the Elasticsearch cluster, and then again this elasticsearchRef element that we've seen before, which automatically sets up the connection to Elasticsearch. I'll be using a Metricbeat configuration to extract Elasticsearch-specific metrics. That's a specific integration for Elasticsearch, and there are multiple integrations in Metricbeat for different kinds of common applications. I'm targeting an Elasticsearch cluster that has these labels on it. Now, in order to have these labels on (sorry, scrolling wildly through that document), I also applied a configuration change to Elasticsearch itself, to add this metadata to the pods. So this is the cluster from before.
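A sketch of such a Beat resource for stack monitoring; the label selector and module settings are illustrative of the Metricbeat `elasticsearch` module rather than the exact demo file, and a real setup would also need TLS and credentials configured:

```yaml
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: stack-monitoring
  namespace: elastic-monitoring
spec:
  type: metricbeat
  version: 7.10.0
  elasticsearchRef:
    name: elasticsearch
  config:
    metricbeat.autodiscover:
      providers:
      - type: kubernetes
        templates:
        - condition:
            equals:
              kubernetes.labels.scrape: es   # assumption: label added to the ES pods
          config:
          - module: elasticsearch            # Elasticsearch-specific integration
            metricsets: ["node", "node_stats"]
            hosts: "https://${data.host}:9200"
  deployment:
    replicas: 1
```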
B: It's already updating the Elasticsearch nodes, one node at a time: terminating it, waiting for 30 seconds to drain connections to the node, rolling out the change, booting it up and recreating the pod, then waiting for the cluster to be healthy, and then it continues with the next one. So in a very safe way it's rolling out this configuration change across the whole cluster. It takes a little while, but it makes sure that we don't lose availability during that process.
B: I'm not going to sit here and watch how this rolls out; we're almost done anyway. Instead, I want to roll out two more things in that cluster, because I don't want to only monitor my own Elasticsearch cluster; I also want to monitor Kubernetes itself. I'm going to roll out a daemon set for Metricbeat, which allows us to monitor Kubernetes, and Filebeat, to extract logs from all running containers.
B: Let me explain a little bit what they do while we wait for this to be rolled out. Let's start with Metricbeat. This is by now familiar: the custom resource for Beats, type metricbeat, the elasticsearchRef we've discussed. The new thing here is the kibanaRef, which allows the operator to automatically connect this Metricbeat instance with Kibana as well, so that Metricbeat can install some dashboards into Kibana. This YAML manifest is a slightly modified version of what we've done before. My colleague Michael is working on a blog post, actually, for the OpenShift blog, which will have the full manifests for you to download very soon. What this does is target specifically OpenShift's control plane. We have done a few tweaks here to account for the OpenShift namespace structure, for example, to extract metrics from the controller manager, from the scheduler, from CoreDNS, and so forth. We're deploying this as a daemon set.
B: Which you can see here. And similarly, Filebeat is basically the same idea; we've seen it now a couple of times. We use a different type here, filebeat instead of metricbeat, but it's otherwise very similar, using elasticsearchRef to automate the setup of the connection. Then we're using a feature in Beats called autodiscover, which is for Kubernetes environments where pods come and go, so that it automatically picks up the logs from these containers as they are created, and stops picking them up once they are deleted.
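A trimmed sketch of that Filebeat resource with autodiscover; the ECK documentation's full example also mounts the container log directories via hostPath volumes, omitted here:

```yaml
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: filebeat
  namespace: elastic-monitoring
spec:
  type: filebeat
  version: 7.10.0
  elasticsearchRef:
    name: elasticsearch
  config:
    filebeat.autodiscover:
      providers:
      - type: kubernetes
        node: ${NODE_NAME}
        hints.enabled: true          # pick up container logs as pods come and go
  daemonSet:
    podTemplate:
      spec:
        containers:
        - name: filebeat
          env:
          - name: NODE_NAME
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName
```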
B: So we see the two daemon sets, one for Filebeat to extract the logs, one for Metricbeat. They look okay, they look healthy. So let's maybe take a look back at Kibana now and see if data is starting to flow. This can take a minute or two, but we already see log data coming in and metrics data coming in. We can now, for example, zoom in on the metrics and maybe slice and dice this a little bit differently by looking at Kubernetes pods. That looks better. And maybe focus on our namespace here, and refresh this. We see the deployed pods, and we can zoom in on the logs for this Elasticsearch cluster; we can jump through and see the logs streaming, and actually I could stream this live if I wanted to and see new logs coming in.
B: Similarly, if I go back one second, we could also click through and get an overview of metrics, but this is sort of very generic. If you remember what we did in the beginning, I said we also installed an instance of Metricbeat to monitor the Elasticsearch cluster directly, and this is where we get a richer set of metrics for the Elasticsearch cluster that we've deployed, the individual nodes, and so on. And finally, the last bit: we deployed Metricbeat to monitor Kubernetes itself, and it rolled out a few dashboards as well that we should be able to see now. Indeed, we see six Kubernetes nodes, which is accurate, with data streaming in slowly since we just deployed that, and stats about CPU usage, memory usage, and network. We can also look at a specific dashboard for the Kubernetes controller manager.
B: You can get metrics for the work queue and CPU again. Now, one last thing. Just to summarize what we've done: we've deployed Metricbeat to monitor Kubernetes and the hosts in this cluster, including its control plane; we deployed Filebeat to harvest all logs from all running containers; and I deployed stack monitoring to get a richer view of a specific application, in this case Elasticsearch itself.
B: And we've seen how a rolling upgrade happened when I added additional metadata for the scraping of the metrics. Now, one last thing I want to show you is a new feature we added in version 1.3, which is inline, or dynamic, volume expansion, if you will. So imagine we've deployed this cluster, and we're very happy with this initial setup, but we realize we've underprovisioned these nodes slightly with only 50 gigs of storage space, and I want to fix that. How would you go about that?
B
So
you
cannot
change
the
capacity
once
you've
provisioned
it,
but
we've
added
a
feature
to
work
around
this
until
there
is
a
kubernetes
enhancement
request
to
fix
that
in
the
stateful
set
controller,
but
until
that
lens
we've
built
in
a
workaround
that
allows
you
to
directly
go
into
this
yaml
spec,
as
it's
deployed
just
doing
this
in
the
openshift
ui
now
and
just
change
this
to,
I
don't
know:
100
gigs
instead
of
50
save
the
resource
and
then
let's
maybe
watch
kubernetes
events.
B: What this does is basically work around this limitation in stateful sets: it's going to edit the persistent volume claims directly and then recreate the stateful set on top of that, to re-adopt the pods that have now been changed. You see the events flowing in that are already saying that the volume expansion succeeded for this persistent volume claim. So if we go back to our stack monitoring view, we should see that Elasticsearch picks up this volume expansion without the need of a restart or of re-creating the stateful set manually. I think that's all I wanted to show; it was a lot. I think it's maybe time for questions now, if you have any.
A: So there is one that someone's asked, and it's about how to install Elasticsearch plug-ins when using the operator.
B: One way is to use an init container, but that, of course, has the disadvantage that you're susceptible to any kind of network glitch, because it has to download the plugin every time it boots or restarts the pod. So the alternative way that we recommend, and this is actually also documented on our documentation page, is to create a custom Docker container. You're free to create your own Docker containers, as long as they are based on the official images, and basically install the plugin at container creation time. That's the second way of doing it. This here is showing how to use an init container: basically, you run a little shell script that runs the installer. And the alternative is the custom image; we have a simple example here as well. It's a very simple two-line Docker image, based on the official images, that installs it, which is probably recommended for more production-ready scenarios.
A
But
that
makes
it
easy,
hopefully
for
everybody
here
and
we'll
we'll
add
that
link
to
the
page
into
the
the
video
so
that
you
can
find
this
later.
I
I
had
a
question
because
I
was
reading
through
the
what
was
in
the
latest
release
of
of
elastic
and
specifically
in
kibana
is,
is
kibana
lens
part
of
the
operator,
or
is
that
a
plug-in
or
or
can
you
tell
us
a
little
bit
about
that
because
that
looked
like
really
cool
visualizations
that
got
added
into
the
latest
release.
B: So if I create a new visualization here and click "go to Lens", it's already included in the package, and I can, I don't know, start playing around with this, for example dragging and dropping some fields on the left-hand side to create a graph here. In this case it's now showing me the count of records for each different host name. It's not the most imaginative use of it, but you can come up with your own uses, of course.
A
Yeah,
so
it's
something
it
comes
out
of
the
box.
So
when
you
get
the
idea,
it's
not
a
separate
plug-in,
as
I
think
what
I
was.
A
A
But
I
think,
are
there
other
things
that
came
in
this
latest
release
that
you'd
you
you'd
have
like
to
point
out
because
there's
a
I
saw
that
I
read
the
whole
thing
and
I'm
like
there's
a
ton
in
this
latest
release.
B
So
one
I
mean
in
the
latest
seven
or
ten
release.
I
think
one
of
the
more
interesting
features
is
sort
of
a
formalization
of
the
concept
of
data
trees.
I
think
I
spoke
about
this
very
briefly
in
the
presentation
so
that
you
have
these
hot,
warm
cold
use
cases
where
time
series
data
is
moved
to
cheaper
hardware
over
time
and
the
support
footage
has
been
formalized
in
a
ten
release
and
even
extend
it
in
in
such
a
way
that
you
can
back.
B
I,
maybe
I
need
to
sort
of
step
back
a
little
bit
as
you
know
that
data
in
elasticsearch
is
organized
in
indices
and
they
sort
of
can
be
backed
up
in
snapshots
so
on
you
can
now,
with
the
latest
release
you
can
search
through
these
snapshots
that
are
stored
in
very
cheap
storage,
like
think
s3
from
amazon,
or
something
like
that.
B
So
that
gives
you
another
very
cost-effective
way
of
organizing
of
building
these
multi-tier
architectures
with
elasticsearch
clusters,
where
you
move
your
data
to
cheaper
and
cheaper
data
tiers,
as
it
becomes
less
relevant
to
you
usually
for
log
data
you're
only
interested
in
the
last
week's
logs
right.
So
it's
done
after
a
week,
then
maybe
you
keep
it
around
for
the
occasional
root
cause
analysis,
and
then
you
know
after
another
week
it's
typically
maybe
you're
only
keeping
it
around
for
regulatory
or
audit
purposes,
and
you
rarely
ever
look
at
it.
A
That
actually
is
like
one
of
the
the
wonderful
use
cases
for
sort
of
hybrid
cloud
and
hybrid
storage.
I
mean
pick
your
lowest
common
denominator
for
cost
and
and
store
your
stuff
where
it
should
be.
I
think
that's
one
of
the
sweet
spots
for
for
cloud
computing
and
taking
advantage
of
that's
a
great
thing.
A
So
I
was
wondering
too,
since
this
is
the
community
side
of
openshift.
Tell
us
a
little
bit
about
this
is
it's.
This
operator
is
available
on
operatorhub.io,
which
is
you
know,
the
the
community
side
as
well
as
well
as
you've
got
a
certified
one.
What
are
the
differences
between
the
community
release
of
elastic
and
this
operator
and
the
the
the
product
side?
Is
there
anything
missing
between
the
two
of
them
or
how
are
they
differentiated?
B
So
the
community
so
there's
how
to
how
to
explain
that,
so
the
community
operator
is
basically
identical
to
the
certified
operator
and
both
allow
you
to
run
in
a
free
to
use
basic
license,
which
is
what
we
we've
seen
today
and
both
the
community
as
well
as
the
certified
operator
or
the
other
download
options
we
have.
If
you
download
it
from
our
website
or
you
install
it
via
helm,
it
doesn't
matter,
go
all
of
these
versions
of
the
operator.
B
Allow
you
to
install
a
license
into
it,
and
then
you
get
all
the
commercial
features
in
in
the
elastic
stack
machine
learning.
Certain
aspect
of
the
observability
features
the
security
features
and
so
forth.
So
they
no
matter
where
you
start.
There's
always
a
way
to
to
sort
of
use.
These
commercial
features,
if
you
need
them
and
to
upgrade
and
install
a
license
into
that,
so
there's
no,
no,
no
strong
difference
and
you're
not
locked
into
one
or
the
other.
A
That
totally
makes
sense-
and
that's
actually
nice
to
have
that
easy
on-ramp
to
getting
to
the
pro
the
pro
version
or
the
the
product
version
from
the
community
is
quite
nice,
because
I
know
where
I've
heard
the
most
about
elastic
recently
being
inside
of
red
hat
is
around
open
data
hub
and
that
it's
elastic
is
one
of
the
components
and
the
reference
architecture
that
open
data
hub
was,
and
I
think
they
were
using
the
community
side
initially
and
well,
I'm
looking
to
see
whether
we
can
whether
they're
using
the
operator
I
haven't,
talked
to
the
operator
hub
team
in
the
recent
weeks
or
months.
A
So
it
might
be
time
to
get
them
back
on
too
to
see
you
know
and
get
their
feedback.
How
is
the
so?
How
long
has
this
operator
been
available?
Because
I
want
to
ask
the
question:
what's
the
feedback
you've
gotten
from
customers
on
using
the
operator?
But
if
this
is
just
been
a
few
weeks
here,
it
might
not
be.
B
So now you're testing whether my memory serves me. I think we had the GA version of the operator, the 1.0 release, in January 2020.
B
But we had a beta version before that; it goes back almost a year or so, I don't remember when we initially released it. So we've had quite some time, quite some exposure to users, and I think the feedback has been overwhelmingly positive.
A
You had said, before we got started, that you were part of the team that built the operator. What was that process like for Elastic? Not just the certification work, but the actual building: what advice would you give to someone starting down their operator journey now, besides "read the documentation"?
B
For many of us it was the first time writing any operator for Kubernetes, so it was a steep learning curve. Of course you get familiar with the frameworks and make a choice; there are several operator frameworks around. Eventually we settled on controller-runtime, and initially Kubebuilder, even though we have by now replaced parts of what Kubebuilder automates away with things we do manually.
B
My colleague Sebastian gave a quite good talk at last year's KubeCon about the pitfalls of writing your own operator, which summarizes our experience and our journey to understanding how this works. To give a few examples: one thing we realized during that process is that the way you interact with the Kubernetes API is usually through cached clients, so there is no direct request to the API server.
B
There is no direct request to the Kubernetes API server; this is, of course, a measure taken to reduce load on the actual API server. Instead, you have a local cache in your application, and that cache is synchronized from the API server. But that means whatever interaction you have with the Kubernetes API is always a little bit behind the actual state of Kubernetes.
B
By now we are aware of these pitfalls, but initially that was something to work around. And of course there are also things like designing your custom resource definition. We went through multiple iterations of trying to abstract away many things and giving the user a very high-level toggle.
B
A toggle, so to speak, to turn things on and off. Then, as I showed you during the presentation, we gradually moved back from that to a place where we expose a lot of the underlying abstractions directly in the CRD, to give the user more power and flexibility, and also to reduce the cognitive overhead of the concepts we introduce in the custom resource design.
B
Every user has to learn and understand those concepts, so we eventually opted to remove a lot of that abstraction and expose the Elasticsearch configuration as directly as possible in the CRD, because many users of the operator are familiar with Elasticsearch and know what to put into the Elasticsearch configuration file.
B
So we wanted to give them a similar experience here in Kubernetes, where they could just apply what they already know and plug it into the custom resource manifest, as they would on an on-premise installation.
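As a sketch of what that pass-through looks like (field names follow the ECK custom resource; the version and settings here are illustrative): each nodeSet's `config` section takes the same keys you would otherwise put into `elasticsearch.yml`:

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 7.9.0          # illustrative stack version
  nodeSets:
  - name: default
    count: 3
    config:
      # these keys are passed through to elasticsearch.yml as-is
      node.master: true
      node.data: true
```

This is the design trade-off described above: rather than inventing operator-specific toggles, the CRD exposes the familiar Elasticsearch configuration directly.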
A
Cool, and someone has just added in the chat the "Writing a Kubernetes operator: the hard parts" talk that Sebastian gave. We'll put that in the notes too, linked alongside the YouTube video of this session that I'll post later. So we're almost at the end of the hour here.
A
I'm wondering, as sort of a closing thing, because I'm totally focused on helping people build their operators, whether it's for Elastic or Tremolo Security's OpenUnison, and on really hearing people's feedback on the operator, the Operator Framework, and so on. So thank you for that. I'm wondering if there's anything coming on the Elastic roadmap, new features or functionality we should keep an eye out for in the upcoming releases. What are you working on now that's going to make it even better?
B
I can't speak for the overall Elastic roadmap, but I know what we're working on for the operator, so that's maybe something I can talk about a little bit. Of course, we're trying to bring more stack applications onto the operator. There's a new thing coming out called Elastic Agent, which is a way of bundling these different Beats together into one binary.
B
So instead of what you saw during the demo, where I had to deploy Metricbeat separately from Filebeat, and there are other Beats as well, in the future Elastic Agent will let you deploy just one binary, and the different harvesting capabilities become just configuration of the agent. That's something we're looking into supporting.
B
Eventually, we're also looking into some form of autoscaling capabilities for the operator. And I referenced Elastic Cloud Enterprise earlier as the on-premise product, which has a UI and APIs; in the further future we want to close the gap between these two products and give you a similar experience here as well, maybe with a UI and some APIs.
A
So we did get one more question from the live stream: how would I handle selection of specific nodes, or in more detail, deploying the Elasticsearch operator on dedicated infra nodes? Is it preferred to configure the tolerations for taints on the deployment config of ELK, or to do it on the operator config itself?
B
Yeah, I think I'm reading this as a question about where things get scheduled. The operator itself, if you install it via OperatorHub, is all managed by the Operator Lifecycle Manager, so the namespace it lands in is predetermined. But I think that's not the question: I think the person is asking how to influence where the Elasticsearch nodes land, and there you can use node selectors.
B
You can use any feature that Kubernetes offers to target that, and you would do it, as I think I showed briefly (I don't know if the slide is still up, I'm just going back here), through the pod template: for example through node affinity, or a node selector. So everything that Kubernetes offers you to target a specific node or exclude nodes.
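Concretely, the standard Kubernetes scheduling controls go into the nodeSet's `podTemplate`. A hedged sketch (the infra-node label and taint names here are placeholders; substitute whatever your cluster actually uses):

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 7.9.0
  nodeSets:
  - name: default
    count: 3
    podTemplate:
      spec:
        # ordinary Kubernetes scheduling fields, passed through to the pods
        nodeSelector:
          node-role.kubernetes.io/infra: ""
        tolerations:
        - key: node-role.kubernetes.io/infra
          operator: Exists
          effect: NoSchedule
```

So the targeting lives on the Elasticsearch custom resource rather than on the operator's own configuration.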
A
Perfect, I hope that answered their question; I think it might have, so that's good. If you have a final slide, could you pop over to it so people know where to find Elastic? That's not hard: elastic.co. Where to find you guys is probably a good one to end on.
B
I only have a slide with our GitHub repository on it, but if you go there, the website is linked as well, so I hope that works too.
A
Otherwise, I'm going to let you go back to your evening, because you're in Vienna, and I'll grab another coffee and make sure I get this wonderful walkthrough of the Elastic world view of things up on YouTube for everybody later today. We'll definitely have you back with the next release.
A
And now I'm really going to have to go play with Kibana Lens, because you've made it dead easy and I have no excuse not to make beautiful visuals, even though you only showed us the bar chart. I saw some demos earlier with just beautiful mapping, really interesting stuff, so there's a whole world of visualization that you've opened up and made much easier for folks.
A
So I totally appreciate that, and we'll have to work with the Open Data Hub team too on getting some nice visualizations added in, of which I've seen a few. I think this is really going to expand on all of that, and you've made it dead easy for people to deploy it now on OpenShift and elsewhere on Kubernetes. Kudos to you and the whole team over at Elastic. Thank you so much for taking the time today.