From YouTube: Building Cloud-Native Logging Pipelines on Top of Apache Kafka - Jakub Scholz, Red Hat
Description
Collecting logs from system components or applications and delivering them for processing, analytics or just for the ops team to browse them is an important part of every production environment.
Website: https://www.redhat.com/de
Organized by @Microsoft @kubermatic7173 @SysEleven
Thanks to our sponsors @CapgeminiGlobal, @gardenio, @sysdig, @SUSE, @anynines, @redhat, nginx, serve-u
Now, for a start, as I will be talking a lot about monitoring and logging, maybe we can first define what I mean by that. Under monitoring, you can understand collecting, analyzing, and using information about your system or your systems. Typically these days there are three major areas: you have the actual logs produced by your application, you have some metrics, and then, more and more popular these days, there is also tracing, so you possibly might have some traces as well. In this talk and in the demos, I will really focus on logging, but basically all the principles which I will be showing can be applied to the other areas as well. And monitoring is important for us for multiple reasons.
It gives us information about the current state of the system: what's going on right now. But it's obviously also super important when we had some issues in the past and we want to analyze them. We want to find the root cause, and we want to fix it, so we use this data to look back into the past and see what happened there. And I guess, more and more with the rise of machine learning and artificial intelligence, this data will also be used to try to look into the future: for example, to try to predict what the next failure might be, what might happen in the future.
So monitoring and logging is super important for all production systems, and typically you will have some kind of monitoring pipeline which will collect the monitoring or logging data, somehow parse and normalize it (to try to understand it and make sure it all has the same format and structure), and then, because not all of this data might be interesting, maybe do some filtering. Then you usually route it somewhere, and then you do all kinds of different things with this data.
You might just view it and look at it, you might run some analytics on it, you might just store it for the future, or you might do all kinds of different processing. Whatever tooling you use for monitoring, you will have some pipeline like this, represented by different tools.
When we talk about logging, one of the most common software stacks is the so-called EFK stack, which consists of Elasticsearch, Fluentd, and Kibana. Fluentd is the part which collects the logs of your containers. It then sends these logs to Elasticsearch, which works as the storage and the search engine, and Kibana is the part used for visualization and things like that. Now, I'm using this stack in the demos because, as far as I know, it's still one of the most popular ones. I know that not everyone was happy with the license changes which Elasticsearch made to their components, and there are now other tools as well which can replace it.
Again, everything I will be showing can be applied to the other tools as well; there's really not enough time to show it with everything, so that's why I use these tools in the demo, but you can adapt it to whatever you are using. This diagram really just shows how it works: Fluentd takes the logs and, usually over HTTP, pushes them to Elasticsearch, and then, again usually over HTTP, Kibana gets the data and visualizes it, for example in your browser.
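To make that flow concrete, here is a minimal sketch of what such a Fluentd configuration can look like. This is not a file from the talk; it just uses the standard tail input and Elasticsearch output plugins, and the paths and hostnames are placeholders.

    # Collect: tail the container log files on the node
    <source>
      @type tail
      path /var/log/containers/*.log
      pos_file /var/log/fluentd-containers.log.pos
      tag kubernetes.*
      <parse>
        @type json
      </parse>
    </source>

    # Route: push everything to Elasticsearch over HTTP
    <match kubernetes.**>
      @type elasticsearch
      host elasticsearch
      port 9200
      logstash_format true
    </match>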
Now, how does the Apache Kafka platform fit in there? How many of you have heard about Apache Kafka and have some rough idea what it is? Okay, that's great, almost everyone. To do a quick recap for those who didn't: Apache Kafka is a distributed event streaming platform. What does that mean? You will get many different definitions if you Google it, but one I like quite a lot is that an event streaming platform is something which combines the delivery of events, the storage of events, and the processing of events, and the Apache Kafka project, with all its different components, offers all three of these capabilities.
And if we want to get into the real buzzwords, then it of course checks all the boxes: high performance, highly scalable, highly available, reliability, durability, fault tolerance, and so on. That's all part of what Apache Kafka offers, but it also has a huge ecosystem of different clients, libraries, tools, connectors, and so on. That's one of the most important aspects, because it is behind pretty much all of the demos I will be showing today, and most of what you do with Kafka.
So why would Apache Kafka be a good tool to put into the logging pipeline which we just talked about? Well, first of all, it has a very efficient TCP-based protocol. In most parts of the Apache Kafka project, it doesn't really care that much about what's inside the messages; it doesn't really decode them. It really just takes them, dumps them to the disk and, eventually, when needed, reads them from the disk and passes them to the consumer. That allows it to work with great performance, and it has really great throughput when ingesting messages.
It also has configurable reliability, which is important, because from my experience when I talk with people about monitoring data, I find it quite interesting that there are a lot of people on one side of the spectrum who say: monitoring data? I don't need any reliability. If I lose it because something goes wrong, then okay, it happens; it's just monitoring data, it's not banking transactions or whatever. And then, surprisingly, there are also a lot of people on the complete other side of the spectrum who say: oh, monitoring data, that's the thing where I absolutely need 100% reliability and availability, because that's what I need to analyze any issues and any problems and to see what was happening. What would I do if something happens and I don't have the monitoring data for it?
So, interestingly, there are people on both sides of the spectrum, and the nice thing is that with Kafka you can actually configure the reliability depending on how you need it. If you really want availability and reliability, you can have it. If you don't need it, you don't have to use it, and you get a little bit more performance or maybe lower latency, and so on.
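As a rough illustration of the two ends of that spectrum, here is a minimal Java producer sketch; this is not code from the talk, and the bootstrap address and topic name are placeholders.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class LogProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "my-cluster-kafka-bootstrap:9092");
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

            // "It's just monitoring data": fire-and-forget, lowest latency,
            // records may be lost if a broker fails.
            props.put(ProducerConfig.ACKS_CONFIG, "0");

            // "I absolutely need this data": wait for all in-sync replicas
            // and retry safely (pair this with a replicated topic).
            // props.put(ProducerConfig.ACKS_CONFIG, "all");
            // props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("logs", "some log line"));
            }
        }
    }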
Now, the important thing about putting Kafka into your logging pipeline is that it decouples the different components: it acts as a kind of buffer between them. That's important, because if something goes wrong in your cluster, or in your system landscape in general, there will for sure be many different services which will just go completely crazy and start logging thousands of different exceptions per second, and then obviously all of these logs will be collected and sent somewhere. It means that when something goes wrong, it quite often generates a huge amount of monitoring data, and that's where the decoupling is really great: with Kafka and its great ingestion performance, you can just push the data into Kafka, let it sit there for some time, and postpone the processing part until some later time.
So what you get will be something like this. You have one part where Fluentd sends the data to Kafka, and this can basically work on its own, even if the components downstream, for example, aren't available. And then you have the second part, where you get the data from Kafka into Elasticsearch, and that is again a separate component which can work even if Fluentd is not running.
A
That
said,
if
nobody
is
pushing
the
data
into
Kafka,
then
they
won't
be
available
there,
of
course,
so
this
is
kind
of
what
you
get
and
what
you
can
use
and
get
advantage
of
the
decoupling
and
the
buffering
of
the
data
and
yeah.
You
can
use
this
pattern
in
in
any
systems
basically,
but
you
can
of
course,
do
it
on
kubernetes
as
well,
and
that,
of
course
includes
the
Apache
Kafka
cluster,
which
can
run
on
the
kubernetes
cluster
as
well.
So,
let's.
So let's get into the first demo. Looking at the running pods in my cluster: I intentionally deployed everything into a single namespace so that it's easy to see, and I already deployed these things to save the time on pulling the images and waiting for things to start up. I basically have here the Strimzi operator, which I use because I work on it most of my time; I use it to run the Kafka part, and it has already deployed the Kafka cluster and the Kafka Connect cluster.
Then I have Elasticsearch deployed here and Kibana deployed here, and then I have all the Fluentd pods which are collecting the logs. Now, I already deployed all of it, but we can take a quick look at how I deployed it, and at the end of the slides I have a link to the GitHub repository where you can find all the details, all the YAMLs and sources, and so on.
I hope all of you have already heard about Kubernetes operators and have some idea how they work. To deploy the Kafka cluster, what I really just do here is create a resource with the kind Kafka, where I specify how the Kafka cluster should look: how many replicas it should have, how many resources it should have. You can see that for the demo it's a fairly small cluster by usual Kafka standards. I can configure the listeners, the authentication and authorization, and the storage. I can get Prometheus metrics out of the box, and so on. Just by creating this resource, the operator spins up and runs the whole Kafka cluster for me.
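Such a resource looks roughly like this; a trimmed sketch along the lines of the public Strimzi examples, not the exact file from the demo.

    apiVersion: kafka.strimzi.io/v1beta2
    kind: Kafka
    metadata:
      name: my-cluster
    spec:
      kafka:
        replicas: 3                 # fairly small cluster for a demo
        listeners:
          - name: tls
            port: 9093
            type: internal
            tls: true               # TLS listener used by the clients
            authentication:
              type: tls
        authorization:
          type: simple
        storage:
          type: persistent-claim
          size: 100Gi
          deleteClaim: false
      zookeeper:
        replicas: 3
        storage:
          type: persistent-claim
          size: 10Gi
          deleteClaim: false
      entityOperator:
        topicOperator: {}           # manages KafkaTopic resources
        userOperator: {}            # manages KafkaUser resources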
Then I did the same for Kafka Connect, because Kafka Connect really runs as a separate application which just connects to Kafka to get the data on one side and then sends it somewhere else on the other side.
Then I need to create this Kafka user, so that I can authenticate with the Kafka cluster and do things in a secure way, and I specify which authorization rights it should have; I can just scroll through it.
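A KafkaUser of that shape might look like this; the ACLs shown are assumptions for illustration, not the demo's exact rights.

    apiVersion: kafka.strimzi.io/v1beta2
    kind: KafkaUser
    metadata:
      name: fluentd
      labels:
        strimzi.io/cluster: my-cluster   # which Kafka cluster the user belongs to
    spec:
      authentication:
        type: tls                        # the operator issues a client certificate
      authorization:
        type: simple
        acls:
          - resource:
              type: topic
              name: logs
            operation: Write             # hypothetical right; the demo may differ
          - resource:
              type: topic
              name: logs
            operation: Describe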
A
And
then
I
create
some
secrets,
so
I
don't
show
the
secrets
here.
They
are
just
commented
out
because
I
don't
want
to
share
the
secrets
on
record,
but
I
will
then
later
use
them.
They
are
secrets
with
credentials
for
my
AWS
account
and
my
my
slack,
which
I
will
use
later
in
the
in
the
second
demo.
So
we
will
see
that
later
and
then
I
just
deploy
the
Kafka
connect
as
well,
and
the
Kafka
connect
itself.
A
It's
really
just
a
framework
for
the
Apache
Kafka
project
provides
so
I
need
to
add
different
plugins
to
it,
which
I
can
then
use
integrate
with
the
other
systems.
So,
for
example,
here
I
tell
the
operator
that
I
want
to
add
the
camel
elasticsearch
plugin.
If
you
heard
about
Apache
common,
that's
another
great
Apache
project,
which
is
hundreds
of
different
Integrations
and
you
can
use
all
of
them
with
Kafka
to
then
integrate
it
and
get
your
monitoring
data
to
all
kinds
of
other
systems
and
not
just
elasticsearch.
A
So
I
basically
just
tell
the
operator
that
I
want
to
use
this
use
this
plugin
and
it
will
automatically
download
it
and
build
a
new
container
image
for
me
and
then
I
add
some
more
plugins
I
mount
the
secrets
and
metrics
again
and
again.
The
operator
makes
it
running
for
me
now.
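The KafkaConnect resource with such a build section looks roughly like this; the registry and the plugin artifact URL are placeholders.

    apiVersion: kafka.strimzi.io/v1beta2
    kind: KafkaConnect
    metadata:
      name: my-connect
      annotations:
        strimzi.io/use-connector-resources: "true"  # manage connectors via KafkaConnector CRs
    spec:
      replicas: 1
      bootstrapServers: my-cluster-kafka-bootstrap:9093
      tls:
        trustedCertificates:
          - secretName: my-cluster-cluster-ca-cert
            certificate: ca.crt
      build:
        output:
          type: docker
          image: registry.example.com/my-connect:latest   # where the built image is pushed
        plugins:
          - name: camel-elasticsearch
            artifacts:
              - type: tgz
                url: https://example.com/camel-elasticsearch-connector.tar.gz  # placeholder URL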
The next part is the Elasticsearch deployment. I'm sure there's an operator for Elasticsearch as well, but to be honest, I'm not working with Elasticsearch that much, so I just deployed it as a StatefulSet here instead of using the operator. That's deployed as well, and the same goes for Kibana, which is running here too. Then the next thing I need to do is create a topic where I will get all the different logs from my containers.
So I use the KafkaTopic resource for that. I say that the topic should be named logs, and the operator will automatically create the topic in the Kafka cluster. Then the only other part needed is the actual Fluentd deployment, which will read the logs from the containers and push them into the Kafka topic. So let's look at this one a bit more closely, from the beginning.
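The topic resource is tiny, something like this; the partition and retention values are assumptions for illustration.

    apiVersion: kafka.strimzi.io/v1beta2
    kind: KafkaTopic
    metadata:
      name: logs
      labels:
        strimzi.io/cluster: my-cluster   # tells the operator which cluster to create it in
    spec:
      partitions: 3
      replicas: 3
      config:
        retention.ms: 604800000          # keep the log data for 7 days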
There are just some usual RBAC files for accessing the Kubernetes API, then I again create the Kafka user, and then there is this ConfigMap, which is where the main configuration for Fluentd is located. So what are the important parts here? This input basically tells it where it should look for the log files, and with that it scrapes the logging data produced by the containers.
Then there is some filtering; if you remember the slide with the pipeline, that's where you can do all kinds of filtering of the data. And then there is the output part, which is the important one: I basically tell it, okay, I want to output the data into my Kafka cluster. Let's send it to the service my-cluster-kafka-bootstrap on port 9093, and let's send it to the topic named logs.
And then another important part, for example, is here at the end: the TLS certificate used for authentication. Then here is the parsing of the logs, depending on who is producing them, and then there is the DaemonSet, which makes sure that one instance of Fluentd will be running on each of my cluster nodes, collecting the logs.
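Put together, the ConfigMap is shaped roughly like this; a sketch based on the fluent-plugin-kafka documentation rather than the demo's exact file, with placeholder paths.

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: fluentd-config
    data:
      fluent.conf: |
        # Input: scrape the container log files on the node
        <source>
          @type tail
          path /var/log/containers/*.log
          pos_file /var/log/fluentd-containers.log.pos
          tag kubernetes.*
          <parse>
            @type json
          </parse>
        </source>

        # Output: push everything to the Kafka topic named "logs"
        <match kubernetes.**>
          @type kafka2
          brokers my-cluster-kafka-bootstrap:9093
          default_topic logs
          ssl_ca_cert /fluentd/certs/ca.crt           # cluster CA for the TLS listener
          ssl_client_cert /fluentd/certs/user.crt     # client certificate for authentication
          ssl_client_cert_key /fluentd/certs/user.key
          <format>
            @type json
          </format>
        </match>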
Now, as I said, all of that is already running, so I can use this simple shell script, which I created just to avoid typos when typing the commands. What it does is use a Kafka console consumer to connect to the Kafka broker and start reading the messages from the logs topic, and that's what we can now see on the screen. It's not that easy to read in the JSON format, but these are the log messages which are collected there.
You can see there's some stuff from the Calico networking which I use on my cluster, and there are some Kubernetes logs. This is from the ZooKeeper which is part of the Kafka cluster itself; so with the Kafka cluster, we are actually monitoring the Kafka cluster itself as well. And so we are getting all of this into the Kafka brokers, into the topics.
And now, all I need to do to get this into Elasticsearch is deploy the actual instance of the connector, which, again thanks to the operator pattern, I can do with YAMLs instead of some REST commands; you can keep all of this in your Git and use GitOps and so on to deploy and manage all these things. What I do here, basically, is say: okay, I want to have a connector running, it should use this Camel Elasticsearch connector, so let's apply the YAML for it.
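The connector instance is again just a small custom resource, roughly like this; the class and option names are assumptions based on the camel-kafka-connector documentation, so check there for the exact values.

    apiVersion: kafka.strimzi.io/v1beta2
    kind: KafkaConnector
    metadata:
      name: elasticsearch-sink
      labels:
        strimzi.io/cluster: my-connect   # the Connect cluster to run in
    spec:
      # Connector class and options are assumptions, not the demo's exact values
      class: org.apache.camel.kafkaconnector.elasticsearchindexsink.CamelElasticsearchindexsinkSinkConnector
      tasksMax: 1
      config:
        topics: logs
        camel.kamelet.elasticsearch-index-sink.hostAddresses: elasticsearch:9200
        camel.kamelet.elasticsearch-index-sink.indexName: logs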
I can see that it's all deployed and running, and when I now switch to Kibana and refresh, you can see, when I zoom in, that 16:19:47 is the timestamp of one of the last records, which is pretty much the time we have right now. So, okay, we have the data there and it seems to work.
So that works, but that's just the beginning, right? We probably wouldn't do all this effort just to get the data into Elasticsearch again. We can leverage the fact that Kafka is an event streaming platform and do more with it. What might be the things you would need with logs? Maybe some archiving, maybe some alerting, maybe some analytics.
All of that we can do, because we can just leverage the components which we have in the Kafka ecosystem. So what we're going to look into in the second demo is how to take what we just saw and elevate it a bit more.
So let's say that I'm using Slack with my team and with my ops people, and I want to have some alerting which will detect alerts in the logs and send them to Slack, so that I know there might be some problems, something to look into, and so on. Another thing which we can show: in most cases you might want to archive the data in some long-term storage.
Maybe, if you work in some regulated industry, there's even some government office which says you have to store the logs for at least five years, and so on. All of that we can do basically just by connecting to the existing Kafka cluster, reading from the topics where we already have all the logs, and leveraging all of that.
So let's start with the archiving part, because that's actually super easy. Again, all I need to do is deploy another connector; this time it's the Camel AWS S3 sink connector. What I do here is again tell it: okay, read the data from the topic named logs in the Kafka cluster, and then send it into my Amazon S3 logs bucket. And then, because I don't really want to have a single file in the S3 bucket for every single line of log, there is some batching of the records.
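The archiving connector follows the same pattern; again, the class and option names are assumptions based on the camel-kafka-connector documentation, and the bucket name is a placeholder.

    apiVersion: kafka.strimzi.io/v1beta2
    kind: KafkaConnector
    metadata:
      name: s3-sink
      labels:
        strimzi.io/cluster: my-connect
    spec:
      # Connector class and options are assumptions, not the demo's exact values
      class: org.apache.camel.kafkaconnector.awss3sink.CamelAwss3sinkSinkConnector
      tasksMax: 1
      config:
        topics: logs
        camel.kamelet.aws-s3-sink.bucketNameOrArn: my-logs-bucket   # placeholder bucket name
        camel.kamelet.aws-s3-sink.region: eu-central-1
        # The AWS credentials come from the mounted Secret, not from plain text here.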
And you can see that there's a bunch of files here already; the last one is from 16:09. But remember that there is this batching, and the cluster I'm using for the demo is not that busy; there aren't hundreds of other applications, so there's not that much stuff, and it doesn't produce millions of log lines every second which would get to the archiving. And of course, on the S3 bucket you can then configure all the rules: how long the data should be stored, when it should be moved to Amazon Glacier, for example, and so on.
Now, let's look at the alerting, and this is the part which I will actually be deploying live. First I will create yet another Kafka topic, which I will use for the alerts: whenever I want to send some alert to my Slack, I will send a message to this topic. So this time I have to do the actual kubectl apply.
Then I will create yet another connector. This time it will read from the topic called alerts, and it's a Slack sink connector; it will send the messages to a channel called kcd-berlin-2022. So again, kubectl apply.
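The Slack connector is shaped like the previous two; once more, the class and option names are assumptions based on the camel-kafka-connector documentation.

    apiVersion: kafka.strimzi.io/v1beta2
    kind: KafkaConnector
    metadata:
      name: slack-sink
      labels:
        strimzi.io/cluster: my-connect
    spec:
      # Connector class and options are assumptions, not the demo's exact values
      class: org.apache.camel.kafkaconnector.slacksink.CamelSlacksinkSinkConnector
      tasksMax: 1
      config:
        topics: alerts
        camel.kamelet.slack-sink.channel: "#kcd-berlin-2022"
        # The Slack webhook/token is read from the mounted Secret.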
And then I have here this super simple test application, which is just a very simple REST API that doesn't really work, because all it does is log errors so that we can use them in the demo; it's probably the best app you can see in terms of logging errors. And then I deploy the actual alerting app. That's a Kafka application using a Java-based cloud-native framework called Quarkus. So I create the Kafka user, and then I create just a Deployment with the application, where I configure how to do the authentication, which topics it should use, and so on.
A
Wrong
file:
23,
okay,
so
let's
create
it
and
let's
quickly,
look
at
the
the
source
code,
you
will
find
the
whole
Maven
project
in
the
GitHub
repo
later.
But
basically,
this
is
using
the
Kafka
streams
API,
which
is
Kafka
stream,
processing
library,
and
what
I
can
do
here
in
just
I.
Don't
know,
15
lines
of
code
is
I,
basically
tell
it
to
read
all
the
logs
from
the
logs
topic
and
then
I
do
this
filtering,
which
in
this
case
for
the
demo
is
super
sophisticated
I'm
just
looking.
I'm just checking whether the message contains this special error, "something ugly happened", which is what my test application is producing all the time, and then I basically count how many times this error happened in the last minute. If it happened more than 10 times in the last minute, then I want to raise an alert, because maybe something strange is going on. You know how it is: some error message happens all the time, once or twice, but if it happens more than 10 times in a minute, maybe that's something I want to alert on. So I prepare this alert message and I just tell Kafka Streams to send it to the target topic, which is the alerts topic, and that's all I need to do; the Slack connector will then do the rest.
I
know
that
this
is
simple,
but
well
it's
a
reliable
demo
which,
which
can
be
shown
so
that's
all
running,
and
what
I
can
do
now
is
I
have
again
here.
A
Is
this
simple
shell
script,
which
just
triggers
a
lot
of
the
errors
from
from
the
test
application
and
now
I
have
the
last
script
here
which
just
to
verify
that
it
works,
connects
to
the
to
the
Kafka
topic
with
the
alerts
and
reads
reads
from
there
and
then
once
the
alert
is
raised,
we
should
see
it
here.
Remember
that
we
are
kind
of
configure
the
application
to
work
in
one
minute
windows.
So it should now take a few minutes for the window to close; then it should see that the error occurred more than 10 times, and it should send the alert.
We'll show you the Slack channel here, which right now is empty. I actually don't know how to zoom it, so you might not see it that well.
Well, let's get back to the slides, and hopefully, when I finish with them, there will be something there. As I already said, you can use these patterns also beyond logging, for tracing or for metrics. This is a picture from the Jaeger tracing documentation, which actually shows how to use Kafka with Jaeger for traces. But as the last thing, I wanted to talk about why you might not want to use Kafka in your logging pipeline.
One thing you should understand is that, however hard we try in the Strimzi project to make it super easy to run Kafka and to have the operator do everything for you, it's not completely perfect and it might fail. You probably should have some understanding of how it works if you want to rely on it in production. So there is some effort involved: you need to have some knowledge, and you need to have someone to take care of it. Additionally, Kafka isn't always the cheapest thing to run.
If you really want to use it for high-throughput messaging, it might need a lot of resources, so that's something you should consider as well. At the end, it's a kind of question: do you have enough of all these different use cases to make use of it and to leverage it, or not? It's not like everyone should absolutely have this in their pipeline, but in many cases it might make your life much easier and might help you.
So yeah, you should consider it. Last slide: right now in the Strimzi project we are running a community survey, so if you are interested in running Kafka on Kubernetes, or if you already know and use Strimzi, please take a few minutes and fill it in. And that's it. Here are the slides and the sources; this will just redirect you to the GitHub repo. Let me quickly check if we got something... and we didn't get anything, so let's try to trigger the alerts again.
So, that's it. Sorry for the example not working; that's what happens with live demos. Any questions?