Description
Join Pat Stephens, Fluent Bit contributor and Kubernetes expert, as he shares best practices on Advanced Routing with Fluent Bit. Fluent Bit is a high-speed observability agent that can route logs, metrics, and traces to destinations such as Kafka, Splunk, Elasticsearch, and more.
In this webinar you'll learn:
1. Fluent Bit and Tagging: How is data interpreted through a pipeline and routed to destinations
2. Sending data to two or more destinations
3. Filters and Outputs: How you can use routing and filters to process data independently
Pat: So I was just going to give a bit of background as to who I am, and, as you can probably tell, I'm English, so I'll also be correcting Austin's pronunciation of the word "routing" and of my surname. I'm Patrick Stephens, the tech lead of infrastructure at Calyptia, the company that provides the open source maintainers for the Fluent projects.

So all the ecosystem pieces that are here for Fluent Bit, the Helm chart and the operator, and we're starting to maintain Fluentd as well, but we also build some enterprise products on top of that too. I've been working for what seems a terribly long time, 21 years in software engineering professionally. I did start in defence, so I spent a very large amount of time in defence, but more recently, probably the last six or seven years...

Well, there are different kinds of defence, but I was doing Kubernetes and containers before I moved over to a database company called Couchbase back in early 2021, which is where I first started using Fluent Bit. We were looking at observability there and how to add logging output and some of the metrics as well, and so I put together a Fluent Bit sidecar for the Couchbase operator, and as part of that work I started contributing to the project and getting quite active on Slack.

I wrote quite a few blog posts and sort of tried to help people out. You know, I spent a bit of time figuring out solutions and some of the workarounds, trying to help people and point them in the right direction. Then Calyptia reached out and made me an offer in late 2021, so I moved over to them and became an open source maintainer as well around the same time. Although primarily I'm working on open source, its CI/CD, the automation, some of the release management as well, I've also been putting together some of our commercial offerings and dealing with the internal infrastructure: running our SaaS solutions and making commercial offerings to customers as well.
I've been a developer on it for a bit, though I don't really understand the code in enough detail; I can understand the high-level aspects. But this is more about how you use Fluent Bit, how it works, what you would do with it, rather than how the internals work. So I'm going to focus on some of that today and give you some hopefully fairly simple, occasionally tricky, but straightforward examples for routing and some of those other aspects that you can use Fluent Bit for.

You may have seen me if you're on the Slack; I probably spend far too much time on there, but yeah, I'm quite active on the open source Slack, so come join and say hi. Hopefully some of the people here have joined from Slack, and I may have chatted to you, or offered you a solution, or hopefully solved some problems for you as well.
So, what we're going to do today: I'm going to give just a quick intro to Fluent Bit and what it is; I'm not going to spend too long on that, but the main focus is routing. How do you get data from A to B? How do you take stuff from various inputs and send it to various outputs? How do you make data pipelines? I'm going to try and show you this with some straightforward examples.

It's a bit tricky with some of the more advanced stuff, but I do have some actual real-life examples in there. I'm going to try and show you with conceptual examples as well, to make it straightforward to understand the concepts, rather than with some very, very complex thousand-line configuration.

One of the things we wanted to focus on as well is how you can send data to multiple outputs independently, and how you can easily do that. You know, there are quite a few vendors that provide integrated stacks with agents that just send to the one output; how can you use the CNCF open source project Fluent Bit to easily plug in new outputs, different outputs?

You know, cases where you want to send some data to different destinations, and maybe you want to filter it and process it in different ways, and how you can do that without having to duplicate the data. So, just a quick overview of Fluent Bit first. I wasn't sure how many people would know it, so I just wanted to give people a little bit of background. So what is it?
It provides collection of logs, metrics and traces in a single agent, with processing and filtering, those kinds of data pipeline functions, and it allows you multiple outputs. It's high performance: it started life as an embedded solution, so it works quite well for minimizing resource costs and the like in some of the large cloud providers, which, as you can see, is why they started boxing it up as the default observability agent.

So if you're on one of the cloud providers using Kubernetes, you will be using Fluent Bit somewhere in the stack; you just might not realize it. There are also things like the observability agent for GCP, which can be installed on VMs as well, plus all those standard distributions. So lots of people use it, and it's not just a dumb pipe: one of the things we're going to show in later webinars is how you can do processing and such before the data ever leaves Fluent Bit.

It's quite useful in some places to add some context to your data. You know, if I've got a thousand clusters, which cluster is this coming from? So add some context there. But also things like redaction and the like: remove the data before you send it, rather than trying to sanitize your database, or wherever it ends up, afterwards. And part of that is reducing costs as well.

You know, a lot of logs may not be very useful, so let's filter down to the ones that are useful and send those to the higher-cost receivers that can do a lot of analysis on them. And why do we use it? A billion downloads; I think that was as of July, probably June or July, I'm not sure. So there's quite a bit of adoption.
There are various outputs supported: lots of different vendors, OTel, and then generic TCP and HTTP. A lot of the vendor plugins are based on the generic TCP and HTTP outputs, but the generic ones are there as well. Internally, Fluent Bit just has a common data structure; everything works on that and is then transformed into the output format. So it can work with multiplexed data coming in and multiple destinations going out. And to touch on Michael's question: yeah, it absolutely can work with kind if you're familiar with Kubernetes.

There's no reason why it can't; a lot of the tests you've looked at just run on kind in CI, but any conformant Kubernetes should work. It just consumes the Kubernetes logs and sends them out, if that's what you're going to do with it. So how does it work? I've talked about inputs and outputs, in other words sources and sinks.
Basically, you've got various configuration options to control it, so we've got a declarative syntax in YAML, or the older classic style for that as well, and you connect these different sources and sinks together along with any transformations; we call those filters in Fluent Bit terms. And you can multiplex input and multiplex output, so each of the filters, or each of the inputs, or each of the outputs can work with a number of data streams.

Those can be persisted to disk if you want. So if you need to ensure that you're not losing any data, you know, if there's some kind of issue with your output, maybe it's gone down, maybe it's got some kind of loading issue or a networking problem like that, we basically buffer the records, persisted to disk, until it comes back up again, and so on. And there are a lot of different options in there.
We've touched on things like dynamic discovery here. The Kubernetes deployment, which I'll show you an example of later on, is typically a DaemonSet; that's what the open source Helm chart does by default. That can get all the logs from the node itself and off the pods, scrape metrics, receive metrics and traces from the host, and pass them on to wherever you want to send them. It can also query the Kubernetes API for extra metadata about the pod logs.

So if you want to add the labels, annotations, you know, pod names and so on, those kinds of things can be done as part of your pipeline before it leaves your agent. And there's some powerful filtering available as well via Lua and WASM. Lua, I don't know if anyone's used it, is just a scripting language, but it gives you a lot of flexibility in your filters in terms of what you can do; it's very powerful, and there are lots of different integrations.
I just wanted to give you an overview of what Fluent Bit is before diving into the more detailed stuff. So today the focus is on routing: how do we create our pipelines, how do we get stuff from inputs to outputs, maybe applying some kind of intermediate processing? I'm going to cover that now. This sort of diagram was taken from the documentation.
It kind of shows you how we can multiplex outputs, and how the different stages in the Fluent Bit pipeline work. We start with an input; we can parse stuff, so that's taking unstructured data like logs, which may just be lines, and converting them into some kind of structured format.

Internally Fluent Bit uses the MessagePack format, which is a type of binary JSON, and it converts everything into that. Then we can apply a set of filters (we don't have to), and then we can start routing it to different outputs. That's the way you create these kinds of data pipelines, and I've shown you the old-style, non-YAML format here as well. So we've got an input for tail...

...an input for systemd, we then apply a filter, and then we've got multiple outputs here. As you can see, we're matching either everything or only the logs, depending on which output it is. So it's showing you how you can take data, grab it for different types of pipelines, and send it to different destinations very easily. And, slightly differently from other tools, there's no duplication of the data; sorry, let me say that slightly differently: we don't have to copy data from one pipeline to another.
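As a reference for the shape of that configuration, here is a minimal classic-format sketch along those lines; the paths, hosts and the "logs." prefix are illustrative assumptions rather than the exact slide contents:

    [INPUT]
        Name   tail
        Tag    logs.app
        Path   /var/log/app/*.log

    [INPUT]
        Name   systemd
        Tag    logs.host

    [FILTER]
        # Applies to every record, whatever its tag
        Name    record_modifier
        Match   *
        Record  cluster demo-cluster

    [OUTPUT]
        # Matches everything, regardless of tag
        Name   stdout
        Match  *

    [OUTPUT]
        # Matches only records whose tag starts with "logs."
        Name   loki
        Match  logs.*
        Host   loki.example.com
        Port   3100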
So why might you want to do this? Typically, in larger organizations or bigger deployments, you want to have logs and metrics from multiple sources. You know, you want to get the kubelet logs; you want to get the systemd logs to see, is the kubelet failing, or is there some other issue? Maybe you've got some audit logs, some additional logs as well.

Maybe it's not Kubernetes at all; maybe there's a switch log, something going to syslog or the like, and you want to grab all those different logs and metrics and traces and send them to separate destinations depending on the type of data.

You know, typically I've seen deployments where, while we're running stuff in dev, we get all the logs, but only for about 24 hours or something like that, so we can go and debug and figure out what's going on; and then when we're running in production, you want to filter that down and say, for these clusters, only tell me about the important things. But there's also stuff like this: if you've got any kind of security integration, like a SIEM or something like that...

...you may want to push audit logs and the like to some kind of glacier storage, or somewhere you can do post-intrusion analysis as well. The idea being: yes, these are all the logs we've got in production, but maybe there's been an intrusion and we can't trust those logs anymore, so let's push them somewhere else that we can make sure is not going to be attacked.
So there are lots of different approaches for that, and we can do all of this with a single Fluent Bit deployment. As I touched on before, you can send stuff with the Fluentd agent, and you can send stuff to Elastic with the Elastic agent, but you can't send across them that way. So it's quite useful to be able to say: look, later on maybe we want to add another tool, and we don't have to drop a new agent in.

We don't have to figure out how to recreate the existing data pipelines we have in the new tool. We don't have to redeploy this new tool and deal with any of the maintenance complexities of managing the life cycles of multiple agents: you know, do they interact with each other, do they consume resources from each other, those kinds of things.

And as well, you just don't want to deal with all that upfront cost. We just want to say: start sending data to this new, wonderful tool I've got, or that I want to evaluate. And you can know for sure that it's sending the exact same data as it sends to the current one, because it's using the same data pipeline as well. So that's just one example; there are probably other ones as well.
So how does routing work? Simply put, inputs provide tags, and everything else, the filters and the outputs, matches on those tags. It's quite useful. I've drawn sort of data pipelines here, but there's no direct way of saying "this is the data pipeline from A to A and B to B". What you're actually saying is: each filter in that chain is matching the data from A, that output is matching the data from A, and this output is matching the data from B. So that's, at a high level, how routing is done.
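In config terms, that tag-to-match relationship looks like this minimal sketch (the tag name is an arbitrary example):

    [INPUT]
        # This input labels every record it emits with the tag "audit"
        Name   dummy
        Tag    audit

    [OUTPUT]
        # This output independently asks for any record tagged "audit"
        Name   stdout
        Match  audit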
An input can be part of multiple pipelines, but without having to, say, copy data from pipeline A to pipeline B; no, you're just consuming the same data from the same input, but in different pipelines. To touch on that, I wanted to also give a bit of an overview of what the data looks like inside Fluent Bit, to give you an idea of how this works. I touched on it before: internally...

...all data is transformed into MessagePack. It's a binary type of JSON which supports binary data and is optimally serialized, and there are those kinds of benefits as well. The idea being: you can have data from any type of source, in whatever format it's in, and the job of the input plugin is to transform the data it receives into that internal format. The job of the output plugin is the reverse, transforming the internal format into whatever the destination expects.

So that's how you can plug and play all these different things: because they all work with the same data format, you just put them in a sequence and off they go, transforming the data and outputting it. And every record in Fluent Bit has a timestamp. Typically you parse that from the log file, but it could just be the time it was received or something like that. Then it has the actual data that goes along with it.
So there's the structured data: you've got the actual log message, the actual metric, whatever it happens to be. And the important thing for routing is the tag. A tag is like saying "I am the data for X" or something like that. So, you know, I have a tag that says "this is the audit logs data", and then anything that needs to work on the audit logs data matches on that tag. It says "give me the data for audit logs", and it will receive all the data from the audit log input.

Something else can also say that, and it will receive the same data. Data generally keeps the same tag through the pipeline, but there is a filter, which I'll touch on later, that lets you modify the tag, to start migrating pipelines, sending data from one pipeline into another pipeline and stuff like that.
So here's an example I ran earlier today; if you convert the epochs, it was sometime this morning, UTC time. I just want to show you an example using a container where I use the dummy input. This is just an input that generates data for us without actually receiving anything, together with a standard output; that's an output plugin that simply writes to standard output.

...and then there's the actual content of the data you've sent. The dummy plugin defaults to sending a basic message like this, so internally it looks like, yeah, just a JSON message. The standard output plugin just adds some framing to highlight where the fields are, the key-value pairs, those kinds of things.
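A sketch of the kind of config being described; the dummy input's default message and the stdout framing look roughly like this, with an obviously illustrative timestamp:

    [INPUT]
        Name   dummy
        Tag    dummy.log

    [OUTPUT]
        Name   stdout
        Match  *

Example of what stdout then prints (the tag, then the timestamp, then the record itself):

    [0] dummy.log: [1690000000.000000000, {"message"=>"dummy"}]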
So let's dive into tagging a little bit: how does it work exactly? As I mentioned, all inputs need to provide tags. Now, you can do that in various ways, and it depends a little bit on the input plugin as to what exactly is supported, but these are the general approaches. You can tag things explicitly. So you can say, oh, let's go back one.

So this is, you know, "this is my log": this tag is going to be explicit, and you can actually use the same tag for multiple sources if you really want to start doing stuff like that. It's just identifying the data for other things to match against later. You can also use a wildcard, and that's quite useful for things like the tail input plugin, which is the one that reads files on disk.

It acts as if it's tail -f, and it will substitute the name of the file it's tailing into the wildcard. So that's quite useful.
I'll show you later with the Kubernetes filter, but yeah, it's quite useful to say: just tail everything in this directory and tag it with a wildcard. In this case you can see I've also added a common prefix. That's quite useful, because when I want to match, I can match on "logs.*" and get all the different things; I don't have to be explicit, saying here's the audit one, here's this other one.

You can just use wildcards. And then you can do regex stuff. So you can extract (this example is taken from the tail plugin) the name of the log file if you want, maybe remove the ".log" on the end, because that doesn't make sense in a tag. I've used it in the past as well just to give much nicer tag names than the wildcard approach would produce, and it can be quite powerful too.
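The three tagging approaches just described, sketched in classic format; the Tag_Regex pattern here is a simplified illustration, not the full example from the tail plugin docs:

    [INPUT]
        # Explicit tag
        Name   dummy
        Tag    my_log

    [INPUT]
        # Wildcard tag: the tailed file's path is substituted into the '*'
        Name   tail
        Tag    logs.*
        Path   /var/log/audit/*.log

    [INPUT]
        # Regex tag: named capture groups can be referenced in the tag
        Name       tail
        Tag        logs.<name>
        Tag_Regex  (?<name>[^/]+)\.log$
        Path       /var/log/other/*.log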
You can do a lot of regex in there and use it quite powerfully. And then the last thing I want to touch on for tagging: there's a special filter called rewrite_tag, which I'll show an example of later. The thing with this filter is that it can receive data with one tag and create data with a new tag.

So it's quite useful for, say, sending data you received from one input off into a whole separate pipeline, while allowing the existing data to carry on in the current pipeline so you can mess about with it there. It gives us some very powerful options if you want to set up some very complex data pipelines. Say you've got a common bit of processing that you apply to all your data, or to certain types of data: you can say, right, okay, when this data comes in, actually send it through this processing chain.
The example I'll show later is about tagging data for specific outputs as well. So maybe all your logs go to Loki, for example, in the cluster, but you want the audit logs to go very explicitly to a SIEM as well; then you can use rewrite_tag to give that data a new tag that only the SIEM output matches. I mean, you can do it in other ways as well...

...you know, with the matching and such, if you set it up appropriately, but sometimes it's quite useful; or you may have very disparate matching values you're going to use, so it can be useful to completely rewrite to a new format.
Matching. So matching is like the inverse of tagging. The idea being here: if you have a filter or an output, it sits there independently and says "I want to match data that's got this tag". It's not sat in a pipeline; you don't configure a pipeline in that way, you know, saying do A, then B, then C, then D. What you're saying is "this filter matches this data", and that conceptually builds the pipeline. It's very flexible in the sense that you can send data to different pipelines quite easily.

It can just be quite hard sometimes, if you make things very complex, to see those conceptual pipelines. As with tagging, you can do matching in the same ways. You can match explicitly, so only send me the stuff that matches "my_log"; you can say, as we did there, match everything with the "logs" prefix; and then maybe you can do regex matching, if you're feeling like, you know, you want to hurt yourself with regexes.
You can use both kinds in the same section, and one does take precedence, but it can be a bit confusing if you've got a lot of config in between them: someone will see the Match and not see the Match_Regex, or vice versa. So just make sure you pick which approach you're going to use. And my guidance as well, always, is this: if you can avoid using a regex, it's a lot easier to maintain for a lot more people, so try and keep things simple.
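Sketched out, the three matching styles look like this (tags and hosts are arbitrary examples):

    [OUTPUT]
        # Exact match: only records tagged exactly "my_log"
        Name   stdout
        Match  my_log

    [OUTPUT]
        # Wildcard match: anything with the "logs." prefix
        Name   loki
        Match  logs.*
        Host   loki.example.com
        Port   3100

    [OUTPUT]
        # Regex match: powerful, but use sparingly per the guidance above
        Name         stdout
        Match_Regex  ^logs\.(audit|auth)$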
So this is multiplexing. I should probably touch on some of the configuration aspects. It doesn't matter what order you define inputs or outputs in; you can define all the outputs first and all the inputs right at the end, or vice versa. Inputs always happen at the beginning of the pipeline, where they receive the data, and outputs always happen at the end. Filters, though, run in the order they're defined.

So if you've got a configuration file, the filters will match in the order they're defined in the file, and if you're using command-line parameters, they match in the order given there. That's just one thing to be aware of if you're setting up filtering: make sure your order is correct. And be careful: there are some powerful tools, like including other files in the configuration file, and if you start using wildcards there, sometimes the order is undefined, so it can be quite tricky.
So just make sure you understand the ordering differences as well. Now, I wanted to show you how this kind of multiplexing matches. Here there are actually two pipelines, but everything matches independently, as I sort of touched on before. So we have an input A with a tag A. This matches the first filter, because that says "match A". It then says: oh, there's another filter that matches everything...

...okay, that also matches my tag A. And then we send it to output A, because that matches A, but we also send it to this other output, because that also matches. B is slightly different: it doesn't go through the first filter, because it doesn't match, but it goes through the second filter and on to the "all" output; it doesn't go to the top output. So effectively you've got A going to output A and to output all, and you've got B just going to output all.

I've tried to sort of summarize that, because you can kind of see here this filter is saying "I match anything". So it's not part of a specific pipeline; it's just saying, as I get data: give me data from here, give me data from here, because they both match, and pass it along. And then the outputs start matching: oh, I only match tag A; well, that's the same tag that's coming through...

...and: I match everything; okay, I'll grab everything else. So that's kind of how it works. I hope that makes some sense; we can discuss it a little bit in the Q&A if you want as well. And now the rewrite_tag filter, which is a special type of filter.
This is very simplified, but generally you'd probably have some complex pipeline before each of these, and we kind of want to take a subset of the data and inject it into the other pipeline; that's where the rewrite_tag filter comes in. It can say: okay, I match B, and then I'm actually going to create a tag called A. What that means is that it acts as if it's the input A, so the record goes into the start of that pipeline and then carries on.

People will often define filters in order in their file, you know, so they have filter one, filter two, and then the rewrite filter. But actually, when it rewrites a record, that record starts at the beginning of all the filters again, because it's as if it is a new input; it goes all the way back to the start and then carries on through. And with the rewrite_tag filter you can also say: now that I've matched data from B, I can either let the original record carry on, or I can drop it once I've rewritten it.
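A sketch of a rewrite_tag filter doing what's just been described; the field name and tags are illustrative:

    [FILTER]
        Name   rewrite_tag
        Match  b
        # Rule format: $KEY  REGEX  NEW_TAG  KEEP
        # If the record's "source" key matches the regex, re-emit the
        # record under tag "a". KEEP false drops the original record;
        # true lets it carry on through pipeline B as well.
        Rule   $source ^audit$ a false

    [OUTPUT]
        Name   stdout
        Match  a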
So, a real-world example. A very common use case, and the default in the open source Helm chart, is that you deploy a tail input and a systemd input and use the kubernetes filter. I've linked out to it there; the kubernetes filter is a way of querying the Kubernetes API for some additional record metadata.

What we see here is that we're tailing all the container logs. You deploy this as a DaemonSet, so on each node it will have all the logs for the containers it's running, and those logs have a consistent naming structure enforced by the Kubernetes standard. That naming includes things like the namespace, the pod name and, I think, the container ID, and the idea being that from those three things you can then query...
...so this will actually end up expanding into the full name of the log file, and then we're saying, with this filter, match all the Kubernetes stuff. I don't want to match the systemd logs with it, because that doesn't make any sense, and it'll just moan if you try to do that, because it can't understand them.

The systemd stuff is all from the host: you know, the actual kubelet logs, journald stuff, all those kinds of things that you might want in order to find out what's going on, not just with the container but with your nodes as well. So what we're saying here is: send me all the Kubernetes stuff and let the filter go off and enrich it. I wanted to show you this because it's a very, very simple and very standard approach to deploying Fluent Bit as a DaemonSet, and it should work on any Kubernetes deployment.
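For reference, a minimal sketch of that standard DaemonSet pipeline, close in shape to what the open source Helm chart generates (trimmed down; exact options vary by chart version):

    [INPUT]
        Name              tail
        Tag               kube.*
        Path              /var/log/containers/*.log
        multiline.parser  docker, cri

    [INPUT]
        Name  systemd
        Tag   host.*

    [FILTER]
        # Strips the tag prefix to recover namespace/pod/container,
        # then queries the Kubernetes API for labels, annotations, etc.
        Name             kubernetes
        Match            kube.*
        Kube_Tag_Prefix  kube.var.log.containers.

    [OUTPUT]
        Name   stdout
        Match  *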
You know, we just tail the logs, then we add some additional metadata for Kubernetes, and then we can do whatever we like with it: send it to an output, do some more filtering, whatever you want. I've just got an example where I send it to standard output. It was the Helm chart I ran on a kind cluster earlier, so I'll show you in a minute, but I just wanted to touch on some of the tagging specifics here. So the Kubernetes logs, as I said, follow a standard...

...they must include the namespace, pod and container information, and the tail input encodes this information into its tag. So with that previous config we get this tag here, which expands into the /var/log/containers path plus whatever the file name happens to be. And then we can use that: we strip off all that prefix, take the namespace, the pod name and the container information, and pass them to the kubernetes filter, and it's quite interesting how it does that.
So here's the example that, as I said, I ran this morning. I ran it on kind, with the different kind-provided services in there, the kind networking and stuff like that. I used the open source Helm chart, linked from this slide, and I've just overridden a couple of values: I told it to exclude the Fluent Bit logs themselves.

It's quite useful to do that, because otherwise you get into a weird loop: since I'm sending to standard output, it will keep re-consuming its own output and adding more stuff, so it just snowballs. So I told it to exclude the Fluent Bit logs, and we can do that with simple annotations. That's one of the benefits of this setup: you can tell it to ignore logs, or to use special parsers, just with pod annotations, rather than having to change the whole deployment.
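The annotations referred to look like this on a pod spec; note that the kubernetes filter has to be configured with K8S-Logging.Exclude On (and K8S-Logging.Parser On for the parser case) for them to take effect:

    metadata:
      annotations:
        # Skip this pod's logs entirely:
        fluentbit.io/exclude: "true"
        # Or, instead, apply a named pre-defined parser to them:
        fluentbit.io/parser: nginx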
I know it's messy, because it's just a simple example using standard output, but you can see how this could just as easily be sent to Loki or Elastic or whatever, if you want to do that. So we take the full log name, and you can kind of see it in there. I'll highlight everything: we've got the name of the namespace, the name of the pod, and I think this is the container ID, but it's all part of the standard.

So this is the actual name of the file on disk, and then there's the directory as well. We've got the actual log message in there too; you can kind of see it. By default, tail will just take the whole line of data and construct a JSON key called "log". You can add parsers if you want to extract more details from it, which is quite useful for things like NGINX logs; for a lot of other logs there are JSON parsers and things like that, which give you the JSON fields from the raw line.
So that's the actual log message, and then you can kind of see that the kubernetes filter has added some extra Kubernetes information. You can see the pod name here, you can see the namespace it's in, you can see its ID, all of its labels; I don't think this one had any annotations, but it's everything the Kubernetes API gives us about it. So that's just a very simple example of what the output actually looks like.
Sending logs to different outputs: I've got an example here using Loki and S3, but it could be anything. Here we're saying send all the container logs to Loki, and then send them off to S3 as well. So it's just a simple way of doing it, and it can be quite useful in different situations.
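A sketch of that two-destination setup (the host, bucket and region are placeholders):

    [OUTPUT]
        # Container logs to an in-cluster Loki
        Name   loki
        Match  kube.*
        Host   loki.logging.svc.cluster.local
        Port   3100

    [OUTPUT]
        # The same stream archived off to S3
        Name    s3
        Match   kube.*
        bucket  my-log-archive
        region  us-east-1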
As I say, this is a very simple pipeline. You'd probably start doing things like using a grep filter to drop data, sending all the stuff to in-cluster destinations and then only some of it to out-of-cluster destinations, with some of the filtering and redaction; maybe add some context information, like the name of the cluster, if I'm sending it somewhere central.
I just want to show you that it's fairly simple: you just have to deal with matching and tagging. That's kind of what I want to show you here, if you want to add a new output, because it's quite a useful case. So we've got an existing output defined like this: we're sending all our logs to Loki, including the Kubernetes logs and then the systemd logs from the host. But say I come along and I say: oh, actually, I want to evaluate...
...what's the performance of OpenSearch when I ingest these logs? You know, do I get any benefits, stuff like that. Now, you don't have to do anything other than add a new output. All the existing pipeline stays the same; all the existing data is the same; it's going through the same pipeline. There's no additional work to do to configure that: you just say, send it to OpenSearch now as well.
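Adding the evaluation output really is just one more block; nothing else changes (the hosts and index are placeholders):

    # Existing output, untouched
    [OUTPUT]
        Name   loki
        Match  *
        Host   loki.logging.svc.cluster.local
        Port   3100

    # New output added alongside it: same tags, same data
    [OUTPUT]
        Name   opensearch
        Match  *
        Host   opensearch.example.com
        Port   9200
        Index  fluent-bit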
As I mentioned, you can also do tagging for specific outputs, so I'm showing here the use of rewrite_tag. But I also wanted to highlight a few bits that aren't just about logs: we do metrics, and we do traces as well. So here we're using the node exporter metrics input; that's equivalent to running the node exporter tool as a separate deployment on Kubernetes. We're exposing that on an endpoint that we can scrape, and there's also a Prometheus remote write output as well.
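A sketch of that metrics pipeline: the node-exporter-style input plus a scrapeable Prometheus endpoint (the interval and port are illustrative):

    [INPUT]
        Name             node_exporter_metrics
        Tag              node_metrics
        Scrape_Interval  2

    [OUTPUT]
        # Exposes the collected metrics for Prometheus to scrape
        Name   prometheus_exporter
        Match  node_metrics
        Host   0.0.0.0
        Port   2021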
With the rewrite_tag filter, you give it a rule saying: when you see data that looks like this, so maybe it has a specific field in it, or a specific field with a specific value, whatever, you can say: right, take this data, create me a new tag, and then my output will match that tag.

So this data will carry on through the rest of the pipeline to wherever it was going, but it'll also create a whole new stream that goes just to the SIEM, or wherever your rule sends it. That was just to show you something real. There are some good examples of this out in the world using Falco and a few others, where people have audit logs and security logs.
Right, touching on the final thing: processors, which are the future. It's a pretty generic name. I've linked out to the YAML config for it, but I wanted to touch on some of the performance aspects as well. All inputs and outputs, for a while now, can have dedicated threads provided for their processing.

When Fluent Bit started life, you know, it was single-threaded, embedded, that kind of thing, so everything ran on the main thread. But for a few releases now, you can provide dedicated threads for inputs and dedicated threads for outputs, which is quite good, because sometimes retrying an output, or doing stuff with an output, can be quite expensive for the rest of the pipeline, or for other outputs that are much quicker at handling their data.
The issue is that filters run on the main thread, so you just have to be careful with expensive ones. To resolve that, what was added recently (in 2.1, I think, but somebody can keep me honest on that) is a new concept of processors. These you can attach directly to inputs or outputs, and what that means is that any filter can be run...

...you know, for that input or output, and you can run a series of them, just as you would in the normal pipeline; but they're attached to the input or the output, and they use its threads and everything associated with it. So it's a good way to do some of the more heavyweight processing for dedicated inputs and outputs; regex parsing, for example, can be quite expensive unless you're on beefy hardware, so it's quite useful to do that somewhere dedicated. And it's only available with the YAML config.
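A sketch of the YAML-only processors syntax being described, attaching a filter-style step directly to an input so it runs on that input's thread (the grep rule is an illustrative assumption):

    pipeline:
      inputs:
        - name: tail
          path: /var/log/containers/*.log
          tag: kube.*
          processors:
            logs:
              # Any filter can run here, on the input's own thread
              - name: grep
                regex: log error
      outputs:
        - name: stdout
          match: '*'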
Here's an example that we use for benchmarking; I've linked it out. It's just writing into Grafana Cloud in this case, and I've actually shrunk it a little bit to fit on the page. For the benchmarking stuff we put together some VMs that run Prometheus colocated with the full stack on the VM, and we also send data to Grafana Cloud. So if you go and look at the example...

...it shows you how you can send data to two different destinations and stuff like that. I just wanted to show you an example of how you might actually do it for real. So here we're collecting all the node metrics and dealing with all the logs.

We collected the logs mostly to check for any issues, but also to exercise the system under load, because we were generating lots of logs and seeing how it copes, you know, at various sizes, those kinds of things. So I've linked out to that, and I think we're going to share the slides afterwards as well, so people should be able to have a look at it. Right, Austin, I think you were going to summarize this a little bit.
Austin: Great, yeah, thanks Pat, and thanks everybody for your questions throughout the presentation so far. If we didn't get to them, or if Pat didn't get to them yet, we'll get to them here in a little bit in the Q&A section. Right here we have a QR code to join our Fluent Bit and Fluentd Slack communities.

If you want 24/7 access to Pat, this is the best way to do it; like he said, he's on there all the time. So we'll leave this up for a second just so you guys can get in there. I'll also be throwing all the links that we're about to go through into the chat.
So if this isn't working for you guys, just go ahead and copy and paste from there. All right, so this is the community survey that I was speaking about earlier. It's super brief, and it does help us out quite a bit. We love getting feedback from the community; it helps us make an overall better product and provide more features to our Fluent Bit users and the Fluent open source ecosystems. So if you guys wouldn't mind taking a few moments to fill it out: it goes by super quickly; I took it this morning.

It takes maybe a couple of minutes. It's mainly multiple choice, not a ton of long text answers; we're not asking for an essay, it's not the ACT, it's not the SATs, we're not asking you to answer any GMAT questions. So just take a few minutes if you have the time, and that would be great. All right, jumping into the next two.
These are the next two webinars in the series. If you guys are interested in going a little bit deeper into processing, we have Fluent Bit Advanced Processing coming up for our next webinar, which is in two weeks; that is on August 10th.

Then we have Fluent Bit Operations and Best Practices. So if you guys are interested in that, or you're interested in both, sign up for both; that would be great. We'll leave this up for a couple more seconds here so you guys can get in and get registered, if it's of interest, and I'll be throwing those links in the chat as well. And then we'll hop into Q&A.
Pat: All right, should we tackle the questions in the chat? I guess I'm not sure I can do all of them, but I can certainly talk about all of them.
Austin: Yeah, definitely. I think we covered off on the benchmarking one, Pat, for Nikhil, but is there anything else you wanted to dive into on that one?
Pat: Yeah, I mean, the specific question is: are there benchmarking tools for Fluent Bit performance? It depends what you mean by that. If you want to evaluate Fluent Bit for your needs, I wouldn't say never trust a vendor, but use your actual data: generate it and run it against whatever you want to evaluate. If you're asking questions like "I've got this pipeline, how does it compare to this other pipeline?"...

...then that's probably a different question, and we've certainly got some tools around that, which we'll touch on. But for benchmarking I generally say: use industry-standard benchmarks, and generate data that matches your data, because a lot of it is about what your data looks like, your data rates, where you're sending it, and your infrastructure. So definitely do that.
Austin: Excellent. Next one: is there any real benefit to forwarding data to Fluentd before Loki?
Pat: For Fluentd specifically, it depends. Fluent Bit doesn't provide quite as many plugins as Fluentd does, so if there's something that Fluentd does that Fluent Bit doesn't, then obviously that's a good reason to do it. However, if the question is "is there any real benefit to forwarding data to a Fluent Bit or Fluentd aggregator before sending it on to something like Loki or Elasticsearch?", I think so.

There's quite a decent aggregator use case around this, and it's actually what we built our enterprise products on. But generally speaking: you run a Fluent Bit DaemonSet, or an agent on your node or on your host at the edge...
...and whatever you want to do, you don't want to slow that down. You want it to be, you know, adding maybe some context, doing whatever the bare minimum is to the data before it leaves your host; and then you can send it to an aggregator, where you can do more work, and you can scale that aggregator independently. Obviously that means you're not messing about with the DaemonSet; you know, if you start doing some very heavyweight processing there, maybe it's going to slow it down.

Or maybe you only want to do some aggregation on certain types of data. So that's the kind of use case where having an aggregator you can send to, which you can scale independently, manage independently, and configure independently as well, is quite useful. There are quite a few use cases where people are like...
...maybe the cluster admins deploy the log collection; you know, there's ops-level admin stuff to collect logs on the host, because it is quite a sensitive area, and then they want to send it on to the aggregator. And then maybe the data scientists, or whoever's responsible for analytics of the data, have more control there, so they can tweak things and mess about with things much more efficiently, and without the kind of security concerns of, like, appearing on the host and changing stuff there...

...which is what's sending the original data. So those are some of the reasons why you might want to do that. I think there definitely is a benefit, and my suggestion would be: add context, do redaction, whatever you need to do, on the host, and try to get the data off it as quickly as possible; then do any more heavyweight processing in an aggregation layer, where you deploy Fluent Bit or Fluentd or whatever to do that kind of stuff.
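A sketch of that edge-to-aggregator pattern using the forward protocol (the host and port are placeholders): the node-level agent ships data on with minimal processing, and the aggregator receives it and does the heavier work:

    # On the node-level agent (e.g. the DaemonSet)
    [OUTPUT]
        Name   forward
        Match  *
        Host   aggregator.logging.svc.cluster.local
        Port   24224

    # On the aggregator deployment
    [INPUT]
        Name    forward
        Listen  0.0.0.0
        Port    24224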
Austin: Thanks, Pat. So we are a little bit over time, but if you guys want to bear with us, we have a couple more questions in.
Pat: Yeah, the one about how to run multiple Fluent Bit instances: it's probably not really my area, and I think it probably depends on the specifics here. When I typically see these questions in Slack, it's normally for Kubernetes, and what's normally happened is that people have configured their compute resource limits in a particular way and Fluent Bit isn't using them; and the reason it's not using them is because it's running single-threaded.
So, generally speaking, when you see these kinds of things, it's like: why is Fluent Bit only using 50% of my compute limit, but still not processing things fast enough? And it tends to be: well, you've only told it to use one thread; tell it to use two threads, or whatever, and get it to scale that way.
So it can be a bit frustrating that it doesn't auto-scale the threads, but then it is intended to be very controlled, and, you know, it's up to you to say: yeah, you're using this much, go and use more threads.
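The knob being described: in classic config, outputs take a Workers count for dedicated threads (and newer releases let inputs be threaded as well); two workers here is just an example:

    [OUTPUT]
        Name     es
        Match    *
        Host     elasticsearch.example.com
        Port     9200
        Workers  2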
Typically, that's where I see it. If it's talking about, like, CPU affinity and sharing jobs across instances, you know, I can't answer that question; I don't know enough about that. But we can take that away and come back on it.
Austin: Great. Two more: can we have Kubernetes logs go to two different Elasticsearch outputs?
Pat: Yes, is the short answer. If we look at the... I think, in fact, that might be what the Helm chart does. Bear with me, let me have a look.
You can send to any number of outputs, so you can send the same logs to a dozen, whatever; I think 256 outputs might actually be an issue. But yeah, there's no problem with that.
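In config form, the short answer is simply two independent outputs matching the same tag (hosts and indexes are placeholders):

    [OUTPUT]
        Name   es
        Match  kube.*
        Host   es-primary.example.com
        Port   9200
        Index  kube-logs

    [OUTPUT]
        Name   es
        Match  kube.*
        Host   es-secondary.example.com
        Port   9200
        Index  kube-logs-copy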
And it's, you know, a completely separate output each time: one could be Elastic, one could be Loki, we could have seventeen Lokis; it's up to you. As I say, in the benchmarking stuff I did, we had one output to an in-cluster Loki and one output to a Grafana Cloud Loki, just because we wanted to make sure we didn't lose stuff, and also so we could mess about with loading and things like that.
So no, is the short answer: there's no problem. The only things I'd say: try to use threading, and try to use processors, for that kind of stuff, to offload any additional processing you're doing, because the main thread has to deal with all the events and everything that's going on.
So if you can take work off it, that's good. But there's also a caveat: if you're asking Fluent Bit to do more than is physically possible with your compute, or whatever, then obviously it can't cope; you know, as you add more work to do, you need more compute. So as long as what you've got available is okay, there shouldn't be any performance impact from adding extra outputs, and that's one of the benefits of the design as well.
Austin: We've got one last question, and then we'll wrap up here; I appreciate everyone sticking on a little bit later with us. Are there any use cases for Fluent Bit with GitOps?
Pat: Yeah, I mean, that's my big thing: what I like to do is automate everything with GitOps, so everything I do gets deployed via GitOps. I guess it's a very big question, that one, Phil, thanks for that, but yeah, there are certainly options for using it with GitOps. I've got some examples, I think, in the dev tools I put together for the community about how to deploy things and stuff like that, but yeah, there are definitely use cases.
Yeah, you can control the config, and what we provide for enterprise customers is very much focused on that as well. We've got a nice UI, but you can also do GitOps-style deploys, you know, via the CLI, and make changes so that, if you're doing them, they're included with the config.
Austin: Great. Well, that is all the time, and a little bit of extra time, that we have for today. So I want to take the time to thank everyone; we appreciate you being here, and we look forward to seeing you guys in the next two webinars in our summer series. And look for more coming from us with the summer series as well.