Description
Please join us for Fluent Talks, our weekly webinar and office hours, Fridays at 2 PM Central, streaming live on YouTube.
This week we'll talk about Fluent Bit + Kafka and OpenTelemetry.
#kafka #opentelemetry
A: Let me open up my video here. You should be able to see me now. Great. Okay, so we have two pretty fun discussion topics: one is Fluent and Kafka, and the second is Fluent and OpenTelemetry.
A: I think the first one is something that we've had connectors for for a long time. For over 10 years, people have been using Fluent for sending data to Kafka.
A: I don't know when Kafka was created, but we've had a plug-in for a very long time, and I remember a lot of the use cases we used to see in the beginning, before Kafka Connect, were using Fluentd to connect a ton of data sources to Kafka. One of the benefits here is that as we add Fluentd and Fluent Bit connectors, it makes it super simple to go ahead and collect that data without having to write a lot of consumer code or even manage those consumers.
A: So it's still quite popular. Especially if folks are looking to bring observability data into Kafka, this is a great way to go about doing that. Some of the other places we've seen Kafka really heavily utilized is when users are looking to have a lot of resiliency in their pipeline: they can't have any data loss. There's a ton of folks that will start to use Fluentd and Fluent Bit for that. So just to showcase two things here: one, we just launched our fluentbit.io website, and in this picture we showcase a bunch of sources and destinations, Kafka of course being one of those. What I thought I'd show today is how we send data to Kafka; then we have some pretty cool stuff coming with Fluent Bit 1.9 around Kafka, and I have Thiago, who's joined us from Calyptia, who can give us a share of what's going on. Before I share anything, any other thoughts on Kafka?
B: Just to say that Kafka support has been something that the community has been asking for, for pretty much a couple of years if I'm not wrong, and I'm pretty happy to see this happening. The Kafka output has been there for some time; actually, one of the biggest contributors to our Kafka connector on the output side has been LinkedIn, the company.
A: I spun up a Confluent Cloud cluster, so just a cloud-based cluster. The nice thing about Fluent Bit is we support all the SASL and Kerberos type auth that you might need, and especially for cloud-based Kafka solutions you can leverage those too. I personally haven't used things like MSK, but I thought I'd give Confluent Cloud a try.
A
So
here
what
we're
going
to
do
is
I'm
just
going
to
run
a
container
of
fluent
bit
1.8.12
and
I'm
going
to
use
that
same
exact
configuration
that
we
saw
earlier
so
going
to
go
ahead
and
start
it
and
looks
pretty
boring.
So
let's
go
ahead
and
switch
over
to
confluent
cloud.
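The on-screen configuration isn't legible in the recording; here is a minimal sketch of what a Fluent Bit to Confluent Cloud output section might look like. The bootstrap address, topic name, and credentials are placeholders, not the presenter's actual config:

```ini
[INPUT]
    Name    dummy
    Tag     demo

[OUTPUT]
    Name     kafka
    Match    *
    # hypothetical Confluent Cloud bootstrap server and topic
    Brokers  pkc-xxxxx.us-east-1.aws.confluent.cloud:9092
    Topics   fluent-bit-demo
    # librdkafka options pass through with the rdkafka. prefix;
    # this is how SASL auth for a cloud cluster is configured
    rdkafka.security.protocol  SASL_SSL
    rdkafka.sasl.mechanism     PLAIN
    rdkafka.sasl.username      <API_KEY>
    rdkafka.sasl.password      <API_SECRET>
```

The container run in the demo would mount a file like this as `/fluent-bit/etc/fluent-bit.conf`.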
A: These are all the things I could add as part of reading from Kafka. So, yeah.
A: It's a pretty simple use case, something you can get set up within one to two minutes, and if you are going to be running this at large scale across multiple containers or Kubernetes, there's a ton of good things there. I think another part that's really important with Fluent Bit, especially when you're sending things to Kafka: sometimes we want to have everything in one place.
A: I want to transform the data so that when it's in Kafka, I can then route it to my particular observability backend with Kafka Connect and have all the data in one place. That's really useful, and there are arguments of, hey, should I do all this in Kafka, or should I do this on the Fluent Bit side? Whatever is really simple for you; but I'll give you some of the trade-offs when you do it centralized.
A: Centralized, you might be spending a lot of time doing some of these transformations; but when you do it on the Fluent Bit side, you can do it distributed across the thousands of nodes you might have, or the multiple container endpoints you might have. When you do that, you're able to quickly do the parsing and the transformation, and you're spreading that compute out right where the data is generated. That can be a pretty nice advantage of doing some of these types of transformations at the collector, sorry, at the Fluent Bit side, versus having to do it in the Kafka layer. So this is, again, a really simple way to send data in. You saw my lines of config; I think I have my API keys in there, so those are going to be quickly revoked so no one else can send me Fluent Bit data. But yeah.
A: That's a little bit about how we produce data and send it to Kafka, using Confluent Cloud just to show it in a bit of a visual way. If you're not using Confluent, it doesn't matter; we support all types of Kafkas, I just chose this one because it was pretty simple to set up. I'll actually stop sharing for a little bit. Thiago on our team has been building a lot of the stuff to read from Kafka, and can showcase that. Hi.
C: Yeah, let me share my screen. Okay, so what we have here is the Kafka configuration, the Fluent Bit configuration for Kafka: input, output, and a filter. This is a very simple Lua script that's going to be called for every Kafka event.
C
Record
is
basically
adding
another
property
processed
by
fluid
bits
and
total
records,
and
here
on,
the
top
left
is
the
is
just
a
shell
script.
That's
using
a
kafka
console
producer,
it's
just
a
command
line,
application
that
you
can
type
the
data
and
send
to
kafka.
It's
a
script.
That's
provided
with
kafka
distribution.
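The Lua filter being described follows Fluent Bit's standard Lua callback contract (return code, timestamp, record). The exact field and function names weren't readable on screen, so the ones below are illustrative:

```lua
-- running count of records seen by this filter instance (illustrative)
local total = 0

-- Fluent Bit calls this once per record; returning 1 means the record
-- was modified and should be re-emitted with the returned table
function append_fields(tag, timestamp, record)
    total = total + 1
    record["processed_by"] = "fluent-bit"
    record["total_records"] = total
    return 1, timestamp, record
end
```

It would be wired in with a `[FILTER]` section using `Name lua`, a `script` pointing at the file, and `call append_fields`.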
C: When I start it, what it's going to do is create all the necessary containers. Here it's going quickly because I've already created them, so it's just starting up, and then it's going to attach to the logs of the Kafka consumer.
C: It's attaching to the Fluent Bit logs, which is what you see now after it started, and then the Kafka consumer is showing the objects that are processed by Fluent Bit. As you can see, the objects being created by the shell script are receiving the property that was added by the Lua filter, and they're being sent via out_kafka. This Kafka consumer script is showing the events as they're added, now with the timestamps set by Fluent Bit.
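The producer and consumer scripts mentioned ship with the Kafka distribution. The broker address and topic names here are assumptions based on the demo (reading one topic, writing to a "-modified" topic):

```shell
# type lines into stdin; each line becomes a record on the input topic
kafka-console-producer.sh --bootstrap-server localhost:9092 \
    --topic demo

# in another terminal, watch what Fluent Bit wrote back after the Lua filter
kafka-console-consumer.sh --bootstrap-server localhost:9092 \
    --topic demo-modified --from-beginning
```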
A: So I'm going to read from one topic and write to this other topic, topic-modified. I found that kind of interesting. I think there's a ton of use cases out there where you can leverage Kafka and Fluent together; we're almost like an unofficial agent, so you can use us to send data to Kafka.
A
This
is
going
to
be
pretty
good
for
a
lot
of
the
folks
out
there,
and
you
know,
I
think,
there's
other
services
out
there
that
use
the
kafka
api
that
are
not
that
have
a
whole
different
code
base,
and
you
know
we
haven't
tested
out
with
with
some
of
those.
But
you
know
some
high
hopes
that
everything
just
works,
as
is
because
we
just
use
arnie
kafka,
which
is
a
pretty
standard
standard.
Client.
A
I
think
that's
been
a
pretty
great
great
use
of
using
an
external
library
because
there's
so
many
settings
right,
there's,
there's
sassel
settings
security,
settings
buffer
settings,
timeout
settings.
So
I
think
that's
that's
one
of
the
advantages
of
leveraging
that
so
awesome.
A
Unless
I
think
kind
of
most
of
what
we
have
for
for
kafka.
There
kind
of
we
could
jump
into
some
of
the
some
of
the
open,
telemetry
stuff.
I
guess
eduardo
open,
telemetry,
pretty
big,
big
topic.
Anything
you
want
to
just
touch
on
before
we
dive
into
some
of
the
integration
points.
B: Sure, definitely. I don't know if I can turn on the video here. Oh, how can I share my video?

A: I think... yeah.
B: Let me test here. Okay, so we have been working a lot over the last year. We started this whole journey from our community asking how we can take Fluent and do not just logs with it, but also metrics. For us as maintainers of the project, and I'm not talking just from the company side but as a project, we said, hey, we want to bring the best experience for the users, and sometimes the best experience is not just doing the best within this one ecosystem.
B
It's
also
about
with
integrations
or
extend
the
scope
for
the
problems
that
we
have
the
scope
of
the
problems
that
we
have
on
this
year
and
not
descendant
two
three
years
ago,
with
explosion
of
containers,
kubernetes
distributed
systems
and
the
kind
of
challenges
that
we
have
are
quite
a
quite
high
right
now,
right
so
talking
with
the
maintainers
having
discussions
like
and
our
community
is
saying,
hey
well,
we
cannot
collect
matrix
with
fluid
yeah.
You
can,
but
there
are
metrics
like
locks.
B
B
B
Cannot
hide
it
yeah,
so
a
at
a
at
kalitia,
so
we
created
this
project
called
symmetrics,
so
our
our
journey
started
how
we
can
make
sure
that
flu,
embed
and
fluidity
can
inherit
a
metric
support
right.
So
for
us,
when
we
handle
logs
logs
are
just
a
structured
messages,
but
in
a
schema-less
fashion,
schema
less
means
that
we
don't
follow
a
strict
schema
right.
Your
law
can
have
a
mini
keys,
different
values,
nested
keys
and
that's
fine.
But
when
you
talk
about
matrix,
it's
different
right
because
measure
produces
well
define
it.
They
have.
B
You
have
a
value,
you
have
a
type
for
the
metrics.
Maybe
you
have
some
description
and
we
say
hey
how
we
can
integrate
with
all
the
metrics
ecosystem,
so
we
created
a
small
plc
called
a
symmetrics
which
was
like
how
we
can
implement
metrics
in
c
right.
So
we
say:
okay,
who
are
the
best
at
this
point.
One
year
ago
it's
like
who's
doing
metric
stuff.
Okay,
prometheus
is
a
king.
Okay
and
permitted
has
sdks
for
gold
language
for
multiple
languages
and
we
define
it.
B
We
decided
okay,
let's
pretty
much,
try
to
remain,
there's
the
way
that
they
create
counters
gauges,
but
in
c
so
we
can
consume
that
from
a
fluid
perspective
and
we
created
spray
called
a
symmetrics
that,
as
of
today,
has
a
10
contributors-
and
this
is
already
on
fluent
with
a
upstream
1.8
that
allows
us
to
create
context
of
matrix,
but
also
not
just
a
just
a
a
simple
representation
of
that
metric.
B
That
also,
for
example,
if
you
take
a
look
at
the
code,
we
can
take
this
a
matrix
content
called
symmetrics
and
encode
it
back
to
message
pack
and
code
it
to
open
the
element
to
triple
meteos
to
text.
So
any
other
output,
plugin
instafluentbit
can
take
advantage
of
the
system
where
we
take
a
binary
representation
of
structure
and
generate
any
kind
of
payload.
Subsymmetric
is
not
about
a
transport
it's
about
a
payload.
B
10
days
ago,
right
and
a
when
we
talk
about
all
this
all
this
journey
of
bringing
the
system
together
like
metrics
logs,
you
start
thinking,
oh
hey,
how
do
we
provide
the
best
experience
possible
for
the
user
and
when
you
go
to
the
enterprise,
when
you
go
to
companies-
and
I
will
be
lying,
if
I
tell
you
yeah,
everybody
just
use
one
stack,
it's
not
like
that
people
has,
for
example,
if
you
talk
about
databases,
you
have
positive
sql,
you
have
environments
within
my
sequel,
marietv
or
radius
for
caching,
so
they
are
not
kind
of
homogeneous.
B
Always
they
have.
A
lot
of
you
know
different
and
similar
tools
for
different
use
cases
right
and
they
tried
to
adapt
as
much
as
they
could
so
in
for
matrix.
We
always
say
we,
our
mindset
was,
we
are
not
trying
to
replace
prometheus.
We
don't
need
to
replace
open
telemetry
right.
We
want
to
bring
the
same
value
of
mental
neutrality
of
platform,
a
with
a
neutral
platform
for
everybody
who
wants
to
solve
the
logs
problem
and
everybody
who
now
wants
to
solve
the
matrix
problem.
So
now
we
have
the
prometheus
ecosystem.
B
We
have
the
open,
telemetry
ecosystem
with
different
instrumentation
mechanisms
and
it's
a
life.
We
say
that
now
everything
will
switch
to
x
technology
tomorrow.
That
won't
happen
right
in
our
experience,
agents
with
fluent
beta
fluency
and
companies
to
switch
into
average
version
or
to
switch
from
one
thing
to
the
other,
takes
about
one
year
two
years
right
so,
but
how
you
can
get
into
that
environment
and
provide
the
value,
provide
the
value
solve
the
problem
in
a
kind
of
a
with
a
micro
surgery,
not
a
major
surgery
right.
B
So
I
would
say
that
a
fluent
influence
becomes
like,
like
a
snide
for,
looks
and
metrics
now,
and
the
one
of
the
goals
of
this
now
is
to
talk
about
a
what
is
our
integration
with
open
telemetry?
It's
one
of
our
initial
entry
points.
As
you
know,
I
just
said
that
when
bit
supports
native
metrics,
we
can
receive
a
prometus,
payloads
and
export
to
prometheus.
B
Payloads,
like
prometheus,
takes
a
prometheus,
remote
right
and,
if
you're,
using
open
telemetry,
for
example,
you
want
to
take
fluent
bed
and
shift
the
metrics
that
fluid
is
scraping
or
internal
matrix
and
send
it
back
to
open
an
open,
telemetry
collector.
You
can
do
it
now
well
with
the
1.9
version
right.
This
is
our
major
release
that
is
coming
out
today.
We
have
just
released
rc1
for
1.9,
and
1.9
should
be
available
the
first
week
of
march,
so
I'm
going
to
steal
the
screen
again
and
show
you
a
bit
of
this
integration
that
we
have.
B
I
was
quite
of
preparing
the
demo
here,
but
can
you
read
the
text
so
it's
too
small
and
right
now
it's
a
little
small.
If
you
don't
mind
increasing
it
there
you
go
better.
B
My
setup
is
a
bit
messed
up
today,
okay,
so,
for
example,
on
my
on
my
right
thing,
I'm
going
to
I'm
having
here
a
config
file
which
is
for
the
open,
telemetry
collector,
actually
yeah.
The
order
is
a
bit
messy
here,
but
we
have
pretty
much
a
service
right.
That
is,
it's
loading,
a
receiver
which
is
otlp
protocol
and
one
exporters
right
we're
receiving
the
data
through
otlp,
and
then
we're
going
to
send
this
data
out
to
to
file
right.
B
In
addition,
we're
going
to
write
the
data
to
a
file,
so
the
author
collector,
what
it's
going
to
do
is
to
receive
a
matrix
from
fluent
date.
Well,
it
does
know
that
it's
fluent
right,
you
just
receive
a
tlp
protocol
and
just
dump
this
information
or
matrix
every
five
seconds
to
not
to
promote
us
but
to
the
file
system
here
and
we're
going
to
listen
for
these
connections
on
on
this
port
911,
okay,
so
the
first
step
here,
oh,
I
think
that
I
lost
my
command.
B
I
was
trying
to
I
just
disconnected
stuff
here.
I
was
just
trying
to
I
said
to
run
this
manually,
but
let
me
exit
tmux
here,
because
my
super
command
that
is
here,
let
me
try
to
see
if
I
can
dump
it
to
a
make
file.
I
always
do
this
can
use.
This
is
perfect
preference
right.
You
know
slack
okay,
so
when
I
do
make
it
should
run
the
collector
yeah,
let's
go
okay.
B
So
let's
do
some
cleanup
here.
I
have
a
directory
called
out
where
I
supposed
to
write
a
you
know
the
metrics,
but
I
will
make
sure
that
my
mid
file,
so
you
know
that
I'm
not
lying
it's
going
to
truncate
that
file
before
we
do
anything
else
and
yeah.
So
again
we
have
this
config
file
right
we're
going
to
receive
the
data
from
or
in
our
otp
endpoint,
and
we're
just
going
to
dub
the
information
to
the
file
system.
What
my
make
file
is
doing.
B
Actually
my
docker
container,
it's
pretty
much
a
mounting
the
volumes
from
where
I'm
going
to
write
the
data
right
just
expose
the
com,
the
config
file
and
also
expose
the
tcp
port.
Where
fluid
bit
will
connect
right.
If
you
want,
we
can
put
a
please
just
write
us
on
the
youtube,
live
stream
or
on
the
slack
chat,
and
we
can
put
all
this
information
to
get
in
the
wreath,
so
you
can
consume
it
in
a
in
a
better
way.
B
B
I
think
it's
a
brand
new
fluent
bit
from
the
command
line
yeah,
but
this
is
not
the
right
path.
It
should
be
through
embed
build
bin
through
a
bit.
This
is
what
we're
going
to
do.
We're
going
to
take
the
binary
right,
I'm
not
going
using
a
config
file,
I'm
just
going
to
write
flip
bit
from
the
command
line.
You
can
do
the
same
thing
you're
going
to
flip
this
right.
For
that
place,
I'm
going
to
start
a
the
plugin,
which
is
the
input
plugin,
which
is
called
not
exported
matrix.
B
What
not
exported
metric
does
it
makes
the
prometheus
a,
not
exporter
a
project,
pretty
much.
We
provide
a
subset
of
the
same
metrics.
Actually,
what
we
did
was
a
copy
paste
so
for
our
users
that
were
using
a
prometus,
no
exporter
and
they
also
were
using
fluent
bit.
They
wanted
to
have
a
more
unified
experience,
so
they
just
tell
us
hey,
please.
We
need
a
plugin
like
this
that
collect
the
same
metric
with
the
same
labels,
dimensions
and
descriptions
right
after
that,
the
input
the
output
will
be
open.
B
Telemetry,
I'm
going
to
move
this
a
little
bit
to
to
the
right
open,
telemetry,
I'm
going
to
connect
to
the
host,
which
is
localhost,
because
I'm
running
this
locally
and
to
the
port
that
I
just
exposed
on
the
containers,
and
I
have
to
provide
the
the
address
from
where
I'm
going
to
ingest
the
matrix
right
yeah.
We
might
set
this
by
default,
but
it's
how
it's
working
now.
B: We got this out directory. Since it's owned by root, to make it easier I'm going to switch to root, and I can read this file, which is a JSON file generated by the OTel collector that tells me about all the metrics that the node exporter in Fluent Bit on localhost is collecting. So pretty much what we're doing here is connecting the Prometheus world with OpenTelemetry, in a Fluent Bit way.
B
So
the
goal
here
is
that
we
support
kind
of
the
same
metrics
labels
dimensions
and
we
are
able
to
interoperate
between
the
two
words
right.
So
no
matters
if
your
company
is
moving
to
open,
telemetry
or
switching
to
prometheus
right
from
fluent
bait,
you
can
get
the
same
experience
and
I
would
say
that
we
lowered
the
friction
for
adoption
on
all
these
new
technologies
now
a
where
what
else
we
can
do
here
right.
B
So
actually
you
saw
that
I'm
going
to
stop
fluent
right,
as
you
saw
that
in
the
command
line,
I
was
taking
note
exporting
metrics
and
sending
them
to
open
telemetry
right.
But
what
about
I'm
going
to
send
now?
The
metric
to
two
places
open
telemetry
and
throughout
prometheus
exporter
right
I'm
going
to
spread
the
exporter
which
will
run
slowly.
B: Yeah, so it's easier to visualize. Maybe we could use YAML, but to save time let's do it this way. So here we are going to write fluentbit_metrics... oh, that one is the internal metrics input for Fluent Bit; we're going to use node exporter metrics. One spoiler: if you're on Windows, we will now have a Windows exporter available.
B: I'm going to make sure that everything the pipeline generates gets sent to OpenTelemetry: host, port, and the URI, which will be /v1/metrics if I'm not wrong. But also, we are going to expose these metrics using the prometheus_exporter output, so we ship the metrics to one place while anyone can scrape them with an HTTP client from the other side.
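Fanning the same metric stream out to two destinations is just two `-o` outputs matching the same tag. A sketch with an assumed OTLP port (2021 is the prometheus_exporter default mentioned a moment later):

```shell
fluent-bit \
    -i node_exporter_metrics  -t node_metrics \
    -o opentelemetry          -m node_metrics -p host=localhost -p port=4318 \
    -o prometheus_exporter    -m node_metrics -p host=0.0.0.0   -p port=2021
```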
B: Oh yeah, let's show that in a minute; here in the Prometheus part, I think.
B: Okay, so the data should be flowing here. If we do something like this, you see that every few seconds the data refreshes in this file; it's just dumping the OpenTelemetry data as JSON. Let me close here so we get more space in the window. Okay, and now I can connect to the Fluent Bit prometheus_exporter endpoint, which is on port 2021 at /metrics, and pretty much...
A: What does the performance look like on Fluent Bit right now? It should be pretty minimal, right? We're doing two streams of metrics... oh, I'm asking a lot. I don't know, let's see.
B: So the thing is, I don't know if you remember that we talked about CMetrics: everything inside Fluent Bit is CMetrics. Once you have CMetrics, you can customize labels and encode them to the different metrics formats. The add_label option takes advantage of this and allows us, when a plugin is going to generate a payload for a certain output, to attach labels. Sometimes you need this because you want to enrich your metrics.
B
We
are
doing
this
now
we're
going
to
do
it
in
the
output
site,
but
you
can
do
it
in
in
our
in
other
places.
So,
for
example,
add
label
company
equal.
I
think
this
is
the
fourth
one
yeah
for
you.
Forgive
me
if
I'm
wrong,
but
it
should
be
like
that,
so
the
premier
supporters
should
be
on
that
there
you
go
oh
at
least
expect
a
key
and
a
value
yeah
we're
no
longer
using
this.
It's
like
this
yeah
we're
going
to
fix
that
in
the
yemen
version,
no
worries.
B
Okay
and
if
you
go
to
curl-
and
we
query
now
the
prometheus
information,
you
will
see
that
most
of
the
labels
contains
now
sorry
most
of
the
matrix
contains
a
label
called
company
equal
scality,
and
this
is
quite
powerful
because
sometimes
before
that
before
you
ship
the
information,
you
want
to
say,
hey.
No,
this
is
a
hostname
or
you
want
to
add
some
custom
label,
because
when
you
do
matrix
aviation
you
wanted
to,
I
don't
know,
do
a
query
based
on
these
labels
right
and
this
same
thing
that
we
just
did
here
applies
to
the.
B: There you go. I'm going to restart Fluent Bit, and now the OTel metrics should have that label too. Remember that the metrics from the opentelemetry output are being delivered to a file, so you can pretty much grep the OTel metrics file for it. There you go: you can see the labels right inside your OTel metrics. This opens a lot of possibilities, because in Fluent Bit you'll be able to not just add labels, but do more metrics processing as well.
B
Now
I
know
we're
going
to
get
a
bunch
of
questions
where
we
go
in.
What's
fluent
integration
with
hotel,
so
for
us
we
see
that
this
is
a
just
one
ecosystem.
We
see
that
most
of
this
stuff
is
getting
around
a
around
fluid
right.
We
have
a
operating
systems.
We
have
a
also
you
know,
frameworks
like
open
telemetry,
and
it's
really
important
that
an
agent
or
any
kind
of
aggregator
could
be
deal
with
those
these
different
sources
of
information
that
could
be
logs.
It
could
be
metrics.
B
Actually,
I
wanted
to
show
you
something
because
we
are
working
on
a
new
logo
that
pretty
much
represents
what
we
are
doing
here.
B
Let's
just
give
me
one
second,
because
I
cannot
find
it
png.
Oh,
it
should
be
on
dollars.
B: I can share the image here. We're trying to come up with this, so that when you look at the project, and this also comes from a company perspective, you see that we can connect different worlds: OpenTelemetry, OpenMetrics, Prometheus, Kubernetes, different operating systems. This is really interesting because it opens a lot of possibilities, and it continues the same direction of allowing users to deploy in a vendor-neutral way.
A: I think it's fun to do. I mean, obviously we'll have to do some logs-to-metrics in one of these Fluent Talks in the future.
A: Definitely. Okay! Well, with that, I think we can probably close out for today, with some nice future topics coming next week. Maybe we can do logs-to-metrics next week; that could be a pretty fun time for folks. So with that, we can go ahead and end for today. Thanks so much to everyone for joining, and of course you can reach us on the Fluent Slack channel or at Calyptia Inc.
B: Oh, and as always, we are hiring: JS developers, front-end developers, C developers. So if you're watching this and you want to work on these interesting challenges, let us know. Calyptia is a fully remote, distributed company, and we are open to hiring everywhere: the right people with the right skills. So feel free to ping us; we are pretty open about diversity.