Description
Don’t miss out! Join us at our upcoming event: KubeCon + CloudNativeCon North America 2021 in Los Angeles, CA from October 12-15. Learn more at https://kubecon.io. The conference features presentations from developers and end users of Kubernetes, Prometheus, Envoy, and all of the other CNCF-hosted projects.
Graduated Project Lightning Talk: Fluent Bit - Eduardo Silva, Calyptia
Learn about the Fluent ecosystem and Fluent Bit best practices for cloud native environments.
Hello, my name is Eduardo Silva and, as part of this KubeCon, this is a Fluent lightning talk. We are going to introduce a little bit of the Fluent project, which is part of the Fluent ecosystem, but also talk a little bit about the roadmap and the exciting news that we have for this year. I'm going to share my screen right now, so you can see the presentation.
So, my name is Eduardo Silva. I'm the founder of a company called Calyptia, where we offer enterprise offerings on top of the Fluentd and Fluent Bit ecosystem, and I'm also the creator of the project called Fluent Bit. Fluentd and Fluent Bit are both CNCF graduated projects. Actually, in technical terms, Fluent Bit is a graduated sub-project under the umbrella of Fluentd. Both are under the Apache license, and both are part of the same huge ecosystem that you're planning to use, or already using.
Now, let's talk briefly about logging: how it operates and how it works, so you can get a clear vision of what to expect from a logging tool. Logging tries to solve the problem of how we collect messages from applications, services, and even hardware, and of how we process that information and centralize it in a place where we can deal with it or perform data analysis. Most of the log information comes in a text format, but nowadays the biggest problem is that, with microservices, the new cloud infrastructure, and the new ways to deploy applications, we have more data than we used to have before. And when you have more data, you have more challenges: how do I analyze this information? But to analyze it I need to centralize it, and before that, collect it when an application generates a message.
A message can also come in different formats, and usually there's no unified format for log messages. There have been some trends in the market trying to unify them, but this situation has existed for more than 30 years. What is really important for the scope of this presentation, and for the project itself, is what happens in any environment: it could be a distributed cluster environment with Kubernetes, or just a normal environment.
What is important is to understand that our data, most of the time, will come in a raw format, without structure and without a schema, but from an application perspective, as a log processor, we need to be able to handle those messages. When applications start generating messages, most of the time all of those messages go to the file system. Sometimes you can ship them over the network, but normally, in production, you just write to the file system and let the log processor collect all of those messages. And the whole goal of this is not just to deal with the messages: as I said at the beginning, the goal is data analysis, and data analysis happens after you have been able to collect all the information and centralize it into a database or some kind of storage or cloud provider, like Amazon S3 or Elasticsearch.
There are many options in the market, so the whole purpose of logging is to be able to connect point A to point B, but also, in the middle, to be able to do some kind of data processing. It's not just read and write: you read the data, and you want to perform some modifications on that data.
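As a minimal sketch of that "point A to point B, with processing in the middle" idea (the paths, tag, and Elasticsearch host below are hypothetical; the sections follow Fluent Bit's classic configuration format), a pipeline might look like:

```ini
# Point A: read raw log lines that applications write to the file system
[INPUT]
    Name   tail
    Path   /var/log/app/*.log
    Tag    app.*

# The middle: data processing, here keeping only records that mention "error"
[FILTER]
    Name   grep
    Match  app.*
    Regex  log error

# Point B: centralize into a storage backend, e.g. Elasticsearch
[OUTPUT]
    Name   es
    Match  app.*
    Host   elasticsearch.internal
    Port   9200
```

Swapping the output section for another plugin (for example `s3` or `stdout`) changes the destination without touching collection or processing, which is exactly the decoupling being described.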
We also have some insight that there are many VMs and many bare-metal servers running Fluentd and Fluent Bit, although of course we don't have metrics from that side. But if you are attending this session, you can be pretty confident that you are making the right choice. Now, as part of the Fluent projects, one of the important things is the roadmap.
It's not just about solving the problems that we have now, but asking what other problems are being reported by the community, and those problems are not just technically tied to Fluent Bit, but also to market trends or different needs that we have in the infrastructure. So we're happy to announce that this year we started a journey where Fluent Bit will have native metrics support. What does this mean?
Usually you used to have your own agents for logs and separate agents for metrics, but from a community perspective, from the Fluent ecosystem, we got asked many times, maybe more than 10 or 15 times in talks: why don't you implement metrics support? Why don't you unify these needs in a single agent, Fluent Bit, which is pretty lightweight, so we can get rid of some of the extra agents that we have? It's not that the other agents are bad; actually, they're really good.
But since we are using the Fluent ecosystem, we would like to have a more unified experience, and when we talk about metrics there are many corner cases. The first one is that some applications ship log messages (debug, error, warning), but sometimes they also ship metrics as well.
For example, they just ship a JSON map with a bunch of key-values that say: hey, this is a counter, this is a gauge, or an aggregation of a different value. If we handle that information as logging, maybe we are losing the opportunity to do more processing of that data and do smarter things.
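To make this corner case concrete, here is a hypothetical log record (the field names are invented for illustration) that is structurally a log line but semantically a set of metrics:

```json
{"service": "checkout", "level": "info", "requests_total": 1042, "queue_depth": 17}
```

Treated as a log, `requests_total` and `queue_depth` are just opaque fields; treated as metrics (a counter and a gauge), they can be aggregated, graphed, and alerted on.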
There's also another corner case: since Fluent Bit is running on every single node of our cluster, why don't you also collect the host metrics (disk, swap, CPU, memory and so on)?
Also, we have applications that ship metrics natively, so if Fluent Bit is already there, why can't we handle those too? And if we talk about what is being used in the market right now, I would say that eighty or ninety percent of the industry within the cloud native space is aligned nowadays with Prometheus and OpenMetrics. So, as the Fluentd project and Fluent Bit maintainers, we decided that we are going to go that way.
We are going to support native metrics in Fluent Bit and work towards integration with the Prometheus ecosystem, which is built on top of OpenMetrics. As an aside, we know that OpenTelemetry, which is a sandbox project in the CNCF, is being developed to try to solve all this complexity of logs, traces, and metrics, and right now that project is growing. But we aim to integrate with Prometheus first, and once OpenTelemetry gets to a higher maturity level, we will integrate with it too.
We're going to integrate from a protocol perspective: Fluent Bit, for us, is agnostic software, and we need to communicate with all vendors and all protocols possible. So, if I can share here pretty quickly (I'm just going to share my screen), we have a PoC of how we are building our own node exporter inside Fluent Bit.
Actually, it's a stripped-down version of the one that you find in Prometheus, but we changed it to solve the needs of our own users. I'm just going to run it from the command line: the input will be the node exporter, and we're going to send the data to standard output. As you can see, we are able to collect a bunch of information from the host.
This is actually my personal laptop, where we're collecting data such as the CPU scaling frequency and memory information: from the total bytes you can see that I have 16 gigabytes on this laptop, plus the available memory and so on.
What we did here is replicate the node exporter: collect all the metrics from the system natively, but implement a full layer where we can translate those metrics into the different payloads expected by the users, by you. For example, you would expect to have this information in Prometheus format, and we are doing that at the moment.
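A configuration sketch of what this demo shows, assuming Fluent Bit 1.8 or later on Linux (the tag and port are arbitrary choices): the `node_exporter_metrics` input collects host metrics, one output prints them to standard output as in the demo, and a second output exposes the same metrics in Prometheus text format:

```ini
# Collect host metrics (CPU, memory, disk, ...) every two seconds
[INPUT]
    Name            node_exporter_metrics
    Tag             node_metrics
    Scrape_interval 2

# Print the collected metrics to standard output, as in the demo
[OUTPUT]
    Name   stdout
    Match  node_metrics

# Also expose them in Prometheus text format, e.g. at http://127.0.0.1:2021/metrics
[OUTPUT]
    Name   prometheus_exporter
    Match  node_metrics
    Host   0.0.0.0
    Port   2021
```

The quick command-line equivalent of the stdout part is `fluent-bit -i node_exporter_metrics -o stdout`.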
The other piece of news is WebAssembly. You can write any kind of function, originally in any language, and make it run in the browser; but right now WebAssembly engines are also being brought to backend services.
One of the problems that we have in Fluent Bit is that, from a community perspective, people said: hey, I have struggled to write C. Writing C sometimes is not that easy; you need a bunch of best practices and experience, because with manually managed memory, you know, you can make mistakes.
I would say that this is one of the huge things coming up this year, and we are really, really excited, because we see that a project that started as a small forwarder is now doing stream processing, even distributed stream processing (actually, we have SQL on it), and we are also going to give more and better flexibility to our developers and users.
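The SQL mentioned here is Fluent Bit's built-in stream processor. A small sketch (the stream name, tag pattern, and window size are hypothetical) that turns a stream of log records into a five-second count:

```sql
-- Count records matching tag app.* in five-second tumbling windows
CREATE STREAM app_counts WITH (tag='app_counts')
    AS SELECT COUNT(*) AS total
       FROM TAG:'app.*'
       WINDOW TUMBLING (5 SECOND);
```

The results re-enter the pipeline under the `app_counts` tag, so they can be routed to any output like ordinary records.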
Okay, thank you so much for watching this presentation. I know this was just a few minutes on how Fluent Bit works and what the next things coming up are, so please stay tuned: we have a lot of news. During the conference we're going to see more news about the status of metrics and the status of WebAssembly.