From YouTube: Fluent Bit 1.9 - First Mile Observability
Hello everybody, welcome to the CNCF webinar about Fluent Bit and the upcoming release of Fluent Bit 1.9. My name is Eduardo Silva and I'm one of the creators and maintainers of this project called Fluent Bit. I'm also a founder and CEO of a company called Calyptia, and I have been a CNCF maintainer engaged with the community for a long time.
Before getting started with the news about Fluent Bit 1.9, I'm sure that many of you are new to Fluent Bit. Some of you may come from the Fluentd ecosystem, or maybe you're just learning about logging and metrics. So I will take a few minutes and a few slides to share some concepts about Fluent Bit and why it is important.
A
A
One
of
the
problems
that
flynn
bit
I'm
sold
pretty
much
like
fluentd.
We
are
from
the
same
family,
it's
like.
Sometimes
you
have
many
data
that
comes
from
different
sources,
and
you
want
to
do
some
data
analysis
right.
But
when
you're
talking
about
these
two
video
systems
and
we
have
a
applications
of
environmental
servers
or
you
have
applications
and
in
kubernetes
right,
how
do
you
extract
all
this
information
in
a
smooth
way?
So
you
can
perform
your
data
analysis
later
right
and
when
you
do
a
data
analysis
right
pretty
much.
A
This
is
really
important
right,
because
one
of
the
challenges,
sometimes
as
a
developer,
is
like
build
application.
Then
the
next
stage
is
deploy
the
application.
But
after
deploying
this
application,
you
need
to
monitor
this
application
and
there
are
many
areas
to
to
monitor
an
application
or
host
software
instrument.
Application
flunkbed,
as
of
today,
cares
about
two
spaces
of
monitoring
of
or
observability.
A
One
of
them
is
logs,
which
pretty
much
a
text
information
that
the
applications,
a
writes
out
the
standard
output
interface
or
to
a
log
file
or
syslog,
but
also
collect
metrics
from
pretty
much
any
endpoint
that
this
application
can
expose
fluent
is
has
been
getting
a
lot
of
traction
in
primarily,
he
would
say
because
of
his
performance
and
the
low
system,
resources
usage
right.
It's
not
the
same,
something
that
a
tool
that
is
processing
thousands
of
messages
per
second
consuming,
two,
three
gigabytes
of
memory
in
your
system.
Now, doing a deeper dive into the project: Fluent Bit is kind of an engine that supports different inputs, sources of information that could be logs or metrics. It takes them, packages them into an internal representation, and generates internal events. Those internal events can then, optionally, go through a filter chain where the data can be enriched or modified. One of the use cases, for example: say you're running in AWS and you're processing logs from some application.
You might want to add some important information: not just the application log itself, but also where the application is running, the hostname associated with it, the instance ID, the availability zone where it was deployed. All of those kinds of modifications can be done in the filter phase. After that in the pipeline we have the concept of buffering: as soon as we get data, we filter it.
Then we store it temporarily, either in memory or on disk, because this data needs to be prepared to be delivered to one or multiple destinations. Buffering is really important because, when you're going to send the data, many things can go wrong: DNS failures, network outages, or sometimes your remote endpoint is down for a couple of minutes or a couple of hours, and you don't want to lose your data.
So we need this buffering to make sure that everything you have in there will persist until you can send it. Of course, we provide mechanisms to say "I want to allow just a few gigabytes of data for buffering, but no more than that." And the output destination side is where we provide connectors for different backends: Kafka, Elasticsearch, OpenSearch, Prometheus remote write, and so on. There are plenty of them; in general we have around a hundred plugins between inputs, filters and outputs.
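The pipeline described above, input, filter, buffer, output, can be sketched in Fluent Bit's classic configuration mode. This is a minimal illustration, not from the talk; the paths, hosts and the 2G buffer limit are made-up example values:

```ini
[SERVICE]
    flush        1
    # enable filesystem buffering alongside memory
    storage.path /var/lib/fluent-bit/buffer

[INPUT]
    name         tail
    path         /var/log/app/*.log
    tag          app.*
    storage.type filesystem

[FILTER]
    # enrich every record with the host it came from
    name   record_modifier
    match  app.*
    record hostname ${HOSTNAME}

[OUTPUT]
    name  es
    match app.*
    host  elasticsearch.local
    port  9200
    # cap on-disk buffering for this output, per the talk's
    # "a few gigabytes but no more than that"
    storage.total_limit_size 2G
```

Here `storage.total_limit_size` is the knob that bounds how much data an output keeps buffered before the oldest chunks are discarded.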
So Fluent Bit is a very, very versatile agent; we call it a Swiss Army knife, because it can be deployed in a Kubernetes cluster, on a bare-metal machine, or on any kind of system that you need to take your information out of. Now, who's using Fluent Bit nowadays? I would say Fluent Bit is used by all the major cloud providers.
For example, if you go to Google Cloud, deploy a Kubernetes cluster and inspect what pods are running on your node, you will see that Fluent Bit is there. Same case for Microsoft. AWS has its own distribution of Fluent Bit called AWS for Fluent Bit, which comes with specific connectors for AWS and custom setups for AWS customers. And it's not just the cloud providers: we have many companies using it or integrating with it, even Splunk, New Relic, LogDNA, Datadog.
What is important here is that Fluent Bit is a real vendor-neutral solution. As a vendor-neutral solution, what we aim for is to allow you to deploy it, choose your vendor, your backend, and later switch to any backend you want. It gives you the freedom to take your data, control your data, send it where you want today, and the flexibility to change that destination tomorrow.
Now, about the Fluent Bit updates: we have a lot of news to share. Actually, I'm pretty excited about this release, and for the whole team, the community and the companies working together, this release has been tremendous work. Sometimes it's not just about the code base; there are many areas, and some of them we're going to discuss here in this webinar. One of the biggest pieces of news is the success of having just crossed one billion deployments.
If you go to our Docker Hub registry you can see it, and this is a huge accomplishment. I would say that just a few projects hit this number or run at this scale. It took a few years, and nowadays I would say Fluent Bit gets downloaded one to two million times per day, which is insane. Of course, that means more traction, more bugs, more enhancement requests, but also a bigger community, and that's what we aim for. Part of the development process of a project is sometimes not just writing code.
We want to make sure that things get done right. A few years ago we got many complaints that sometimes we pushed a feature that broke things, or about the lack of automation in the development workflow.
Now, all in all, if you're using containers, we make sure to ship a lightweight container image. By lightweight I mean it is not big in size and does not carry data that is not needed. For that, we chose Distroless some time ago. Distroless is a kind of container base image that contains only what is needed: there's no shell, there are no external binaries.
Now, when you contribute to Fluent Bit on GitHub and submit a PR, we have more than 30 or 40 checks for that PR, running linters, shell checks, action lints, and different compiler versions, making sure that every contribution from the community will not break anything. This goes from CentOS 7, which is a very old distribution, up to the latest ones, across all major distributions. We have also integrated a new security scanning system, so every time we push an image through the development workflow, a new container image is created and checked.
Now, about the website: it used to be built with Jekyll and Ruby, but it has now been migrated to Hugo, and the site is fully available as source code on GitHub. So if you are interested in contributing documentation or articles, now we have all the pieces and the framework in place for that.
Now let's get to the other fun part of this release, the one I'm sure you wanted to know about: what's coming out, what are the new features, what's new around the project. Okay, the first one: we have implemented a new configuration mechanism. Actually, it's not new; we refactored the whole configuration mechanism.
So now we support not just the classic mode of Fluent Bit for configuring the pipelines, but we also support YAML. You may be asking: why YAML at all? Well, for most integrations with services, or when you want to integrate with APIs and connect Fluent Bit to a platform, the classic mode is not so friendly: it has special indentation rules and it gets complex when you go to the cloud-native space, where everything is YAML. There, Fluent Bit was kind of complex to manage, and we wanted to change that.
So now you can separate things the way you think about them logically: something generated from some source with a tag must go to a specific output. Sometimes we have found users that have huge configurations with different logical pipelines, but all of them mixed together. We wanted to fix that, and now you can define multiple pipelines.
So when it's time to maintain your pipeline, it will be easier from any kind of perspective. Also, on the YAML side, we support includes. We have supported that in Fluent Bit for a long time, but we made sure that when you provide a YAML file to Fluent Bit, you can include other files, for the case where you segment your configuration or have more pipelines to include.
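As a sketch of what the YAML mode looks like, with a hypothetical included file; the plugin choices and values are illustrative, and the exact keys (for example `includes`) may differ between Fluent Bit versions:

```yaml
# main.yaml -- illustrative, not from the talk
includes:
    - extra-pipeline.yaml

service:
    flush: 1
    log_level: info

pipeline:
    inputs:
        - name: tail
          path: /var/log/app/*.log
          tag: app.*
    filters:
        - name: grep
          match: 'app.*'
          regex: log error
    outputs:
        - name: stdout
          match: 'app.*'
```

Each `pipeline` section groups its inputs, filters and outputs together, which is what makes the logical separation described above possible.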
And this is also about the expected performance. As I said some minutes ago, we always care about performance and a low memory footprint, without sacrificing any resource: if you're going to use something, optimize it. Now, when the project started to grow, it started as a single-threaded product, fully asynchronous and event-driven, which was all we wanted. But now you have events coming from network I/O, timers, the scheduler waking up coroutines (because we also run with coroutines), and you get this saturation of events.
All of that lands in the main event loop, and AWS has done an outstanding job analyzing how we can improve this. They have contributed the concept of priority queues to Fluent Bit, which is pretty much a way to prioritize which kinds of events reported by the kernel need to be processed first, second and third. So, for example, events that come from the scheduler, or cases where we need to flush a coroutine or dispatch a task, are high priority: those events will be processed first.
We also implemented a threading model in the output plugins. Everything about converting the payload from MessagePack, which is our internal representation, into, say, an external JSON that is expected by Elasticsearch or by Splunk, was being done in just one single thread. You can imagine that that adds a lot of latency and delays things in the pipeline, so we created threads and moved all those expensive tasks to different threads.
So most plugins, for example HTTP, stdout, file, Splunk, Elasticsearch, OpenSearch, all of them now run by default in separate threads. You can override that behavior: you can set workers to zero and you should be fine, but the defaults are optimal for the majority of use cases. Anybody who wants to dig deeper into performance, or wants to push something higher, can adjust the values without any problem.
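The worker count mentioned above is set per output plugin. A minimal sketch, with illustrative host values:

```ini
[OUTPUT]
    name    es
    match   *
    host    elasticsearch.local
    port    9200
    # run this output in two dedicated threads;
    # workers 0 falls back to the main event loop
    workers 2
```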
Now, let's talk about the input plugins, and we are going to prefix these as the log input plugins. As I said, we are in the logs and metrics space, and shortly we're going to jump into traces. So I'm going to describe a bit about the log input plugins and what we have done in that area. Tail is the main plugin, the one that allows you to follow files on the file system, and every six months or so,
we find that users have bigger challenges. Some of them say: "I have 50,000 files, but when Fluent Bit starts, it takes two to three minutes." We never thought we were going to hit those use cases, so we optimized the tail input plugin and reduced the time it takes to start processing 50,000 files.
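A minimal tail setup for following a large set of files might look like this (paths are illustrative); the `db` option keeps file offsets across restarts, which matters when tracking tens of thousands of files:

```ini
[INPUT]
    name             tail
    path             /var/log/containers/*.log
    tag              kube.*
    # persist read offsets so a restart does not re-read everything
    db               /var/lib/fluent-bit/tail.db
    # how often (seconds) to re-scan the path pattern for new files
    refresh_interval 10
```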
We had an issue, number 300 or thereabouts, which is from 2017, asking for a Kafka input plugin. Users wanted to have Fluent Bit behave as a Kafka subscriber: they were sending data to Kafka and they wanted to subscribe to it from Fluent Bit. So we just implemented it, and I'm saying it is experimental for now, because we are testing a new architecture for this plugin, and Fluent Bit 1.9 ships with an input plugin for Kafka.
It is working quite fine: you can subscribe to many topics and get all your data in. And here we found some interesting cases where users are sending data to Kafka using Fluent Bit, and on the other side they have another Fluent Bit consuming those records, doing filtering, modifying the information, and generating a new topic.
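A sketch of that consume-transform-produce pattern, assuming the `kafka` input and output plugin names and the `brokers`/`topics` options as documented; broker addresses and topic names are made up:

```ini
[INPUT]
    name    kafka
    brokers kafka-broker:9092
    topics  raw-logs

[FILTER]
    # keep only records whose "log" field matches "error"
    name  grep
    match *
    regex log error

[OUTPUT]
    name    kafka
    match   *
    brokers kafka-broker:9092
    topics  filtered-logs
```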
So yeah, they're also using Fluent Bit as a kind of stream processor for Kafka, and we're really happy to hear about this use case, because for us stream processing has always been something really interesting where we can add a ton of value. And this is not about replacing Kafka; actually, it's about how to add more value to it. We have thousands of users connecting to Kafka, and now this brings a new level of integration. There is also something for our Windows users, who are not a small group.
Actually, you might be surprised that many financial institutions run hundreds of thousands of services on Windows servers, and they face the same issue: how do I collect my information, my logging information, from my system? We used to have, and we still have, a plugin called winlog, which allows you to get the logs from the Windows Event Log system, but it was tied to the classic channels only. Now there is a new plugin called winevtlog that works with the newer event log channels as well.
Now, let's jump to the filter plugins. This is not the full list, and I'm just going to talk about one new filter that we have, because, at the time I'm recording this, the release is happening in just a few days, and we still have some more filters that we're going to add to this release.
The new filter is called Nightfall. Nightfall is a vendor-specific service that makes sure that, if your records contain any sensitive information like API keys or PII, it can do data redaction on them and make sure that you're not going to ship any sensitive data. This is a third-party service, and the filter is a contribution from Nightfall to Fluent Bit. So thanks, Nightfall, for contributing this.
From another angle: some of our users were migrating to OpenSearch. They had their own reasons, but for us, being a vendor-agnostic project, framework-agnostic and everything, we want to make sure that our users have the possibility to switch to a different service or a different project as a backend, and that they have the right implementation as a connector for it. So Fluent Bit now has a new OpenSearch connector, based on the old Elasticsearch connector.
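A minimal sketch of the new output; the host and index here are illustrative values:

```ini
[OUTPUT]
    name  opensearch
    match *
    host  opensearch.local
    port  9200
    index fluent-bit
```

Because it is based on the Elasticsearch connector, the configuration shape is deliberately familiar to anyone migrating from `es`.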
Other users, thousands of them, use Fluent Bit to send data to S3 buckets on Amazon S3, and we got this really interesting use case: the data being shipped is going to be consumed for analytics, and they needed to have it in Apache Arrow format. So a contribution from a company implemented Apache Arrow encoding for the S3 connector.
Now another interesting angle: Fluent Bit and metrics. We always get this question: why does Fluent Bit care about metrics, what do we think about metrics? Part of the story is that when we started Fluent Bit, years ago, for embedded Linux, one of the first plugins we wrote for it was about how to collect CPU metrics and that kind of thing. But at that time we handled all that information as structured logs, not as real metrics with a schema.
But since one year ago, we started implementing native support for metrics payloads, which allows Fluent Bit to connect to other ecosystems. Our vision is that Fluent Bit is like the core of this whole ecosystem for observability, and we provide all the tooling and all the connectors so that you are able to connect to different systems, different protocols, different frameworks, while actually staying vendor agnostic. That's our mantra: keep being vendor agnostic and talk to everybody.
Another angle: we talked about metrics and logs, but now we are also able to collect metrics from Windows in a native way. Right now, this experimental plugin is able to collect CPU metrics from the Windows system, and during the 1.9 release cycle we're going to add other kinds of metric samples, for example disk, memory, storage, file system and so on. This is ongoing work, and if you are interested in some special collector for Windows that is not there, let us know.
I see that the industry, for the monitoring or metrics space, runs on Prometheus; that's a fact. And when Fluent Bit started integrating with metrics, the first approach for us was to understand what the challenges are for the users.
In addition, and this is not new, it was already there, but we did a couple of enhancements to our Prometheus exporter, which is on the output side. On one side we can collect metrics locally, like with the plugins we have to collect node metrics and Windows metrics. But then, going to the output side, we can expose those metrics in Prometheus exporter format, so other services can scrape them; or, if you want, you can use another plugin of ours called Prometheus remote write, which allows you to take all these metrics and dispatch them to a remote endpoint.
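Putting the two halves together, collect node metrics locally, then expose them for scraping and push them via remote write. A sketch with illustrative hosts and ports:

```ini
[INPUT]
    name            node_exporter_metrics
    tag             node_metrics
    scrape_interval 2

[OUTPUT]
    # expose metrics for Prometheus to scrape
    name  prometheus_exporter
    match node_metrics
    host  0.0.0.0
    port  2021

[OUTPUT]
    # push the same metrics to a remote-write endpoint
    name  prometheus_remote_write
    match node_metrics
    host  prometheus.local
    port  9090
    uri   /api/v1/write
```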
So we're really happy to be working together with the Prometheus community, because this is not about separating things between logs, metrics or traces. If you go to any kind of production environment, you will find that most of the technologies are there, different projects; there's no one single tool for everything. But all of them have the same need, which is integration, and Fluent Bit aims to solve that.
We built plugins for Logstash and for Beats at one point, so everything is about providing the user the flexibility to solve the problem they have; it's not about replacing technology. And for us, when we talk about OpenTelemetry, we see this vision of a unified framework for telemetry, and it looks great. But now we have a responsibility: we have thousands of users, and a big part of them may jump into OpenTelemetry. So how do we extend our scope to support that journey for them, jumping into OpenTelemetry?
So in Fluent Bit 1.9 we are launching our first connectors for OpenTelemetry. One of them is the OpenTelemetry input which, as of today, only supports metrics over OTLP; during the release cycle of 1.9 we're going to add support for traces. On the output side, for now we support only metrics, but it's the same story as on the input side: we're going to be able to ship traces too. And you might be asking: does that mean Fluent Bit is a replacement for the OpenTelemetry Collector?
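A sketch of the new OTLP connectors, receiving metrics over OTLP/HTTP and forwarding them to a collector. The hosts, ports and URI are illustrative, and option names may differ between versions:

```ini
[INPUT]
    name   opentelemetry
    listen 0.0.0.0
    port   4318

[OUTPUT]
    name        opentelemetry
    match       *
    host        otel-collector.local
    port        4318
    metrics_uri /v1/metrics
```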
I would say there will be many features that overlap with each other, but our intention is that the user gets the flexibility to integrate all their systems.
From our standpoint, we want to make sure that we solve the problem today, but also that we open the door for integration with whatever the market wants to standardize on. This is really interesting, and we're really happy that, as Fluent Bit, we're going to start participating more in OpenTelemetry. From a company perspective we're already talking with partners, understanding what their needs are, because they are Fluent Bit users, but they are also jumping into OpenTelemetry and there are many gaps they want to fix for their own specific use cases.
So if you want to talk about OpenTelemetry: yeah, we are really happy to say that we are jumping into it. We are supporting and embracing OpenTelemetry to make sure that that project can succeed too; we are both part of the CNCF, and this is just an interesting journey in observability, so that's what you can expect from Fluent Bit. You might hear a lot about these kinds of implementations, POCs and things running in production already, but I would be really happy to hear from you about your use cases, or even concerns, or anything that we can add as value to the project.
A
Well,
that
was
a
the
quick
webinar,
the
quick
presentation
about
fluid
1.9.
I'm
sure
that
many
things
could
happen
in
a
few
days
when,
when
you
see
the
release
and
recording
this
few
days
before
we
are,
you
know,
put
in
the
live
stream
for
this.
But
if
you
have
any
questions,
please
feel
free
to
write
me.
An
email
I'll
be
happy
to
sing
jump
into
a
song
called,
also
a
reminder
that
we
have
our
fluent
bed.