From YouTube: Fluent Talks | 006 | Fluent Bit 1.9 Launch Party!
Description
Please join us for Fluent Talks! Our weekly webinar and office hours, on Fridays at 2PM Central. Streaming live on YouTube.
#fluentbit #observability
C: Hey everyone, thanks for joining another Fluent Talks. Today we've got a pretty exciting one: Fluent Bit 1.9. We're going to walk through what's in the new release, do a little bit of a launch celebration, and talk through some of the new features and what's coming next. If you have questions or comments, drop them in the YouTube chat and we'll try to get to them. So with that, Eduardo, you want to take it away? I'll interrupt a little bit.
A: So I assume that if you're here, it's because you are a Fluentd or Fluent Bit user, and as you might have noticed, this week we released Fluent Bit 1.9, which for us is a major version. If you're not familiar with Fluent Bit: Fluent Bit and Fluentd are from the same family. Both projects are under the CNCF, and Fluent Bit is a sub-project under the umbrella of Fluentd.

The kind of problem Fluent Bit solves is being able to collect or receive logs and metrics from different sources, and it allows you to route that information to any destination. This is just a quick intro: we can do logs and metrics; it's fully written in the C language, so it's made to be high performance while consuming very, very low resources on your systems. There are many users who, when they deploy, for example, a Kubernetes cluster, care a lot about the performance of their own applications. But if your log or metrics agent consumes a lot of resources, it can be a problem, because it can add more latency to the whole system. That's why we focus a lot on performance.

This is one of the cool graphics that shows how Fluent Bit works. Pretty much, we have a pipeline where we have data sources, filters to enrich your data, and buffering to make sure data does not get lost if the destination fails when sending the data out. Now, who uses Fluent Bit? Pretty much most cloud providers, and even companies that have their own agents, like Splunk, use Fluent Bit, as do LogDNA and Datadog. Now, from the community update perspective, Anurag, I think you can take over this really great milestone here.
C: Yeah, we got deployed over a billion times. You know, something that's always fun is when you're watching an application get downloaded and you see the download counts go up. With Fluentd we were all super excited when it crossed the couple-hundred-million threshold, and then Fluent Bit came along, and especially with containers, it's like two million Docker deploys per day. I think the one fun stat about this too is that this is just Docker Hub, and there are many other container registries out there that we don't account for in this number. So maybe it's like one billion plus plus, and this is really just from one source. A great milestone. And I mean, it's got to feel pretty cool, right, to write software that's been downloaded, you know, a billion times.
A: We have more than 200 contributors, almost hitting 300 in total. One of the greatest things here is that we always hear about the users' use cases when we go to KubeCon and different conferences: hey, what are you missing, what's next? That's where many features come in, like the Kubernetes filter; tail, which we didn't support at the beginning; also Lua; and now Prometheus for metrics. So that is really, really interesting.

Now, you get one project being downloaded like crazy, but you also get more bug reports, more feature requests, and more enhancements. This is really interesting, because I think that now, from our perspective, it's not about how we make sure somebody adopts this technology; it's about how we make it sustainable. I hope the people who deploy Fluentd and Fluent Bit can trust that this tool will be around for 10-15 years, and we have a truly vendor-neutral set of maintainers, so that if I'm not here, or Amazon just stops using it, the project will continue. That is the important thing. You will find in the market a bunch of servers still running RHEL 5 or CentOS 6, and they're still working. We want to provide the same experience.
C: Yeah, I think anyone who's been using the Fluent projects typically downloads the Treasure Data package, and, you know, they'll go out and deploy. We really took a lot of time to say: this thing is getting used a lot, it needs a lot of maturity built in. Thankfully, there was a lot of open source tooling, and open source resources that the CNCF granted us, that we went and used.

Any open source project will go through a maturity curve. As you're kind of drowning in all the usage, you've got to make sure you can scale it, and I think CI/CD is such a good way to scale. Actually, I'm looking forward to more stuff within the CI/CD workflow. It's absolutely something where I could see us having even more coming through. And I think one fun thing too is that more and more folks are running CI/CD for Fluent Bit and Fluentd. So how do we make sure those folks are also successful, or can take what we're doing in our smoke tests to make sure they'll also be successful?
A: Yeah, as a maintainer, in the past we had a bunch of problems with the development workflow of Fluent Bit — with reviewing PRs and having a full CI. We struggled a lot, because CI takes time, and sometimes we needed to get a fix in place but had to wait an hour. Also, we'd make sure we tested everything internally, and then we'd find out that, hey, this is not working on CentOS 7, because it's breaking the compilation process. That happened many times after the releases were out. So we asked: can we make CentOS 7 build the source code on every CI run, on every PR? That has saved us a ton of time, because many times developers and contributors would write something — a common example was probably CentOS 7, where there's no problem elsewhere, but the compiler needs special flags for the C standard.

So these kinds of things are saving us a lot of time, and yeah, it was not easy. I think that has been months and months of work, and now we're pretty happy to say that when we cut a Fluent Bit release, it is tested for performance and portability on Windows, macOS, and Linux. So there's more trust in general, I would say. As a developer, I feel that I can try to break things, but the CI will trap them.
A: Yeah, we've got nightly builds. We've got more coverage also on the security side by using fuzzing — we use fuzzing technology, Google's OSS-Fuzz, to fuzz Fluent Bit. Fuzzing is a technique in which you take any kind of function or entry point and send it data, including data that is not valid. Actually, starting last week we began putting more fuzzers in place, and now we're seeing internal reports of more problems, for example issues associated with memory allocation. What would happen if the memory allocator fails? It's unlikely to happen, but if it fails in the middle of a function, is it returning properly? Is it crashing? Is it generating memory leaks? That stuff is coming out also.
C: I think that's super important, especially since we're writing this project and we're saying: okay, we're going to use the most high-performance, lowest-resource-consuming language, C — about as low-level as you can get. These are the places where you ask, okay, hey, what are the trade-offs? This is one of the trade-offs, and the way we mitigate it is to do fuzzing.
A: Actually, we are following what other projects like Kubernetes are doing: we are using Hugo as a framework, and now the new website is available on GitHub. So if you want to contribute guides or anything, it's just purely Markdown; we would appreciate your contributions on that. I know this work takes some time, because you go from a design in Figma, translate that to HTML, and then translate that to a template.
C: Yeah, as long as more folks can learn about what's going on, how to configure it, and how to learn from it — that's always a big plus. Actually, there are some fun things we're thinking we can add here too, in terms of education and content.
A: So, let's jump into the core of 1.9. Okay — I will have to adjust the slide, I have to do a minor adjustment here. One of the things is, from a usability perspective: in Kubernetes, all projects use YAML, infrastructure uses YAML, and most of the tools are built around YAML and JSON, but Fluent Bit did not support YAML, actually for many historical reasons. We hit a point where we said, you know, if we implement the same configuration concepts that we already support, but we offer a YAML layer, I think we're going to make things easier for everybody. But we said: let's take the approach of not deprecating the old configuration mechanism when we add the new one. So Fluent Bit 1.9 supports YAML, but it also supports the classic mode, so we don't add that kind of breaking change. We created a new configuration layer that has two backends, one for classic mode and one for YAML, and both use a new API that generates all the structured config contexts.

I don't know if it's a good idea to do this live, but I'm going to go to the slide and try to change the code here... oh, this is an image, I cannot change it. But anyway, the pipeline concept here is just a logical pipeline. Sometimes, when you use Fluent Bit in production and you start adding filters and a bunch of stuff, your configuration starts to grow, and it's really hard to keep it all in mind. So we said: hey, let's implement the concept of a pipeline in the YAML file, where we say we have inputs, filters, and outputs. It's a logical grouping, so you can have a better design for the config, for the reader. In terms of inputs, for example, here we have tail, and this is an array — the typo on the slide here is that we're missing the dash — and the same goes for filters and outputs. So you can have many tails, or sorry, many input plugins, many filters, or multiple pipelines.
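To make the pipeline concept concrete, here is a minimal sketch of what such a YAML configuration looks like; the paths and plugin parameters are illustrative, not from the talk:

```yaml
service:
  flush: 1
  log_level: info

pipeline:
  inputs:
    - name: tail              # each plugin is an array item, hence the dash
      path: /var/log/app/*.log
      tag: app.*
  filters:
    - name: grep
      match: 'app.*'
      regex: log error
  outputs:
    - name: stdout
      match: '*'
```

The equivalent classic-mode `[INPUT]`/`[FILTER]`/`[OUTPUT]` sections keep working unchanged, since 1.9 does not deprecate the classic format.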
A: Okay, this is performance, and this is really interesting. This work was done by AWS. One of the things is that Fluent Bit, historically, is an asynchronous service in a single thread — it used to be like that — so you have one main event loop where you have multiple events coming in and coming out. You subscribe for notifications: you ask the kernel to tell you when a socket is ready, when it's not ready, and so on. Without a priority queue, some events were losing priority. For example, you want scheduler events to have more priority than task initialization, and not having an ordering for that added a lot of latency and a lot of congestion in the system. With this work, the event loop now has a priority queue.

This is nothing you have to enable; it's just there in Fluent Bit 1.9. If you run Fluent Bit at very high scale, you might notice some performance improvements and lower resource usage when processing those events. Also, as I said, in the Fluent Bit 1.7 and 1.8 cycles we implemented threading support for the output plugins, but this had to be enabled on demand by the user. You had to go to the configuration and set workers to 1, workers to 2, and we found that most users weren't doing that manually. So we said: hey, why don't we enable default threads for most of the output plugins? So now you get threading by default for Splunk, Elasticsearch, OpenSearch, HTTP.
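As a sketch, explicitly setting workers on an output looks like this in the YAML format; the values are illustrative, and since 1.9 enables worker threads by default for most outputs, this is mainly useful to override the default:

```yaml
pipeline:
  outputs:
    - name: es               # Elasticsearch output
      match: '*'
      host: 127.0.0.1
      port: 9200
      workers: 2             # number of dedicated output threads
```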
A
A
A
Okay,
lock,
input
plugins,
we
used
to
say
input
plugins,
but
now
we
have
to
say
looks
because
we
also
have
matrix
plugin.
Now,
okay,
one
of
the
another
performance
improvement,
is
in
the
type
plugin.
Where
now
we
are,
we
improve
the
tail
plugin
in
order
to
be
able
to
have
better
performance
on
start,
because
if
we
had
like
you
have
many
use
cases
like
50
000
files
on
this
and
you're
going
to
process
all
of
them,
yeah
flm
video
has
taken
so
long
to
name
this
buddha
process.
A
This
start
process
now
it
will
take
a
few
milliseconds
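A minimal tail input, as discussed, might look like this; the path and database file are illustrative:

```yaml
pipeline:
  inputs:
    - name: tail
      path: /var/log/containers/*.log
      db: /var/lib/fluent-bit/tail.db   # keeps file offsets across restarts
      read_from_head: true
  outputs:
    - name: stdout
      match: '*'
```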
A: And we have a new Kafka source, an input plugin, which behaves as a subscriber: you can subscribe to one or multiple topics, extract messages, and do all your magic with filters. And if you want, you can use the Kafka output plugin and re-ingest those processed messages back into another topic. That's one of the use cases.
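A rough sketch of the subscribe, process, and re-ingest pattern described above; the broker address and topic names are assumptions for illustration:

```yaml
pipeline:
  inputs:
    - name: kafka                 # Kafka input: consumes messages as a subscriber
      brokers: kafka-broker:9092
      topics: raw-logs
  filters:
    - name: grep                  # "do all your magic with filters"
      match: '*'
      regex: level error
  outputs:
    - name: kafka                 # Kafka output: re-ingest processed messages
      match: '*'
      brokers: kafka-broker:9092
      topics: processed-logs
```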
A: On Windows, we have a classic plugin called winlog, which is used to consume messages from the classic channels. Now, if you have non-classic channels, for security and other stuff, we have a new plugin called winevtlog, which allows you to read from these non-classic channels. All of this is available in 1.9.
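A minimal sketch of reading non-classic channels with the new plugin; the channel names are examples:

```yaml
pipeline:
  inputs:
    - name: winevtlog
      channels: Security,Application
      interval_sec: 1
  outputs:
    - name: stdout
      match: '*'
```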
A: Now for filters: we have a new filter that was implemented by a company called Nightfall. Nightfall is a vendor that provides a way to protect your logs — you know, PII and all that stuff — so it helps you scan your logs and redact any sensitive data. They provided this filter, and it's available from Fluent Bit 1.9.
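As a rough sketch, wiring the redaction filter into a pipeline could look like the following; the property names here are assumptions based on the plugin's documentation, and the key values are placeholders:

```yaml
pipeline:
  inputs:
    - name: tail
      path: /var/log/app/*.log
      tag: app.*
  filters:
    - name: nightfall
      match: 'app.*'
      nightfall_api_key: <YOUR_API_KEY>   # placeholder, assumed property name
      policy_id: <YOUR_POLICY_ID>         # placeholder, assumed property name
  outputs:
    - name: stdout
      match: '*'
```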
A: And on the output side, we have been working closely with the OpenSearch team, in two areas: one is Fluentd, and the other is Fluent Bit. Fluentd has a new native connector for OpenSearch, and we also have the same connector in Fluent Bit. We started with the old Elasticsearch connector as a base, but now we are going to start implementing the OpenSearch-specific features in that specific plugin. Our suggestion is: if you're using Elastic, please keep using the Elastic plugin. If you are moving to OpenSearch, please use the OpenSearch plugin instead of the Elastic one, because at some point the functionality might be different, or you will find different config options.
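A minimal sketch of the new OpenSearch output; the host, index, and credentials are placeholders:

```yaml
pipeline:
  outputs:
    - name: opensearch
      match: '*'
      host: my-opensearch.example.com   # placeholder
      port: 9200
      index: fluent-bit
      tls: on
      http_user: admin                  # placeholder credentials
      http_passwd: changeme
```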
A: S3 is a very, very heavily used output plugin — you know, most users like to store their data in S3 buckets — but we also got this contribution that said: hey, I just don't want to store my data in JSON; I need to have it in Apache Arrow, because I'm doing some analytics stuff. So now the S3 plugin supports encoding to Apache Arrow. This is a very common use case, so we're happy that we got this contribution from a company called ClearCode, from Japan.
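A hedged sketch of the S3 output with Arrow encoding; note that, as I understand it, `compression: arrow` is only available when Fluent Bit was built with Apache Arrow support enabled, and the bucket and region values are placeholders:

```yaml
pipeline:
  outputs:
    - name: s3
      match: '*'
      bucket: my-log-bucket        # placeholder bucket name
      region: us-east-1
      compression: arrow           # requires a build with Arrow support enabled
      total_file_size: 50M
```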
A: We have a bunch of users deploying thousands of instances on ARM64. But there's also another interesting angle, which is — I don't know what the right word is, but it would be something like multi-framework, or multiple ecosystems. As you know, if you go around and look at what the standard for metrics in the market is, you will find that everything is instrumented with Prometheus. But it's not all unified in one way. For example, if you think about databases: in any environment, it's not that you just have MySQL; you might have MySQL, Oracle, PostgreSQL, Redis as a cache — many kinds of implementations for different use cases. And we are seeing the same thing in the observability space, with logs, with metrics, with traces. Now, where are we going as a project? We are going, you know, to any destination or backend, and that's part of the mission. That's the focus now on the metrics side, and as part of the metrics implementation in Fluent Bit, right now we're shipping:
A new nginx metrics collector. You can point Fluent Bit at your own nginx web server instance and scrape the metrics — nginx exposes the metrics in JSON — so we can go there, fetch that JSON, convert it to our own metrics payload, and ship it out through the Prometheus exporter, Prometheus remote write, OpenTelemetry metrics, or anything. This new plugin supports nginx open source, but also nginx Plus, which is the enterprise edition.
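A sketch of scraping an nginx instance and exposing the result for Prometheus to pull; the addresses and status URL are illustrative:

```yaml
pipeline:
  inputs:
    - name: nginx_metrics
      host: 127.0.0.1          # the nginx instance to scrape
      port: 80
      status_url: /status
      nginx_plus: off          # set to on for the enterprise edition API
  outputs:
    - name: prometheus_exporter
      match: '*'
      host: 0.0.0.0
      port: 2021
```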
And we are also shipping another plugin for Windows, called the windows metrics exporter, which is experimental right now and collects CPU samples from Windows. This is based on the Prometheus windows_exporter; it pretty much tries to replicate the same functionality for our Fluent Bit users, so our users don't need to have two agents in place for the same functionality. I said this is experimental because we're just collecting CPU; we still have to add file system, disk, and network, among others.
A: Okay, and now we jump into the Prometheus world. Prometheus, for us, has many angles: input and output. On the input side, we are launching a new Prometheus scraper, meaning that you can point Fluent Bit at any application that is exporting metrics in Prometheus format, scrape those metrics, and process them through the pipeline as a metrics payload. And of course, on the output side, we have enhanced our Prometheus exporter plugin. Any metric that we get inside Fluent Bit — and I'm not talking only about Fluent Bit's own metrics, any metric that we get — we can expose in different ways. One of them is through the Prometheus exporter, meaning that we can wait for a client or an agent to scrape those metrics; or we can also push the metrics out through Prometheus remote write, which is the Prometheus protocol for network transfer.
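Putting both angles together, here is a sketch of scraping an application's Prometheus endpoint and forwarding it via remote write; the endpoints are placeholders:

```yaml
pipeline:
  inputs:
    - name: prometheus_scrape
      host: my-app.example.com       # app exposing a /metrics endpoint
      port: 9100
      metrics_path: /metrics
      scrape_interval: 10s
  outputs:
    - name: prometheus_remote_write
      match: '*'
      host: prometheus.example.com   # remote-write-compatible backend
      port: 9090
      uri: /api/v1/write
```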
A: Now, for the OpenTelemetry world, which is also part of the CNCF ecosystem: we've started experimenting with OpenTelemetry metrics. I know this is not the biggest use case, but we wanted to get familiar first with the implementation and with the spec, and we are now shipping an OpenTelemetry metrics output plugin and an input plugin to receive OpenTelemetry metrics.
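A sketch of forwarding scraped metrics to an OpenTelemetry collector; the collector address and URI are assumptions for illustration:

```yaml
pipeline:
  inputs:
    - name: prometheus_scrape        # any metrics source works here
      host: 127.0.0.1
      port: 9100
      metrics_path: /metrics
  outputs:
    - name: opentelemetry
      match: '*'
      host: otel-collector.example.com   # placeholder collector endpoint
      port: 4318
      metrics_uri: /v1/metrics
```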
A: Now, the next step within the Fluent Bit 1.9 release and development cycle is that we're going to start implementing support for traces, at least raw traces. This is a very common request that we get from the community, and our vision, as I said, is to be able to connect all the worlds, all the implementations, all the protocols possible, in the Fluent way. And so, yeah, we're launching these two plugins and they're ready to go.

Well, I think that's mostly it for the presentation. Just keep in mind that Fluent Bit aims to be the Swiss Army knife for all metrics and logs processing, but shortly we will also ship something for traces.
A: I know that we have some people watching this live, but feel free to send your questions on the public Fluentd Slack, on the fluent-bit channel. I will be happy to answer anything there. And remember that every two weeks we have the Fluent Bit community meeting. I'm going to click here: if you go to the Fluent Bit website — our new website, which is good — and you go to the community link, you will find all the information about the Fluent Bit community meeting. It says monthly — that's something we need to fix, it's now every two weeks. Feel free to join the meeting. If you want to raise a topic, anything, please do it. We have an agenda, so you can just add your topic for discussion and say: hey, you know, I'm struggling with this, or we would like to see this kind of feature — this is our use case and we're missing A, B, and C. Okay, so that will be it for now. I appreciate your time watching this.