From YouTube: Fluent Community Meeting Jan 27th 2022
A
Sweet, okay, well, hey everyone! I know we only have 28 minutes, so thanks for joining. I want to kick off with some quick announcements, and then we can jump into the first topic. Let me go ahead and share my screen here.
A
Okay, can folks see it? Yep, awesome. Okay, so we have a couple of topics today. We have a FluentCon Europe that is already scheduled and approved by the CNCF, and we're going to start opening up a bunch of CFPs. So if you have a cool topic you want to chat about, like a use case or something you've done,
A
it would be awesome to have you there as part of FluentCon Europe. It's side by side with KubeCon Europe, so for folks going to KubeCon Europe, it is just one of the days prior. And yeah, just for some context: we had a FluentCon Europe the year before, and we had some really great topics; we had a FluentCon North America last year. So this is just a great place to meet other Fluent developers, Fluent folks, people using it.
A
People who want to learn about it, share what you're doing, and that's always useful for the broader community. And then, yeah, of course, you know, we try to make sure that there's good swag and stuff there to entice you. So yeah, that's going to be May 16th, I believe, and I'll post some more information on this.
A
But
if
you
have
a
cool
topic
you
want
to,
if
you
have
your
even
if
you've
never
chatted
before
or
like
presented
before,
like
there's
a
great
place
to
to
kick
it
off.
But
okay,
let's
go
to
the
first
topic,
which
is
flooring,
log
to
metric,
filter,
work
and
participation.
A
You
could
give
a
quick,
intro
background,
sure
sure.
B
So, welcome. I'm very glad that we got invited here. Sadly my colleague Oliver is not able to participate today; he's sick, but not with corona. We are an IoT company from Germany called Smart Dings, and we are currently developing, together with a client, a sensor gateway that collects a lot of different sensors and nodes and accumulates that, based on the Raspberry Pi. We push these things to the cloud, into a Prometheus database. We use Fluent Bit for monitoring, and we would love to use Fluent Bit also for pushing our sensor metrics. For that we were already exchanging with others, and we have figured out that it would be wonderful if we had a metric filter, sorry, a log-to-metric filter, because this is missing for us. We would like, for example, to take in sensor metrics via an HTTP input plugin, or kind of the logs and metrics as logs, and then push them with the Prometheus output plugin to the cloud. But we are missing this log-to-metric filter, and therefore we would love to participate here in the community and help add this feature, because this is important for us, and I could imagine it might also be interesting for other use cases.
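The pipeline being described could be sketched as a Fluent Bit configuration along these lines. This is hypothetical: the `log_to_metrics` filter section and its parameters are assumptions (the filter does not exist at the time of this discussion), while the `http` input and `prometheus_remote_write` output are existing plugins.

```
# Sketch of the desired pipeline. The [FILTER] section is an assumption,
# not a shipped plugin; input and output names are real plugins.
[INPUT]
    name  http
    port  8888

[FILTER]
    name                log_to_metrics          # hypothetical filter name
    match               sensors.*
    metric_mode         gauge                   # hypothetical parameter
    metric_name         sensor_temperature
    metric_description  Last temperature reading per device

[OUTPUT]
    name   prometheus_remote_write
    match  *
    host   prometheus.example.com
    port   9090
```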
A
Well, yeah, thanks so much, and thanks for joining and chatting about it. I know there was some chat about log-to-metric work in the roadmap, and I think we have Eduardo here. Maybe we could run through some of the initial thoughts about it and what we could potentially do. And actually, Dennis, I know you've been doing some log-to-metric stuff yourself already, so maybe some cool workaround that exists today would be cool to chat about.
C
Yeah, so the thing I'm doing is I'm using a Lua filter, but I'm not using the Prometheus push output; I'm having the Lua filter send things over UDP to a local statsd. So it doesn't quite match what Florian wants, but I'd definitely be interested in getting rid of the Lua filter and using something built into Fluent Bit as well.
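The workaround Dennis describes, forwarding values to a local statsd over UDP, follows the pattern below. This is an illustrative Python version of that pattern, not the actual Lua filter or Fluent Bit's API; the metric names are examples.

```python
import socket

def statsd_line(name, value, metric_type="c"):
    """Format a metric in the plain statsd wire format, e.g. 'requests:1|c'."""
    return f"{name}:{value}|{metric_type}"

def send_metric(name, value, host="127.0.0.1", port=8125):
    """Fire-and-forget UDP send to a local statsd daemon (no reply expected)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(statsd_line(name, value).encode("utf-8"), (host, port))
    finally:
        sock.close()

# A record coming off the log pipeline could be turned into a counter like so:
record = {"status": 200, "path": "/healthz"}
send_metric(f"http_status.{record['status']}", 1)
```

Because statsd is UDP-based, the sender never blocks on the metrics backend, which is part of why this works as a stopgap inside a log pipeline.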
E
Oh, what about now? Yeah, hey, wrong mic, okay. So yeah, log-to-metric is something that we are really excited about. Actually, when we planned to do this some months ago and started to put our hands on it, we drafted what the configuration should look like, and we found that the configuration was blocking us from implementing this filter. We support native metrics now, and when you define a metric, you actually define a metric name, a description, potential labels, and information around that specific metric.
E
But what about if we get many different metric types inside the log? How do we handle that? We were actually blocked from a configuration schema perspective, because our configuration was pretty plain: it didn't support subsections or groups, it was just one first level, right? And the good news is that the first blocker was finally fixed today, because we just implemented group support in the configuration, and in YAML; we have a YAML version of the same config type. So yeah, it was merged today.
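As a rough illustration of what grouped configuration enables, a nested YAML section for a hypothetical log-to-metric filter might look like the following. Every key name here is an assumption for illustration, not the merged schema:

```yaml
# Hypothetical grouped YAML config: the flat classic format could not
# express the nested "metrics" subsections shown here.
pipeline:
  filters:
    - name: log_to_metrics            # hypothetical filter
      match: '*'
      metrics:                        # one group/subsection per metric
        - name: sensor_temperature
          type: gauge
          description: Last temperature reading
          labels: [device_id, room]
        - name: sensor_readings_total
          type: counter
          description: Total readings received
```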
E
So if you do a git pull from master, you will get a bunch of changes, and the interesting thing is that now we can start talking about what we want this config to look like from a usability perspective, right? So it would be great if maybe we can start there. I have an old document I can share with you guys.
E
Labels, blah blah blah, right. And inside that, I don't know if you're familiar with cmetrics, but cmetrics is a library that we created to handle metrics internally; it's kind of a copy-paste of the Prometheus Golang client. It's a way to have our own context. So the thing is that inside the filter, when this information comes in, we need to create a cmetrics context, and then we just ship that context over the pipeline. But it would be really great if we can come up with an agreement, or a document.
A
So should we create a discussion topic on it, and then we load up the resources? What's the best way to kind of...
B
From the base, or from the core as of today?
E
Meaning one hour ago; everything is there. Okay, perfect, yeah. So this is good timing, so: hey, let's start the discussion. Actually, ideally we want to ship this out with 1.9, and we're going to delay the release for two weeks; it was planned for this week, but we have many pending things. I think that log-to-metric is something that everybody wants, and we should ship it.
B
So our plan is, because for our customers it's important to see results quickly, to do a quick implementation, at least to see how it works.
E
Yeah, because I got some information from the community, from your specific use cases, I don't think that this will be a big implementation. I think it will be really easy to do, honestly.
A
Okay, I already did the EU announcement, so I added stuff there. And then Dennis and Florian, are you both okay to participate in the...
A
That's much appreciated. Yeah, that'll be great to see. Okay, I think that's it for that topic; we have some good actions, and it looks like we can get some work scheduled here shortly. Chanting, are you able to do a quick intro? And then I would love to hear about this. Yeah, sure.
F
So we have recently started using Fluent Bit for getting Fluentd and syslog logs and sending them to Loki and a central log server as well. So things are fluid, they come and go; I mean, they're in flux, not sure what will survive.
F
What we missed along the way was that we had to add a Lua script to, you know, get the tag itself, so we spent quite a while on that. So I feel that there should be a simpler way to, you know, basically observe the incoming packets, so to speak, and dump them if possible, so that I know what is missing, and then I can take it from there.
A
No, we had, if I remember right, there is a branch.
A
Dynamic, yeah. So it's actually a branch called stream processing dynamic queries, and so you can dynamically request, and even modify, data from an incoming stream. I think one piece that the stream processing had was that it's appended to the HTTP server, so you can call it remotely, but it doesn't have any auth behind it. So the hard part was: if we enable this by default, then anyone and everyone can just request your data by submitting a stream processing task and start tapping it.
A
I wonder if there's a good way we could leverage what this has. It's important, so maybe some background:
A
Fluent Bit has a SQL stream processor within it, and it supports this concept called snapshots, where, as it identifies a particular message or a particular field, it will grab that and allow you to essentially flush it to wherever it needs to go. So you can kind of grab context. You can say, hey, if I find an alert,
A
send me the next hundred messages as part of that. And so the snapshot feature is actually pretty lightweight, and the idea was to make it so snapshots could help you tap an input, but it doesn't do it in a follow mode or anything like that. So I don't know, maybe there's some way to leverage that, but...
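The snapshot idea being described would look roughly like the following in stream-processor SQL. This is a hedged approximation written from the description above: the statement shapes, option names, and the flush-on-condition clause may not match the actual grammar of the branch, and the stream and field names are examples only.

```sql
-- Approximate sketch, not verified against the stream processor's grammar:
-- keep a rolling window of recent records, then flush it when an alert
-- record is identified, so you get the surrounding context.
CREATE SNAPSHOT recent WITH (seconds = 10)
    AS SELECT * FROM STREAM:app_logs;

CREATE STREAM alert_context WITH (tag = 'alert.context')
    AS FLUSH SNAPSHOT:recent WHEN level = 'alert' FROM STREAM:app_logs;
```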
F
Another tool, Vector, has got something called vector tap.
F
I mean, obviously you would, so what happens is: you enable the API, just like with Fluent Bit, and on the same machine you run vector tap, and it starts spewing out what the incoming messages are. That would be adequate, because this is an operator query; we are not looking for an advanced workflow over this. It's just: hey, is this xyz thing coming in at all in the incoming messages? That's actually the thing we're trying to debug. That's all. I see.
F
I know eventually this will be kept on in production, I mean, kept enabled. It would be like: somebody complains, "I don't see the xyz log message in, say, Loki or somewhere else, wherever I've kept it." Now I have to go back, and on the machine which is generating the logs, I need to either investigate, or prove it by saying: hey look, the incoming log message doesn't have this, so there is no way this is going to work, or something like that.
F
But I think TCP might just work out, because if I start listening on 127.0.0.1 and the docker bridge (I'm assuming Docker, sorry), then even the container, from the inside, could do stuff on, you know, the usual 172.17.0.1, which is the typical docker bridge.
E
And one question. Okay, security is something that we need to look at: what is the future, but also what are the security concerns? That's something we can find and work around. Now, for example, if you have an input and two filters, what would be the ideal outcome of that: just getting the output of the input, or also after each filter?
F
It
would
be
awesome
if
we
could
do
everything
at
least
the
input,
because
after
the
filter
is
also
something
which
would
be
very,
very
useful
to
you
know,
emit
somewhere
temporarily.
I
could
emit
like,
after
one
filter
emitted,
to
file
one
after
filter
to
emitted
to
file
two.
I
I'm
not
just
thinking
of
the
top
of
my
head.
G
Yeah, because there are, like, the Lua filters that you can use to tap into each stage: you can get the output and you can just send it to stdout or whatever. And that's the way, because really, at the moment, the only way to do what you're doing is to have a match at each stage and then fire that down a socket, or have it coming out on stdout, and then look at the logs directly.
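The match-at-each-stage workaround described here can be done today with an extra stdout output alongside the real one; the tags below are illustrative:

```
# Debug tap: alongside the real output, duplicate matching records to
# stdout so you can see them as Fluent Bit sees them at output time.
[OUTPUT]
    name   loki
    match  app.*

[OUTPUT]
    name    stdout
    match   app.*
    format  json_lines
```

The obvious cost is that this taps everything matching the tag, all the time, rather than on demand the way vector tap or the proposed snapshot mechanism would.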
F
Yeah, I'll give you an exact example of what was missing. The tag field itself was not there in the input message, right? And it's a documented issue that, you know, you run it through a Lua script. So until we hit upon that solution, we were wondering why the tag field was not there in the output in Loki. So that's exactly where I'm coming from, specifically.
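The Lua workaround being referred to is a filter callback that copies the tag into the record body so it survives into the output. It is sketched here in Python for illustration; Fluent Bit's actual Lua filter uses a callback of the form `function cb(tag, timestamp, record)` that returns a code, timestamp, record triple, and the tag and record values below are made up.

```python
# Illustrative Python version of the Lua filter callback pattern: the
# pipeline hands the callback the tag alongside each record, and the
# callback writes it into the record so the output (e.g. Loki) can see it.
def add_tag(tag, timestamp, record):
    record["tag"] = tag
    # Return code 1 means "record was modified" in the Lua filter convention.
    return 1, timestamp, record

code, ts, rec = add_tag("syslog.auth", 1643295600, {"message": "session opened"})
```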
A
Cool, okay. Okay, maybe you could add it to that GitHub discussion I just opened, Chanting.
A
I see. Okay, I don't think there's an exact way we could go do it yet, but let's keep going in that discussion. Yeah, I could see a ton of use for that, and I felt that pain too; we don't know what something looks like after it gets ingested. So, okay.
A
We have seven minutes left. Rahul, are you there? Maybe you could do a quick intro and talk about this. Yeah.
F
But the problem is, we recently moved our aggregator from Fluentd to Fluent Bit, because when Amazon went from Elasticsearch to OpenSearch, Fluentd was having problems connecting with the AWS OpenSearch database. So we tested some scenarios with OpenSearch, and things were working well with OpenSearch and Fluent Bit. But the problem is, there is no way of connection pooling with Elasticsearch or OpenSearch, so every time we try to log our messages and everything, it is creating new TCP requests, from what we saw. And the other thing is that there is no way we could set a limit on the number of log messages it could send in one go. I believe with the Elasticsearch client library it is possible to control this at the client level.
F
In Fluentd, I think there is an option there to control the output limit, from what I have seen my team use, but there is nothing to control this in Fluent Bit.
F
So while using Fluentd with Elasticsearch we were not facing that problem of crossing a network threshold, but with Fluent Bit we sometimes do. The thing is, our log messages are about 20 MB as well, because it's quite a big log, and they are basically getting batched at the aggregator level. So it's very difficult for us to send them.
F
Yeah, no, there's another plugin specifically written for AWS; we are using that one. So we had a talk with some folks internal to the company, and they gave us that parameter; basically, they told us, and we used that parameter to control the output limit. It was there in the code but probably not documented or something. Oh, but is this in this plugin, or is it a different one? It's a different, it's true, it's a different plugin.
A
Yeah, one thing: we just finished building an OpenSearch-specific plugin, so that's there for Fluentd now as well, and then as part of 1.9 we're going to release an experimental version of the OpenSearch output as well. Okay, awesome, yeah.
F
So we are using this one, and they had some parameters to basically control the output and everything, and we took their support as well to help us set this thing up.
F
So with this we were able to, like, control limits and batch our requests and send properly, but it doesn't look possible with the shipped plugin. Let's say a request is about 15 MB, and AWS OpenSearch is only allowing you to send a request of 10 MB, because there are some network limits they have applied at the instance level.
F
I don't think so; that is supported in Fluentd, but...
F
Will this work at the aggregator level? Because we are using Fluent Bit as an aggregator; basically we are doing aggregation, like it's a central aggregator. So will filters work on...
A
Thank you, just another thing to try. I couldn't find the output network limit; maybe, if you're able to, could you add it here? Yeah.
F
Sure, I will ask my team member to share that tag they were using for the functionality, and I will update it over here. Okay.
F
Again, I think for forward as well. Can we try to implement that thing for the Elasticsearch plugin as well? Like, for the network, I saw that you could control the output, the MB, like the size, how much you want to send in one go, in a batch.
A
Okay, yeah, I think, yeah, if you add that, maybe we can start, I'd say, either a discussion or an issue, I'm not sure, on how to add controlling limits, throttling, or more customization on the output side. I think there have been some chats about how we could do buffers per output plugin, but it's, yeah, something we've just slightly talked about, not necessarily implemented anything yet.