From YouTube: Fluent Community Meeting Feb 24th, 2022
A
Okay, so hey everyone, welcome to another Fluent community meeting. Today we have a couple of topics, which is awesome; I think there are a couple of key things going on in the community. Just for some community updates: we have the FluentCon Europe co-located conference coming May 16th in Valencia. There are virtual passes available to join. We just closed off the CFP, so we'll be putting together the schedule and organizing that over the next few weeks, and then we'll publish it live. And then I think the other big thing is that there will be some hybrid components, since folks are actually traveling to Valencia; we should have a small in-person type of thing. I think there's some pretty good content.
A
I already saw some of the submissions, so I think it will be another great event. And then, of course, we have the Fluent community meeting in the interim. So, you know, if you ever want to share something, you don't have to wait till these FluentCons, nor do you even have to apply to these FluentCons; we can always share stuff in this community meeting.
A
In addition to that, I think, just some general notes: we discontinued the Google Groups. There were five or six ways that folks could come and ask for help on Fluentd and Fluent Bit, and things were getting a little out of hand. In the last meeting we had an action item to go ahead and try to remove some of those places where folks ask questions, so that we can narrow it down to one or two good places.
B
Which one do you prefer, the Slack or the discussions on GitHub? Which do you recommend?
A
Yeah, Slack is great if you want to just do something ad hoc, and Discussions is great if we need to come up with, like, a design, or there's some detail, or even some error where we want to say: hey, I don't know how this works, something's not being parsed correctly.
A
It's
a
it's
a
better
way
to
capture
it
long
term,
because
in
slack,
what
I'll
find
is
we'll
probably
get
a
repeat
of
a
couple
questions
and
then
we'll
be
like
okay,
hey
I've
heard
this
question:
here's
what
you
have
to
check
blah
blah
blah
blah,
but
in
the
discussions
we
can
kind
of
capture
it
and
say
hey.
This
is
this
is
how
you
solve
x,
y
z,
problem
and,
ideally,
when
we
capture
it
here,
we
then
come
up
with
like
a
design
solution
for
how
to
actually
fix
fix.
E
One of my teammates was just asking: was there a password set up? Because I didn't have a password when I joined, but she's saying that she's getting a little dialog box, yeah.
C
I don't know, but all right.
E
Thank you, I'll give that to her, and yeah, we'll see if that works. Okay.
C
Okay, so, because we only have 30 minutes, let's jump into this. I was hoping Eduardo was going to be here; I wanted to discuss basically the solution for this. We don't need to go through this issue; that's there for context, for folks who need to understand how the multiline filter works. But basically, in the multiline filter I implemented this option called buffered mode, which is needed for the multiline filter to truly work with most inputs.
C
It
has
to
buffer
events
between
flushes
and
then
concatenate
them
together,
and
then
it
uses
the
in
emitter
instance
same
as
rewrite
tag
to
re-emit
the
concatenated
records
back
into
the
pipeline
right.
So
the
problem
is
that
on
shutdown
you
could
lose
basically
per
stream
at
least
one
multi-line
record.
So
if
there's
a
multi-line
record,
that's
incomplete,
currently
being
completed
and
buffered,
it
will
not,
it
can
be
lost
on
shutdown.
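A minimal sketch of that buffered-mode idea, written as a standalone C program for illustration (this is not Fluent Bit's actual code; the names and the record-start rule here are made up):

```c
/* Sketch: lines are buffered per stream until the multiline record is
 * complete, then concatenated and re-emitted. */
#include <stdio.h>
#include <string.h>

#define BUF_MAX 1024

struct ml_stream {
    char buf[BUF_MAX];   /* pending (possibly incomplete) multiline record */
    size_t len;
};

/* stand-in for re-emitting a finished record through the emitter */
static void emit_record(const char *record)
{
    printf("emitted: %s\n", record);
}

/* append one raw line; flush the previous record when a new one starts */
static void ml_append(struct ml_stream *s, const char *line, int starts_new)
{
    if (starts_new && s->len > 0) {
        emit_record(s->buf);          /* previous record is complete */
        s->len = 0;
        s->buf[0] = '\0';
    }
    size_t n = strlen(line);
    if (s->len + n + 2 < BUF_MAX) {
        if (s->len > 0) s->buf[s->len++] = '\n';
        memcpy(s->buf + s->len, line, n + 1);
        s->len += n;
    }
}

int main(void)
{
    struct ml_stream s = {0};
    ml_append(&s, "Exception in thread \"main\"", 1);
    ml_append(&s, "  at com.example.Foo.bar()", 0);
    ml_append(&s, "next unrelated log line", 1);
    /* If the process stops here without a final flush, the buffered
     * "next unrelated log line" record is lost; that is the shutdown
     * problem described above. */
    return 0;
}
```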
C
I have in mind an idea for a solution. The in_tail multiline doesn't have this problem, because there's a pause callback on the tail input plugin, and when that is called, on shutdown or in any case, it will flush pending multiline records. We need to do the same thing in the filter, but the filter doesn't have a pause callback, because filters don't have pause callbacks.
C
So
my
idea
was
that
since
the
filter
uses
the
in
emitter
instance,
the
in
emitter
could
have
a
pause
callback
and
then,
when
I
create
the
emitter
I
just
need
to.
My
idea
was
that
I
could
pass
in
a
function
pointer
for
like
basically
a
pause,
callback
that
and
so
then
the
in
emitter
just
call
in
its
paused
callback
just
calls
my
pause
callback
that
I
passed
in
when
I
created
the
in
emitter
instance,
and
I
think,
if
I
do
this,
I
can
solve
it.
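A minimal sketch of that shape, assuming nothing about the real in_emitter internals (all names here are hypothetical):

```c
/* Sketch: the filter hands the emitter a function pointer when the
 * emitter is created; when the emitter is paused (e.g. on shutdown),
 * it invokes that pointer so the filter can flush pending records. */
#include <stdio.h>

typedef void (*pause_cb)(void *filter_ctx);

struct emitter {
    pause_cb on_pause;   /* callback supplied by the creating filter */
    void *filter_ctx;
};

/* hypothetical creation hook: the filter registers its callback here */
static void emitter_init(struct emitter *em, pause_cb cb, void *ctx)
{
    em->on_pause = cb;
    em->filter_ctx = ctx;
}

/* called by the engine when inputs are paused, e.g. during shutdown */
static void emitter_pause(struct emitter *em)
{
    if (em->on_pause) {
        em->on_pause(em->filter_ctx);
    }
}

/* the multiline filter's flush-pending-records routine */
static void ml_filter_on_pause(void *filter_ctx)
{
    (void)filter_ctx;
    printf("flushing pending multiline records before pause\n");
}

int main(void)
{
    struct emitter em;
    emitter_init(&em, ml_filter_on_pause, NULL);
    emitter_pause(&em);   /* simulated shutdown: pending records flushed */
    return 0;
}
```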
C
I wanted someone like Eduardo to say that he thought that was an okay or decent idea and that he couldn't come up with anything better. That's what I was hoping to get out of this meeting, but if he's not here, then maybe someone else can help, I don't know, yeah.
C
So I don't think it's, like, super urgent, actually, because I think in practice, in most cases, customers are not really going to run into it. But it is something we should fix.
C
I'd be happy to do the work, as soon as, you know, it's approved, to get it done by 1.9. That way we don't run into anybody complaining about it, right? The thing that worried me is that, if it happens: multiline is mainly used for stack traces, and if you lose the last log, that might be the stack trace that you need, right? So.
A
Yeah,
I'm
wondering
the
thing
that
I
wonder
is
like:
if
you
call
the
pause,
because
a
multi-line
filter
could
run
after
an
in
forward,
it
could
run
after
in
tcp,
and
you
do
that
pause
loop.
Would
it
when
you
send
back
the
pause?
Would
it
cause
any
issues
with
receiving
traffic
because
I
get
when
entail
the
pause
makes
sense,
but
do
we
hit
some
sort
of
interesting
behavior
when
we
try
to
pause
a
network
input
or
something
like
that.
C
Well, okay, so pausing the emitter and pausing the network input would be totally different, right, because these would be different inputs. And yeah, I don't see that being a problem; on shutdown, anyway, we already are pausing all of the inputs.
A
Okay, we can even add this to... okay, cool, let's see how we can get that, get some design stuff approved quickly, so it unblocks.
E
So yeah, that's me. I just want to give kind of an update to the community on the things that I've been working on.
E
A while back, we had this issue that was coming up under high load. Essentially, when there's a very large throughput, we noticed that a very large number of coroutines get created, and all those coroutines try to get completed at the same time; or not really at the same time, but rather their work would be interleaved, so you would have, you know, some amount of work and then a network call.
E
But
then,
by
the
time
the
network
call
finishes
it
takes
so
long
to
get
back
to
it,
because
all
other
codes
need
to
run
first,
just
because
the
way
that
we
were
initializing,
the
courage
means
pretty
much
as
soon
as
you
get
work
to
do.
It
they'll
create
like
a
routine
for
that
work,
and
so
that
that
wasn't
really
you
know
the
best
option
in
some
cases,
because
you
end
up
with
a
really
long
sort
of
wait
time
before
you
can
resume
back
to
the
career
team.
E
If you can take a look, it's the priority event loop. Essentially, instead of handling the coroutines in a first-in, first-out type of manner, it keeps a record of the task dispatches, the thing that creates a coroutine, and gives those a lower priority than the coroutines themselves. So essentially, what we're doing is trying to change
E
The
paradigm
from
you
know,
work
on
anything
that
you
started
to
rather
finish
the
things
that
you
started
and
then,
when
you
have
sort
of
free
effort,
then
initialize
new
work,
and
so
that
way
we
can
try
to
manage
getting
all
the
stuff
that
we
have.
You
know
already
started
done
and
then
once
we're
all
blocked
and
all
the
code
regimes
that
exist,
then
we
can
start
picking
up
new
tasks
because
they're
sort
of
in
this
dispatch
event
that's
of
a
lower
priority.
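A minimal sketch of that two-priority idea as a standalone C program (this is not the actual Fluent Bit event loop; it only illustrates the ordering):

```c
/* Sketch: resuming already-started work runs at high priority;
 * dispatch events that would start new work run at low priority, so
 * new tasks are only picked up once existing ones are done or blocked. */
#include <stdio.h>

enum prio { PRIO_RESUME = 0, PRIO_DISPATCH = 1, PRIO_LEVELS = 2 };

struct task {
    const char *name;
    struct task *next;
};

static struct task *queues[PRIO_LEVELS];

static void push(enum prio p, struct task *t)
{
    t->next = queues[p];
    queues[p] = t;
}

/* always drain the highest-priority queue first */
static struct task *pop_highest(void)
{
    for (int p = 0; p < PRIO_LEVELS; p++) {
        if (queues[p]) {
            struct task *t = queues[p];
            queues[p] = t->next;
            return t;
        }
    }
    return NULL;
}

int main(void)
{
    struct task resume_a  = { "resume coroutine A",  NULL };
    struct task dispatch_b = { "dispatch new task B", NULL };
    struct task resume_c  = { "resume coroutine C",  NULL };

    /* dispatches are queued at lower priority than resumptions */
    push(PRIO_DISPATCH, &dispatch_b);
    push(PRIO_RESUME, &resume_a);
    push(PRIO_RESUME, &resume_c);

    /* all pending resumptions run before any new task is started */
    for (struct task *t; (t = pop_highest()) != NULL; ) {
        printf("running: %s\n", t->name);
    }
    return 0;
}
```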
E
So that's what this PR accomplishes, and it passes all the tests currently, so I'm looking forward to working with Eduardo and the team and the community on getting this merged and everything.
A
Yeah, this looks nice. And yeah, I think we should just run it through a lot of CI, right? This is some sensitive stuff. But I guess we need this; we probably need it for 1.9 for sure, right? Otherwise it would be a pretty big change.
B
Yeah, this will be part of 1.9, per discussion with Eduardo a couple of weeks ago. We're trying to wrap it up and test it, make sure that we can get the rollout, and Matt is also preparing a...
A
I can't comment on any selection process, but I think the dates are probably in March, for when the sessions and agenda get set. I think it's actually on the FluentCon site, and then there should be...
B
Yeah, I'm not sure, but Valencia at least has a good soccer team.
A
Here it is, okay: March 21st is when... the 24th, yeah, and then the schedule announcement is March 23rd. Okay, so yeah, I think this will be good. You know, last year we had some pretty awesome topics. Everything gets recorded and put on YouTube too, so having this all recorded is, I think, great, right? You get folks who can see design decisions in the past, why things were created for XYZ. Awesome. Okay, yeah, let's go to the next one: ECS load test results, from anonymous.
D
Yeah, so this one is from me, sorry, I didn't sign it. I just want to post an update here: we have talked about the load test framework before, and now we have built it. One part of our load test is for our software testing improvements.
D
So
we
help
the
areas
customers
to
use
language
and
ecs
customers
is
one
of
our
main
customer
groups.
D
So
we
build
a
testing
framework
to
simulate
their
real
world
workflow
and
to
help
to
like
use
different
throughput
levels
of
input
and
then
run
tests
so
that
we
can
have
a
benchmark
result
on
it,
and
then
we
can
see
that
if
we
can
find
any
potential
customer
problems,
so
so
this,
so
this
part
is
for
the
testing
framework
and
we
have
lunches
and
we
haven't,
have
the
public
testing
results
shared
so,
but
we
will
have
it
in
our
next
release,
so
maybe
we
so
maybe
which
will
be
flimsy
next
release.
D
We
will.
We
will
run
the
this
testing
this
load
test
and
we
will
share
the
results.
Maybe
we
can
see
later
yeah
yeah
so
yeah.
So
this
is
our
current
work
and
maybe
in
the
future
we
are
looking
forward
to
with
flambe
upstream
for
some
low
test
framework
field.
Yeah
yeah.
A
Sure, real quick: this should all be public, yeah. This is all public, and it says Fluentd, but we also did some Fluent Bit as well. This was just trying to showcase: hey, how much throughput can we get with in_tail? How much can we get with in_forward as well? So just some really quick tests, trying to see if we can start to put it in as part of, like, the release process.
A
So
if
there's
a
giant
regression
in
performance
I'll
also
be
able
to
start
capturing
some
of
these
things,
I
think
it'll
be
good.
It
sounds
like
we
all
have
that
same
goal
of
like,
let's
make
sure
we
don't
get
performance,
regressions
and
also
get
max
performance,
so
we
probably
want
to
have
some
some
stuff
together
there,
but
we
run
it
for
about
five
minutes
just
because
some
of
these,
these
specific
instance
types
have
nvme
local
and
they're
a
little
they're
a
little
expensive,
so
yeah.
A
Here
it's
like,
I
think,
when
we
do
a
hundred
thousand,
it's
pretty
fine,
but
once
you
get
to
three
hundred
thousand
things
start
to
get
a
little
saturated,
and
this
is
with
one
kilobyte.
So
these
this
is
100
megabytes
per
second
and
300.
300
megabytes
per
second
cpu
usage
is
pretty
good
and
rss
usage.
Is
it's
not
too
bad
either?.
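(To make the arithmetic explicit: at a one-kilobyte record size, 100,000 records/s × 1 KB is roughly 100 MB/s of ingest, and 300,000 records/s × 1 KB is roughly 300 MB/s.)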
B
Yeah, this is good. I think we can look at the one you're doing from the Calyptia side, and we can test the tail one and compare the performance. I think we at AWS, from our testing, will share the results with our customers, and we are more than happy to contribute these pieces upstream.
A
Great, yeah, it'll be really good to have it, and then we can run it, you know, for releases; I think that's definitely key, so I'm all for it. Yeah, we're looking at even getting a third party to do some benchmarking for us too, just to make sure. Okay, so we'll have, like, three folks that can do benchmarks, and we'll compare all the notes.
A
Okay,
yeah,
that's
great
okay,
so
I
know
we
have
seven
minutes
left
fluently
interaction,
so
I
wanted
to
actually
share
this.
This
is
something
we
just
built.
We
released
it
as
apache
2.0
right
now.
It
does
have
a
dependency
on
our
cloud,
but
we're
trying
to
remove
that
here,
but
what
it
does
is
we
found
it
really
hard
to
check
configurations.
A
So
when
you
want
to
go
check
a
configuration,
you
have
to
run
it
in
a
container
or
you
know,
run
it
locally
and
that
that
works
well,
but
we
think
we
can
do
better
and
want
to
leverage
a
lot
of
the
tools
that
exist
out
there
for
many
languages
like
linting
etc.
So
we
added
we
created
this
fluent
linter
action
and
it
will
go
and
lint
your
fluid
bit
configuration
you
give
it
three
parameters,
your
api
key
for
ecliptic,
because
we
run
everything
in
a
container
today
and
then
we
do
a
config
location.
A
So
you
give
it
a
glob
pattern.
This
actually
follows
includes
as
well.
So
if
you
have
an
include
of
multiple
files,
we
can
go
and
combine
all
of
them
and
do
the
linting.
On
top
of
the
includes,
and
then
some
examples
of
this,
let
me
see
I
can
showcase
this.
A
And
it's
based
off
a
parser,
we
wrote
in
typescript,
that's
also
patchy
too.
So,
if
you
want
to
play
around
with
that
parser
more
than
welcome
to
yeah
here's
an
example
very
simple,
simple
one,
but
basically
I
made
a
commit
change
from
cpu
to
cp12.
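As an illustration of the kind of change being described, assuming a classic-mode Fluent Bit configuration file (the surrounding sections here are invented), the typo would look something like this:

```
[INPUT]
    # this previously read "Name cpu", a valid input plugin;
    # "cp12" is not a registered plugin, so the linter flags it inline
    Name  cp12

[OUTPUT]
    Name  stdout
    Match *
```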
A
We
have
the
linter
that
runs
and
it
will
tell
you
immediately.
Inline
hey
this
is
this
input
plug-in
is
actually
not.
This
is
this
is
wrong
and
it
will
go
check
parameters,
you'll,
check,
runtime
errors
and
it
will
append
it
where
in
the
configuration
things
are
going
wrong
but
yeah
something
we
want
to
keep
expanding.
A
We've
talked
a
little
bit
about
like
doing
test
harnesses
having
more
of
these
things,
so
you
can
put
in
ci
cd
if
you're
deploying
fluid
configuration
out-
and
I
wanted
to
make
sure
you
know-
if
that
the
community
is
interested
in
this
type
of
stuff,
that
hey
we
can
keep
building
it,
and
everything
we've
done
thus
far
is.
Is
apache
2.0
we're
working
on
removing
our
ecliptic
cloud
requirement
for
the
linter,
but
the
main
thing
is
fluent
bit:
has
these
things
called
config
maps
and
with
the
config
maps?
A
we can use it for language servers. So if you're using, like, VS Code or Sublime Text or others, you'll be able to have, you know, a language server for these configurations. Yeah, nice. It's improved my quality of life, so I'm happy about it.
A
Helpful, yeah. We've been talking a little bit about recommendations as well, so we can say things like: hey, your memory buffer limit is really low here, or hey, you don't have workers enabled, you might want to do something like that. And that might be something great that, you know, we can plug in and make it something the community can keep adding rules and advice on top of, and keep giving a lot of value.
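For illustration, such rules would target settings like the following in a classic-mode Fluent Bit config (Mem_Buf_Limit and Workers are real Fluent Bit options; the plugins and values here are just an example):

```
[INPUT]
    Name          tail
    Path          /var/log/app/*.log
    # a rule could flag this as very low for a high-volume source
    Mem_Buf_Limit 1MB

[OUTPUT]
    Name  forward
    Match *
    Host  logs.example.com
    # ...or point out that no Workers setting is enabled on this output
```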
A
Okay, a big one: v1.9. What's up?
A
So, in your GitHub repo you'll come in and say, hey, I want to add an action, and you'll say, hey, I want to set up a new workflow. If you search, you'll find it today: you can search "linter" and you'll find it right here, and then it runs as part of your GitHub. Right now it's just a GitHub Action.
B
Yeah,
I'm
thinking
of
customers.
How
do
they
use
it,
and
because
I'm
configuring
running
my
workload
in
the
production,
how
can
I
can
benefit
this?
One
is
useful
for
ci
cd
for
checking
a
robot.
How
can
customer
benefit?
For
example,
I
want
to
have
five
workers.
I
want
to
set
up
a
memory
buffer
limit.
How
do
I,
how
can
this
will
help
those
customers.
A
For
for
those,
so
it
doesn't
do
recommendations
today,
but
we
would
say:
hey
wherever
you're,
storing
your
philippine
configuration-
and
I
hope
it's
checked
into
source
control,
like
I
hope
folks,
are,
are
checking
their
fluid
configurations
into
source
control
in
your
source
control.
If
it's
github
add
this
linter
and
every
time
you
make
a
change
or
modify
we'll
run
it
through.
Basically
a
container
image
and
say:
okay
in
the
container
image,
did
everything
match
up?
Did
it
actually
start
up
and
run
did?
A
Okay, that's good, thank you. Okay, and we have this Calyptia CLI that we've been thinking we'll add the parser to as well, so then you could just run your config with the CLI and check: hey, does everything match? So yeah, that's all the stuff there that we've been working on. Okay, next up: the 1.9 release schedule, key features, roadmap recap, some more pinned links.
A
I think there is a log-to-metric filter, though I'm not sure if it will make 1.9, and a bunch of other kind of performance improvements, an nginx metric input... I'm trying to think of any other outputs that are there. Oh, a Windows exporter metrics input, so we can collect metrics from Windows in a Prometheus style. And I think those are the major ones. Oh, and there's an out_skywalking plugin.
A
So
if
you're,
using
apache
sky,
skywalker
or
skywalking
that
will
be
included
as
well
from
the
key
features
in
roadmap,
we
had
done
something
with
1.8.12.
I
I
really
liked
and
I'd.
F
be curious... yeah, I wanted to quote something like that, but I lost it; I lost where the thing
A
Was
we
have
done?
Maybe
it's
closed
now,
but
it
was
like
one
got
12
release.
F
Did we create something like a backlog, you know, kanban-style backlog management, and having a visible roadmap moving forward, maybe on a quarterly basis or something? Didn't we have that kind of thing? That's a roadmap; usually I can write my...
B
This
is
good.
I
think
this
no
conflicts
was
based
on
what
you
are
saying.
Okay,
I
think
he's
a
sorry.
I
don't
see
your
last
name:
okay,
okay.
Now
I
think
I
would
have
this.
One
invisibility
was
coming,
what
an
iraq
machine
and
the
other
one
is.
What
your
mind
should
give
people
a
roadmap
view.
That's
also
good,
I
think,
there's
no
conflict.
Once
we
have
the
roadmap,
we
can
track
the
delivery
for
each
items
on
the
list.
A
Yeah,
like
kind
of
like,
instead
of
putting
we
we
could
put
like
these,
are
quarters,
and
then
I
know,
pat,
I
think
pat
had
presented
a
a
proposal
before
wow
yeah
yeah,
but
I
don't
think
we
actually
ever
ever
ended
up.
I
think
we
should
so
I
just
created
one.
So
we
have
a
new
project
and
then
we
can.
We
can
take
some
of
these
like
1.9
track
release
issues.
Let
me
pin
this
issue
by
the
way,
so
folks
can.
A
see the, you know, things that we want to do over the course of this quarter, next quarter, and the quarter after. I can already think of a couple that we could add in this quarter, which is the benchmarking, adding metrics, OpenTelemetry integrations, the log-to-metrics filter; next quarter we talked a little bit about parsing, adding more parsers. Right now we've focused very heavily on Lua; in fact, 1.9 should have some improvements to Lua, but we could...
E
Because one of our associates really wants to do that, and it's kind of blocking them from using Fluent Bit at the moment. They want to send Prometheus metrics to CloudWatch, and apparently they don't have a way, or you have to go through, like, Firehose or Kinesis or whatever, yeah.
A
Okay, great, I think those are the major topics. Anything else on this one we want to cover?