From YouTube: Fluent Community Meeting 2023 01 26
Output filters and related discussion
I believe this was a carry-on from last time. I'm going to go ahead and click this.
A
Okay. So, Ryan, I think this is some of the pieces you've been talking about: how you want some more advanced configurations with message routing, to match things like, for example, what's going on with, say, Fluentd secondary and advanced routing. We would love to hear a little bit more on some of the stuff you're thinking of, and then I'd be happy to share some thoughts we had around this recently.
B
Sure, yeah. So this probably has more to do with trying to use, you know, Fluent Bit in an aggregation-type role, where it has to do a lot more processing than it usually does when we push it down to, you know, more of the initial collector-type role, and it needs more advanced features.
B
You know, as a result, Fluentd, having long lived in that type of role, has much more advanced concepts around message routing. So beyond just the tagging features, you can essentially, I'll say, short-circuit the message and route it off to an entirely different, almost separate, pipeline of things via a labeling system.
B
You can take, you know, only the problematic messages and route those out to a label, which again is not something that we really see in any of the output plugins in Fluent Bit. So having a foundational piece for some of that advanced routing is obviously a benefit in terms of the pipeline, as messages are rolling through in the first place, but it also lends itself well to what we want to do for more advanced error handling of messages.
A
Yeah, and there are two pieces here that I think are pertinent to how we can solve some of this more advanced message routing. One is the difference between Fluentd and Fluent Bit. Generally, when Fluentd sends data, it takes that same message and actually pushes it through the entire pipeline. So if you want multi-output, or you need to send it to multiple locations, you actually have to copy that message in order to route it to multiple locations.
A
Now, Fluent Bit tries to solve some of the performance burden on that side by using pointers. So if I want to send output to 50 different output plugins, it's not like I take that message and copy it 50 times; we just use a pointer reference to the message, and that's how it gets sent. Now, what we've been thinking of on the message routing side is a couple of things. What we try to solve when we're doing some of the routing is things like error handling.
A
Or a secondary location. So the other roadmap item, and I'll be happy to show the roadmap here as well, is one we've been thinking about as a secondary, borrowing from what Fluentd has with its secondary output, over here in Fluent Bit. And then the other one, which I just put here (let me just put this in a comment here) is output filters. Output filters are a bit different from what I'd say is a filter in the regular pipeline sense.
A
The idea is that if I have a log that is going to both, let's say, OpenSearch and the filesystem, and I want to say: you know what, for OpenSearch I don't want to include any message that contains X, Y, or Z. The way to do that today is you either have to write a Lua script, or you have to write a rewrite_tag rule and then have that same stream of data re-ingested into the pipeline.
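As a rough illustration of the rewrite_tag workaround described above, a classic-format Fluent Bit configuration might look something like this (the tag names and the regex are made up for the example):

```ini
# Everything goes to the file output under the original tag.
[OUTPUT]
    Name    file
    Match   app.log
    Path    /var/log/archive

# Re-tag a copy of every record so OpenSearch gets its own stream;
# "true" (the Keep flag) leaves the original record in place.
[FILTER]
    Name    rewrite_tag
    Match   app.log
    Rule    $log .* to_opensearch true

# Drop the unwanted messages only from the re-tagged stream.
[FILTER]
    Name    grep
    Match   to_opensearch
    Exclude log (X|Y|Z)

[OUTPUT]
    Name    opensearch
    Match   to_opensearch
```

This works, but it re-ingests the re-tagged records through the pipeline, which is exactly the inelegance the output-filters idea is meant to remove.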
A
In order to achieve that, it's not the most elegant thing. So the idea with output filters is: we can use that same single pipe, or single stream of data, which has pointers to both output plugins, and then from one of the output plugins, what we'll do is say: as we are pointing at that data and taking the pointer reference, we can actually take out the data that we don't want.
A
So in the OpenSearch plugin, we can say: if the data matches some condition (via Lua, or, you know, we will have to figure out the mechanism; we could have it inline in code), filter out those messages for that output plugin, while still servicing the other output plugin with the full data stream. There are some underlying pieces here too that we'll have to work on, such as backpressure and how multiple outputs affect backpressure.
B
Yeah, I think that would cover most cases. And again, for other folks who may be more used to that more advanced type of message routing, I think you hit the nail on the head, and that's calling them pipelines, right. In some other instances you might have data sets where you're not looking to weigh in only on how you're filtering for outputs; rather, for the entire data set you're trying to say: I need to handle it in a different way.
A
Okay, that's really good to hear. Awesome, awesome! Okay, let's see: any other notes or topics on this, on the output filters?
A
Okay, let's go on to the next two. So I actually believe Pat is on a train over to a meetup right now, so I think he just added these. Let's see: news, the release server in use. Yeah, this is important. We had to switch over from what we use right now, which is Equinix Metal, for all of our package hosting, to a new one. So we've actually gone cloud and put everything into repositories, in case folks are having any issues.
A
Sigstore signing: so now we're part of this OpenSSF landscape, as one of the projects that support sigstore. And also within this, we actually just wrote a blog on how this all works with cosign and with OpenSSF, and then you can also use this verified signing. We have some documentation about it and how you can use cosign to do that verification. All pretty cool stuff; cheers to the folks who worked on that.
A
Okay, the big one is the roadmap items for this year, so we'll go ahead and open up my doc; I need to convert this into some issues. None of it should be too surprising here. By the way, I've just paused my screen share, so let me just pull it up real quick.
A
Excellent. So these are some of the areas that we've been working on, or at least have been planning on working on.
A
This is probably more of, I would say, a six-to-eight-month 2023 roadmap; there's still quite a bunch of room here to add more. One of the big things that we've been trying to make sure works really well is our ecosystem support.
A
So, for example, in the Prometheus ecosystem there are some more conformance tests going on, so we want to be able to have that full compatibility and capability, whether that's remote write or how we do the Prometheus exporter, just making sure we behave as really good citizens there. Similarly with OpenTelemetry: there's no conformance suite today, but at least we want to be very conformant with the metrics and traces work that's going on there, as well as the OTel logs with metadata.
A
So that's stuff that we're actually working on right now, trying to make sure that that support is all buttoned up. And a big shout-out to the community, who've been really helpful in hammering out bugs and finding, you know, "hey, this works, but this doesn't"; it's been really great to see all the activity around OpenTelemetry. And the other one, which is new: we just actually put up a PR for what we call (the plan is for it to be called) in_elasticsearch.
A
So it reads in from any client that sends Elasticsearch bulk requests; it works against Metricbeat and Filebeat, as well as other clients. And this was more of a way for folks who are looking to do some migrations, especially as we have stuff that goes out to OpenSearch: if folks want to migrate some stuff over from Elasticsearch to OpenSearch.
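The plugin was still in a PR at the time of this meeting, so the exact option names below are assumptions, but a migration setup of the kind described might be wired up roughly like this:

```ini
# Accept Elasticsearch bulk requests from Beats-style clients on a
# local port (plugin and option names illustrative, per the PR).
[INPUT]
    Name    elasticsearch
    Listen  0.0.0.0
    Port    9200
    Tag     from_beats

# Forward the migrated stream to OpenSearch.
[OUTPUT]
    Name    opensearch
    Match   from_beats
    Host    opensearch.example.internal
    Port    9200
    Index   migrated-logs
```

The point being that existing Beats shippers can keep sending bulk requests unchanged while the backend moves from Elasticsearch to OpenSearch.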
A
This just provides another route by which that can work. So we will ship the v1 here probably in the next version or two, but if you're really interested, there are some nightly builds from the PRs out there. Metrics: this is still very, very much a growing field, and we want to, you know, play really well and be a good citizen here. We just added a lot of Windows metrics, replicating the windows_exporter, so you can do component selection and scoping; we use WMI now.
A
So if you don't want to capture a certain device, or something of the sort, we can start to filter those out. Same with the Linux node exporter: we're currently working on filesystem support and systemd metrics, which some in the community have been asking for, as well as the ability to scope in and out, and I actually think the PR for that went live last night or earlier this morning. So you can scope which collections you want to collect.
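A sketch of what that scoping could look like with the node_exporter_metrics input; the global Scrape_Interval option exists, but the per-collector option names here are illustrative, since this work was still landing at the time:

```ini
# Scrape host metrics: CPU and most collectors every 10 seconds,
# disk stats only every 10 minutes (per-collector interval option
# name is an assumption).
[INPUT]
    Name                                 node_exporter_metrics
    Tag                                  node_metrics
    Scrape_Interval                      10
    Collector.Diskstats.Scrape_Interval  600

# Expose the scraped metrics on a Prometheus endpoint.
[OUTPUT]
    Name    prometheus_exporter
    Match   node_metrics
    Port    2021
```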
A
For example, disk every 10 minutes, but CPU every 10 seconds; you can add that to the Prometheus node exporter plugins and choose the interval for each. Process metrics, for both ecosystems: that's going to be a bit of a heavier lift, and we're going to try to figure out how we can get that plugin support. One of the ones we've been seeing more of is Google Chronicle, so we're going to be looking to get that going.
A
Well, we're trying to get access to some Google Chronicle to see how we can integrate, but if anyone here on the call knows it, that'd be awesome. Internal metrics: some of the ideas we have here are things like adding latency metrics, so from a chunk's ingest time to when it actually gets flushed. You can start to see the latency within Fluent Bit itself, in case there's heavy backpressure or, for example, an endpoint goes down or is rejecting a bunch of requests.
A
So that would be something useful, especially if folks are using those to do health checks, or trying to understand the usage across an environment. A v2 metrics API: this is kind of low-hanging fruit. We have Fluent Bit's internal metrics, which contain everything all in one, and you can use the Prometheus exporter and choose a custom port, but we just said: hey, let's just create a v2 API as well in case you're...
A
Using the, you know, the Prometheus metrics endpoint. This last one is one I ran into. I was just running the Fluent Bit node exporter and firing it at a cloud service, and I didn't realize how much cardinality explosion I had, how many labels I had, and it was not so delightful on my bill. And I thought: wow, it'd be great...
A
If Fluent Bit could tell me how much metric cardinality I'm scraping from the node exporter, or the Prometheus scraper, or anything like that. So we're trying to see if we can get some internal metrics around cardinality, which can then be used as you try to make some of those decisions.
A
This additional function support I didn't fill in, but the idea, at least to talk about it broadly, is: Lua right now is very encapsulated into a filter, and so bringing Lua into the input plugins has been something that we've been working on; we call it input plus scripting. Same thing with output filters, but also allowing Lua to hold data for longer periods of time without having to immediately flush that data.
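For context, today's Lua integration lives in the filter stage: you register a callback that receives each record and returns a verdict plus a (possibly modified) record. A minimal sketch, with made-up script and field names:

```lua
-- filters.lua: drop debug records and stamp the rest.
-- Return codes: -1 drop the record, 0 keep unmodified,
-- 1 keep with the modified record returned here.
function cb_filter(tag, timestamp, record)
    if record["level"] == "debug" then
        return -1, timestamp, record   -- drop entirely
    end
    record["processed_by"] = "fluent-bit-lua"
    return 1, timestamp, record        -- keep modified record
end
```

Hooked up with a filter section such as:

```ini
[FILTER]
    Name    lua
    Match   *
    Script  filters.lua
    Call    cb_filter
```

The roadmap item above is about letting this same kind of script run in inputs and outputs, and hold state across flushes, rather than being confined to per-record filtering.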
A
So we might be able to bump out more stream processing. There's going to be a lot of investigation into things like performance and how this would work, but I'm hoping it's going to allow for much more useful types of filters that work across a variety of data, versus just a particular message or record. And then changes within Fluent Bit core: things like YAML as a first-class citizen; additional metadata for logs (so today, logs really just contain tag, timestamp, and message, so having metadata to match the OTel specs); and hot reload.
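On the YAML-as-first-class-citizen point, Fluent Bit already accepts a YAML configuration format alongside the classic one; a minimal pipeline looks roughly like this:

```yaml
service:
  flush: 1
  log_level: info

pipeline:
  inputs:
    - name: cpu
      tag: cpu.local

  outputs:
    - name: stdout
      match: '*'
```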
A
So hot reload has been very highly requested: making sure that it works for select configurations. Plus the applicable scripting things that I mentioned, the Lua functions. And then the error handling, which we were just talking about: error handling when there's a bad output, and, within error handling, what do you need for failover, such as doing a secondary destination?
A
One exciting thing: we just added a bunch of support for Debian on RISC-V, so folks who are leveraging that architecture and are interested in using Fluent Bit, we now have support for that as well.
A
I will pause; I know I've been rambling for about five or ten minutes; but does anyone have any questions or things? And by the way, this is just one segment of the roadmap. I'm sure others have their own places they'd like to invest within Fluent Bit, so I'm happy to try to put it all into a single one as well.
A
Okay, let's go back. Okay, yeah: Henning, Richard, the metrics-from-logs PR. Yeah, this is awesome.
E
Yeah, yes, thanks for the opportunity to put this on the agenda. Actually, Richard and I are both new to this community meeting, so thanks for having us. We are two software engineers working for a German software company called SAP. We have been working with Fluent Bit for the past three years or so, and our logging stack consists of Fluentd, Fluent Bit, and Loki, and we use Prometheus for monitoring. And our users requested, at some point, more analytics on logs: accounting and log alerting.
E
Also, you know, the Loki queries we had were inefficient, and alerting is also not possible with that. So we looked for log-derived metrics, and since there was no such functionality so far in Fluent Bit, as far as we know, the idea came up to create our own plugin, and Richard wrote most of it. So, yeah, we're still currently testing it internally, but we created this PR already to get early feedback from you, from the community, and so here we are.
E
Basically, there were already some questions last time, in the last community meeting, that we couldn't address, because on short notice we couldn't join. Performance impact was one question: we didn't do any real measurements yet; from our side we didn't notice any obvious impact, and we're planning some tests. Yeah, maybe Richard can go more into the details on the technical side of the plugin, because he wrote it single-handedly, basically. Yeah, maybe, Richard, you can go more into detail.
C
Yeah, I think it would be great if you could just go through the questions here again, or the points that were written down. Performance impact: that's covered, yeah. What's meant by "log metrics filtering"? To filter the metrics again, or what's meant by that?
A
So this was a question around: if you're capturing a bunch of logs, and you actually want to filter based off of the metrics that you're then deriving. Not creating a brand-new stream of metrics, but even just saying: let me dump these logs a hundred percent; all I would care about is the metrics. I don't necessarily know if it's a hundred percent related to the PR; I think it's more about the pipelining of Fluent Bit itself. But maybe...
A
You could correct me if I'm wrong, Richard: you could take those metrics that then get derived and re-ingest them into the pipeline; they have their own tag; and then you take the original tag and you just send that to null, or something equivalent. But with that, that scenario would be fulfillable here.
C
Yeah, actually, with the tagging of the metrics I have some question marks, because I've not seen the use of tags in the code of the other metrics plugins; so the node exporter and the nginx exporter and so on.
A
Tagging for the metrics: I think it auto-assigns whatever the plugin name is. I wonder; there must be some tag already being generated here on the output side. We might be able to tell by just sending it to stdout, to print out the tag that the metrics are being assigned. But yeah, they essentially...
C
I haven't found it in the code, so, yeah, all right. There's only a little documentation about this whole topic, so it was quite hard to get acquainted with plugin creation, and especially with this metrics filter plugin. I think this...
C
This kind of pattern has not been used before: to have a filter which doesn't actually filter, but just creates metrics and puts them into another, metrics, stream. So I think that hasn't been done before, yeah.
E
So, generally, are we on the right track with the PR? Did somebody look at that yet in depth?
A
I think so. I believe Pat was doing some work with it, and we have someone who's assigned to take a look at this in the next few days, as we start to get ready for the 2.0.9 release. I think there was some work being done prior to this; I can't remember what the merge was, but I think it was some OTel bugs that folks were squashing. So I think folks should be freed up to start looking at this for the 2.0.9 release.
A
Let's just see; it's a great start. In fact, if I go back here to just recap some of the discussion from last time: when we were looking at this logs-to-metrics filter, things like, hey, the Lua filtering in its existing state doesn't actually allow you to do this extraction, so the extraction right now means you have to build this type of C plugin.
A
Can we make this easier and more fungible for users? That's where some of the Lua enhancement on the roadmap came from. The idea of cardinality of metrics: if we have a ton of metrics that are being generated, and our back-end system is not sized appropriately, nor is it potentially cost-efficient to send that much cardinality, can we do some filtering or dropping of those metrics? What type of things can we introduce? So it's almost like this has stemmed more from those metric discussions versus...
C
A quick interruption, on this cardinality thing specifically: wouldn't that be a point for the output plugin, more than for the filter plugin or the input plugin?
A
Totally. This is definitely not, like I said; I think we just shoved all the metrics discussion into this one point. Okay, yeah; but absolutely not something like "oh, the logs-to-metrics plugin should do cardinality". It doesn't make sense there, right, because we don't even know what metrics are being generated from those logs until they get generated.
A
Stream processing is: how can we do some more stream processing around metrics? Which again prompted some more discussion about: hey, I think we can do some of this in Lua. That is what folks kind of want to do, but with Lua right now you kind of have to hack it to send a message every five seconds if you want to do an aggregation every five seconds. And then, you know, some things around conditional logic.
A
Actually applicable to the filter. Let me go ahead and cut these ones out, and just see...
A
This was to say whether or not to extract; but this was like an enhancement request, not a must-have.
C
And what do you mean by conditionals? So, to make kind of...
C
At least for the regex filtering: because the plugin is mainly based on the grep filter. I took the grep filter as a template and built the rest on top of that, so the code is almost identical to the grep filter, plus the whole metrics thing. The basic idea was to have a simple plugin to at least count messages based on a regex, and then there are the other two modes.
C
So for sum and for gauge, yeah, we thought of having a property for a key, for a field, where you specify the key, and then we assumed that this key, this field, is always a numeric field, and then you can just sum it up, or take the value as a gauge value. And so we thought of having the regex stuff for this value maybe one step before that, with the plugin.
C
So there's maybe another plugin where we extract the value into another field, and then you just specify the field the...
C
Value is located in. But of course, that's just the first version, and for us, at least for our use cases, mainly the counter is relevant. So I don't know how relevant it is to introduce another regex, maybe, for extracting the numbers to calculate the metrics in these two modes. And I think histograms and summaries are also supported by cmetrics, as I've seen; that would be...
C
Something for the next version.
C
But in this version, it's just really simple, yeah.
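To make the modes just described concrete, a configuration for this kind of logs-to-metrics filter might look roughly like the following; the plugin was still under review at the time, so treat the option names as illustrative rather than authoritative:

```ini
# Counter mode: count records whose "message" field matches a regex.
[FILTER]
    Name                log_to_metrics
    Match               app.*
    Metric_Mode         counter
    Metric_Name         error_count
    Metric_Description  Number of log lines containing "error"
    Regex               message .*error.*

# Gauge mode: take a numeric field's value as the metric value.
[FILTER]
    Name                log_to_metrics
    Match               app.*
    Metric_Mode         gauge
    Metric_Name         response_time
    Metric_Description  Last observed response time
    Value_Field         duration_ms
```

The resulting metrics stream can then be routed to a metrics output (for example, the Prometheus exporter) independently of the original log stream.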
A
That's really cool; it's really, really cool. So, yeah, I think we should focus on getting it reviewed, getting it launched, and then, if both of you are up for it, maybe we do like a short webinar or something, and you could talk about the use cases.
A
I think that would be super useful to the community, and to the CNCF community in a much broader way. So, yeah.
C
It would be cool if someone who's really familiar with this whole cmetrics thing and so on could have a look at this code, because I'm totally new to this plugin, and accordingly to Fluent Bit, and so I would like to have some double-checking here. Yeah.
A
Any other things on the logs-to-metrics that folks had?
A
Oh, okay: the community also adding Fluent Bit to that? That's pretty cool. And I had one more topic. It was more on some integrations with the ecosystem, for OpenSearch and Elasticsearch.
A
I was working on a small converter, with Lua, to ECS: to take log fields and then convert them to ECS, and then doing some dashboarding there. I was curious if anyone has done anything in the past to conform to specific schemas with filters as they read data in with Fluent Bit. If folks have done anything like that, I'd be super interested.
A
I took ECS 1.0 as, like, the most simple example, and to date I've just tried to use the most crude mapping between what I saw within the standard Fluent Bit parsers and then converting it to ECS, or Elastic Common Schema. And it seems like that's the route a lot of log providers have started to go, with ECS, and with OTel even looking at ECS, etc.
A
So I was thinking about writing some just quick blogs, with some Lua script examples, on here's how you can do ECS with nginx.
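As a rough sketch of what such a converter could look like, the callback below renames fields produced by Fluent Bit's stock nginx parser (remote, method, path, code, size, agent) into their ECS counterparts; the exact source field names depend on the parser in use:

```lua
-- ecs_nginx.lua: map fields from the stock nginx parser to
-- Elastic Common Schema (ECS) field names.
function nginx_to_ecs(tag, timestamp, record)
    local ecs = {}
    ecs["source.address"]            = record["remote"]
    ecs["http.request.method"]       = record["method"]
    ecs["url.path"]                  = record["path"]
    ecs["http.response.status_code"] = tonumber(record["code"])
    ecs["http.response.body.bytes"]  = tonumber(record["size"])
    ecs["user_agent.original"]       = record["agent"]
    return 1, timestamp, ecs  -- replace the record with the ECS-shaped one
end
```

It would be attached with a standard Lua filter section (Name lua, Script ecs_nginx.lua, Call nginx_to_ecs) matching the nginx access-log tag.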
A
If folks are interested in that, I'd love to hear it; but otherwise, yeah, it should be coming soon, if folks are interested.
B
Yeah, sure, yeah; especially as paired, by the way, with what you're talking about for the Elasticsearch-style input, right. That could marry up very well, because a lot of how things work with the Beats architecture necessitates that there's an upstream pipeline that continues to convert the messages; there's very little...
B
That's done on the Beats side of things. So to convert them to ECS necessitates there to be processing done somewhere, and that's exactly the type of stuff I was looking at doing in Fluent Bit.
A
Cool, cool. Yeah, I'd be super keen to have a conversation on it, and then I could show (I don't know if you've seen the in_elasticsearch) how it's working and everything too, just to get your feedback.
B
Yeah, absolutely; we can schedule something.
B
I do have kind of a question. So, you know, we were talking about the metrics from logs a lot, and the discussion there is very much based off of essentially going through and creating new metrics, based off of rules, as logs are streaming through. Internally, or more core to the project, have you looked at anything more in terms of what it would take to convert a log-typed object to a metric? Because I believe internally they have different schemas, right?
B
They can't be used in the same types of filters and whatnot, for example. Is there any work being done to take something that may not come in structured appropriately as a metric, that we could then, you know, essentially transform into something that would become a metric, and then convert to the metric type?
A
So, yeah, actually, I think this is what Richard and Henning have already built: you take any arbitrary log, whether it contains just text, and then do a count on top of that with a regex. And, by the way, Richard, please keep me honest: or you take any arbitrary log, even if it's not formatted and just has a log line, and we take that raw text and add it as a field, and then you can extract out the metric from there.
C
Actually, I think the answer to the question is: I think there is the inverse of this, the output plugin for CloudWatch, with this, yeah, log; I don't know the name of it; there's a log format where the metrics are sent as a log, and so it's the opposite.
C
I think what you mean is: you have a log, and you want to interpret it as a metric, and then do some operations on that in Fluent Bit. Is that correct? I see.
B
So, and again, it might be that I'm not quite understanding yet; I've got to read more, probably, about what is in that plugin; but from here, again, what it looks like is you're essentially taking a stream. So, for example, in a situation where, like, you have that counter, right, you're going to be building a counter that's matching a certain situation, right? Yes...
C
That's right. So this is just creating metrics from the logs coming in: so you can count logs; you can, you know, take the value of a field of a log, and so on. So it generates, yeah, just metrics from the logs, but it's...
B
Yeah, so that's a little bit different from what I'm asking about, right, which is a persistent stream of, we'll keep saying logs, but, yeah, semi-structured objects, that would go through a potential conversion process and then, finally, you know, be blessed, if you will, as a metric in the appropriate metric schema underneath the hood. I don't know that that's something that can be easily accomplished here or elsewhere. Yeah, let's...
A
Let's see. Go ahead, Richard. Sorry, no, go ahead. But I was going to say: maybe what would be useful, just so (and I know we're almost against time here, too) is if there's a specific scenario, something like: "I'm getting this log from, say, nginx, and I want to capture this metric from it and send it as a Prometheus metric." I think maybe that is the missing piece for fully understanding what we can apply to solve that issue, or what the problem statement is, if you will.
B
I mean, I think the main use case is, right: the internal types of input for metrics right now are a defined set of entry types, right? So, you know, if, for example, we wanted something that was more generic that can be input as a metric; let's just say, in my use case, I have stuff that's, you know, bigger JSON items, right; that could then be internally converted into an appropriate metric item, to be treated appropriately and then be output.
C
So you're thinking of having a message with, yeah, let's say, a counter and a histogram and...
C
Objects inside, and then you want to convert that into a metric and, yeah, push it through. Yeah.
C
That's what I meant with this AWS output plugin: there is, yeah, this format where you send metrics as a log. So that's the opposite of what you mean. For CloudWatch, you mean.
C
I think that's the opposite of your idea here.
A
But yeah, I think we're starting the process of getting these arbitrary data formats of logs into: okay, now it is conformant with Prometheus and OpenMetrics via this filter, and then actually now you can even go to OTel metrics as well, if you put the OTel output here. So I think some of it is there, but we probably have to clean it up so it's super easy to understand, because, obviously, right, we...
A
All these different formats, all these different data types; it's confusing. But, you know, let me add: this is, like...
A
Yeah, obviously it's going to be something more ongoing. But also, yeah; hey, folks, I know we're a little over, but I really appreciate the time. If no one else has anything else, I think we can go ahead and wrap up. Thanks so much for joining.