From YouTube: Fluent Talks | 010 | Guest Speaker: Matthew Fala, AWS Software Engineer, Fluent Bit Contributor
Description
Please join us for Fluent Talks, our weekly webinar and office hours, streaming live on YouTube. Today we'll talk with AWS Software Engineer and Fluent Bit contributor Matthew Fala about some of the improvements he made to the Fluent Bit event loop.
#fluentbit #aws #observability
A: Hello everyone, welcome to this 10th edition of Fluent Talks, hosted by Calyptia. My name is Eduardo Silva, one of the co-founders and CEO of Calyptia, and this time we have a really special guest. As you know, we invest a lot in open source and we create products for observability, but in the open source space we actually work pretty closely with different companies and cloud providers.

One of the major contributors to Fluent Bit is AWS, and today we have a very special guest, Matthew Fala, who will join us to talk about his contributions to the project, what is new, and what his latest efforts were to make Fluent Bit more scalable for all users. Hi Matthew, are you there?
B: Yup. All right, thank you so much for the introduction. My name is Matthew Fala. I work for AWS, specifically on the Elastic Container Service team, in the observability section. For AWS, we're trying to find the best solution to send logs and metrics from the customer side over to AWS CloudWatch, S3, and other destinations like that, and we found that Fluent Bit is a really great option for this. It's something customers really like to use because of how lightweight it is and how low its resource consumption is. We've been really promoting it on our end through the FireLens product, which is a wrapper around Fluent Bit that makes it really easy to use with AWS ECS, the Elastic Container Service.

So I've been working really closely on the integrations, trying to make sure they perform well enough for our customers and trying to resolve any problems they're facing, working closely with the open source community and also with Calyptia. I studied computer science at the University of Southern California. I'm really excited to be here, to answer any questions we might have, and to guide you through the work I've been doing recently on Fluent Bit and at AWS.
A: Great, awesome. I actually have some personal questions. How did you get involved in C development? I think this is kind of a tricky question, because nowadays most people don't like to develop in C, and that's fine, but I would like to get your take on that. How did you get involved with the language and decide, yes, I want to contribute to this project?
B: Yeah, so I think in college we had a lot of good experiences with C, like the operating systems classes; they're generally in C, the very low-level stuff. I also did some game development in C. As a background, I worked with a lot of higher-level languages too, like JavaScript, Dart, Flutter, working on front-end applications, and during my internship I worked on some Java projects. So I have a fairly well-rounded software development background, but I do like C a lot because it's very low level and very performant; you get to work very close to the operating system level.

Joining Fluent Bit, and seeing what the team really needed here at AWS, it was really a lot of C development with Fluent Bit, so it was just about refining my skills. A lot of the skills I had in C++ in the past translate over pretty well, minus the classes part. I'd say Fluent Bit has been my first major C project.
A: Yeah, and that's great. I remember that from talking to Wesley, one of the Fluent Bit maintainers from AWS, because he was writing the initial connectors from Fluent Bit to AWS services in Golang; I think that was CloudWatch and S3 at that moment, and that's when AWS created its own Fluent Bit distribution with all these Golang plugins bundled. Part of the funny story is that, if I'm not wrong, we were in Vegas with Wesley, discussing it, and it was like, okay, why not?

I started crafting one from scratch, kind of a proof of concept. The good thing is that AWS has a really good unit test framework for that: given this kind of request, this is the expected output. I think I could pass like 80 percent of it or something, and I showed it to Wesley the next day and he got excited. He said, oh yes, let me see if I can write some code, and from there Wesley took it and created all these plugins.
A: So yeah, languages sometimes generate discussions, and I think C is perfect for Fluent Bit's use case, but there are many valid choices. That's really great. And for everybody who's watching this: when you use Fluent Bit, you might be using it at a different scale. Some people process a few hundred messages per second, others a few thousand, but at a really high scale, in the enterprise, there are very intensive use cases.

Actually, as of one year ago, Fluent Bit used to work in just a single thread, like a single process, and it was working fine. Then cloud providers started asking, hey, can we get more performance? Can we double this? Can we hit, I don't know, 200 megabytes per second? So we implemented threading in the outputs. When Fluent Bit takes the data, processes it, and passes it to the output plugins, passing the data means taking that binary data, converting it back to JSON or whatever the expected format is, and doing whatever else is needed to deliver the message. That is quite an expensive task. So the fix was to create threading on the output side, and that solved the problem for one year.
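Roughly, the idea is that the pipeline thread hands each flush off to an output worker so that serialization and delivery never block the main loop. Below is a minimal, illustrative C sketch of that pattern, assuming POSIX threads; the names and the single shared queue are made up for the example, and this is not Fluent Bit's actual implementation.

    /* Illustrative sketch (not Fluent Bit code): hand flush tasks to an
     * output worker thread so the main pipeline thread never blocks on
     * serialization or delivery. */
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct flush_task {
        const char        *payload;   /* encoded chunk to deliver */
        struct flush_task *next;
    };

    /* A LIFO stack keeps the sketch short; a real queue would keep order. */
    static struct flush_task *pending;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
    static int shutting_down;

    /* Called from the main pipeline thread: enqueue the chunk and move on. */
    static void enqueue_flush(const char *payload)
    {
        struct flush_task *t = malloc(sizeof(*t));
        t->payload = payload;
        pthread_mutex_lock(&lock);
        t->next = pending;
        pending = t;
        pthread_cond_signal(&cond);
        pthread_mutex_unlock(&lock);
    }

    /* Output worker: format and deliver chunks off the main thread. */
    static void *output_worker(void *arg)
    {
        (void) arg;
        for (;;) {
            pthread_mutex_lock(&lock);
            while (!pending && !shutting_down)
                pthread_cond_wait(&cond, &lock);
            if (!pending && shutting_down) {
                pthread_mutex_unlock(&lock);
                return NULL;
            }
            struct flush_task *t = pending;
            pending = t->next;
            pthread_mutex_unlock(&lock);

            /* stand-in for: convert to JSON, connect, send, retry... */
            printf("worker: delivered '%s'\n", t->payload);
            free(t);
        }
    }

    int main(void)
    {
        pthread_t worker;
        pthread_create(&worker, NULL, output_worker, NULL);

        enqueue_flush("chunk-1");
        enqueue_flush("chunk-2");

        pthread_mutex_lock(&lock);
        shutting_down = 1;
        pthread_cond_broadcast(&cond);
        pthread_mutex_unlock(&lock);
        pthread_join(worker, NULL);
        return 0;
    }

A real implementation would keep one queue per output, preserve chunk order, and handle retries; the sketch only shows the hand-off that keeps the expensive work off the main thread.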
A: And then Matt shows up and says, hey, you know what, I think there's some saturation in the Fluent Bit pipeline for our use case. It's like the queue of messages is not scaling up. All the events are mixed together; there's no differentiation between timers, scheduling, network I/O, or any other kind of event that can touch the event loop. Fluent Bit has a main event loop where the events arrive, and there are certain functions or plugins that need to take some action on them, plus the coroutines.
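As a rough illustration of what a single undifferentiated loop looks like, here is a small standalone sketch, assuming Linux epoll, timerfd, and eventfd (it is not Fluent Bit's event-loop code): a timer and a fake "socket ready" source land in the same queue, and whichever the kernel reports first is handled first, with no notion of priority.

    /* Illustrative only (assumes Linux): one event loop where a timer and a
     * fake "socket is readable" source share the same queue, first come,
     * first served. */
    #include <stdio.h>
    #include <stdint.h>
    #include <time.h>
    #include <unistd.h>
    #include <sys/epoll.h>
    #include <sys/eventfd.h>
    #include <sys/timerfd.h>

    int main(void)
    {
        int ep = epoll_create1(0);

        /* A periodic timer event source (fires every second). */
        int tfd = timerfd_create(CLOCK_MONOTONIC, 0);
        struct itimerspec its = { { 1, 0 }, { 1, 0 } };
        timerfd_settime(tfd, 0, &its, NULL);

        /* A stand-in for network readiness on an input or output socket. */
        int efd = eventfd(1, 0);

        struct epoll_event ev = { .events = EPOLLIN };
        ev.data.fd = tfd; epoll_ctl(ep, EPOLL_CTL_ADD, tfd, &ev);
        ev.data.fd = efd; epoll_ctl(ep, EPOLL_CTL_ADD, efd, &ev);

        for (int round = 0; round < 3; round++) {
            struct epoll_event out[8];
            int n = epoll_wait(ep, out, 8, -1);
            for (int i = 0; i < n; i++) {
                uint64_t val;
                if (read(out[i].data.fd, &val, sizeof(val)) <= 0)
                    continue;                /* drain the event source */
                if (out[i].data.fd == tfd)
                    printf("event: timer tick\n");
                else
                    printf("event: socket ready\n");
            }
        }
        close(tfd); close(efd); close(ep);
        return 0;
    }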
A: So it was just a really complex scenario, and Matt showed up with a scalability problem they were facing at AWS. It was quite interesting, and not just a problem: he actually came up with a solution as well, and that is one of the biggest topics of this interview, this technical session: to ask Matt, how did you find this problem? What kinds of approaches were you considering? How did you approach the problem? How did you come up with a solution, and what does the solution look like?
B
Yeah,
absolutely
so,
I
guess
the
kind
of
the
kind
of
depth
that
we
would
like
to
go
into
this.
Do
you
have
other
questions
for
me,
or
is
this
mainly
the
topic
of
our
discussion,
so
I
should
just
kind
of
get
into
presenting
sort
of
my.
You
know.
Research
and
my
findings
online
solution
as
well.
A
Yeah,
I
think
it's
quite
flexible,
flexible,
not
because
I
I
would
like
that
you
you
share.
How
do
you
approach
your
problem,
so
we
can
say
other
engineers
or
people
looking
at
this
session
understand
how
do
you
tackle
these
kind
of
problems?
How
do
you
think
about
it
right
and
how
do
you
come
up
with
a
resolution?
A
B: Absolutely. So yeah, just getting into the detection part: we had a lot of customers who were sending logs, and these are the bigger companies, on the bigger side. They're sending, like Eduardo was saying, maybe 200 megabytes per second to various output plugins, kind of at the same time, so Fluent Bit is highly saturated.

So we tried taking it to our side, and we started sending tons of data to Fluent Bit, maybe 50 to 100 megabytes per second of roughly one-kilobyte logs, and we noticed that we were seeing the exact same thing: maybe 15 broken pipe errors and 40 connection timeout errors in about five minutes, and that's not very good. What that means is that every time you see these errors, a network request is going to have to be retried, so you'll start seeing a whole bunch of logs being retried, and then maybe that means you need to be sending not just 50 megabytes but maybe 60 megabytes because of all the retries, and it becomes kind of a snowball effect, because eventually you're retrying so many logs.

That would just be so much data inside Fluent Bit that you start to see bigger problems. With low throughput we saw that Fluent Bit is very stable, but with high throughput we saw a lot of network problems and connection timeout problems, and this was a big customer struggle for us, especially for our larger customers. That's why we really wanted to do a deep dive and target how we could solve this and help our customers through these issues.
B: So we had a couple of ideas on what could be causing it. First, we thought maybe it was the API; Firehose was one thing customers were sending to. But we talked to some other teams and some other people who use Firehose, did some testing, sent things to Firehose ourselves, and realized that Firehose is very reliable: we didn't see these connection timeouts or broken pipe errors there.

The next thing we thought was that it was a Fluent Bit timeout problem, so we actually published a PR to adjust the timeout, to resolve what we thought was the issue. It turns out the timeout was not flawed; the timeout was working fine. So the last thing we thought could have been the problem is that coroutines are actually hanging.

What that means is that when you make a network connection attempt, it starts the network connection, the network activity completes very quickly, but then the code stops. It pauses for a while, maybe one second, five seconds, or even ten seconds, and finally, when it comes back, the network connection that was made to whatever upstream source it is, which could be Firehose or S3, is stale. We found that this was what was actually causing the problem, so we really wanted to dig into what exactly the issue was. This is just a picture of what we found.
The network activity, if you were to monitor the packets, goes by very quickly, like a hundred milliseconds or less, but then the code that made that network request sometimes takes over 10 seconds to resume, and that amount of time, when you're trying to do a couple of different requests at the same time, is long enough to cause some of these broken pipe and connection timeout errors. So that was the problem, and you can see it just by looking at the data.

We graphed the broken pipe and connection timeout problems over time, and you can see they're very highly clustered; you see them in several different clusters, like five different clusters for the broken pipes. We also mapped that against the number of coroutines running at the same time, and we realized there's some correlation between the two. So we were trying to really get to the bottom of it, and in order to get to the bottom of everything we had to dive deep into the event loop.

I don't know how deep we should get in this conversation, but we can definitely go through some parts of it. We are going to give a more in-depth talk about the event loop at FluentCon, so if you'd like to sign up, I think there still might be time; we'll dive really deep into the event loop, some of these problems, and more details of how we solved them. But maybe we can just get into a high-level perspective of what the event loop is and some of the problems. What do you think, Eduardo?
A
Yeah,
this
is
like
kind
of
a
we're
going
to
make
some
examples
right.
I
don't
know
if,
when
you
go
to
take
a
an
airplane,
you
go
to
the
supermarket
the
different
kind
of
lines
right
and
the
way
you're
going
to
you
need
to
do
to
provide
a
good
service
to
the
people
who's
there
right
on
this
case
are
the
events
right.
A
So
the
event
loop
was
deciding
well,
there
was
no
order
right
first
in
first
out,
and
sometimes
that
order
was
a
bit
of
a
mess
and
the
all
this
even
dream
pro
programming
and
the
event
loop
side
in
influent
bit
actually
was.
The
workflow
is
like
this,
for
example,
if
you
need
to
send
data
to
somewhere,
we
check
the
data
in
I'm
going
to
just
talk
about
the
output
site
as
a
context,
so
you
got
the
output
plugin
and
you
have
a
flash
callback.
A
That's
flash
callback
is
in
charge
to
do
the
whole
operation
to
deliver
data
right.
The
first
approach
from
asynchronous
programming
that
we
did
is
like
every
flash
callback
runs
in
a
different
coroutine
where
in
a
necrodin
or
a
lightweight
thread
or
whatever
you
wanted
to
call
it,
and
the
interesting
concept
is
like
the
flush.
Callback
can
have
many
operations,
but
imagine
that
you're
going
to
do,
for
example,
an
http
request,
because
you're
going
to
connect
to
an
http
endpoint.
So
what
are
the
steps
right?
A: The first one will be to try to connect to the host, and that implies doing a DNS resolution. Once you know the IP address and you want to connect, from a code perspective you need to perform the TCP connection, and once that is open, maybe after that comes a TLS handshake, because you're going to use a secure channel, and when that is established, then you can start transferring data. In all these steps there's always one pattern: you ask the kernel to do something, and pretty much you just sit and wait until the kernel returns back to your code, and then you continue executing.

By having coroutines and an event loop, what we try to do when we perform this I/O operation, or maybe a DNS query, is to hand the work to the kernel to buffer the data and not just sit and wait. From the coroutine's perspective, we return to the event loop to continue doing other work, and once that operation has been completed, we return to the same code. This happens in a very smooth way; the user does not even know when a coroutine has been suspended or when it has been resumed, it's pretty transparent. But as you can see, in just one output plugin there are many events happening, and if you have multiple output plugins, you have multiple flushes and retries.
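A tiny sketch of that suspend-and-resume pattern, using POSIX ucontext as a stand-in for Fluent Bit's own coroutine layer (the function names and the three fake I/O steps are illustrative only): the flush callback yields back to the loop wherever it would otherwise block, and the loop resumes it when the pretend I/O completes.

    /* Illustrative only: a flush "coroutine" built on ucontext that yields
     * to a toy event loop at every point where it would otherwise block.
     * Fluent Bit uses its own coroutine layer; names here are made up. */
    #include <stdio.h>
    #include <ucontext.h>

    static ucontext_t loop_ctx;   /* the event loop       */
    static ucontext_t co_ctx;     /* the flush coroutine  */
    static int co_done;

    /* Suspend the coroutine and hand control back to the event loop. */
    static void co_yield(void)
    {
        swapcontext(&co_ctx, &loop_ctx);
    }

    /* A pretend flush callback: each blocking step yields instead. */
    static void flush_cb(void)
    {
        printf("flush: DNS lookup submitted, yielding\n");
        co_yield();                      /* kernel resolves the name     */
        printf("flush: TCP connect submitted, yielding\n");
        co_yield();                      /* kernel completes the connect */
        printf("flush: TLS handshake submitted, yielding\n");
        co_yield();                      /* handshake finishes           */
        printf("flush: payload written, done\n");
        co_done = 1;
    }

    int main(void)
    {
        static char stack[64 * 1024];

        getcontext(&co_ctx);
        co_ctx.uc_stack.ss_sp   = stack;
        co_ctx.uc_stack.ss_size = sizeof(stack);
        co_ctx.uc_link          = &loop_ctx;   /* return here when done */
        makecontext(&co_ctx, flush_cb, 0);

        /* Toy event loop: each "I/O completion" resumes the coroutine. */
        while (!co_done) {
            printf("loop: I/O ready, resuming coroutine\n");
            swapcontext(&loop_ctx, &co_ctx);
        }
        return 0;
    }

The switch is invisible to the flush code itself, which is the transparency described above: it just calls what looks like a blocking step and wakes up when the result is ready.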
A: If you cannot attend in person, we are going to stream all the sessions virtually, so please sign up for FluentCon and you will get access to this great presentation. And actually, before I let you go, because I know you have a very tight schedule today, and it's a Friday for you, I wanted to ask: how was your experience starting to contribute to the project?
B: Yeah, absolutely. I think a lot of it was just testing; we really needed to find a better way to test Fluent Bit, because just writing some commands in the terminal to send TCP packets to Fluent Bit wasn't really going to scale well, especially if we wanted to send very high-throughput logs and monitor problems, or add some instrumentation to Fluent Bit. So we built out this test framework called FireLens datajet, and we also have another test framework as well. FireLens datajet allows you to write down, in a single JSON file, a whole bunch of tests, and it separates the problem of generating test data from the problem of sending it, so you have data generators on one side and the data-sending side on the other, and you can have these groupings connected together, either in a synchronous fashion or an asynchronous fashion. You can tell it to repeat several times, and you can also wrap these configurations and Fluent Bit itself with executors.
B: I think at one point you were asking me, oh Matt, could you run all the tests again with one worker instead of zero workers or five workers, because that's how we had been testing things before, and I think if we had been doing all that testing manually, it could have taken like a week or something. But just because we had this system in place, we said, okay, great, we'll just change some configuration values in our testing framework and run it again. We waited overnight, came back in the morning, and had all the data ready for us to analyze, and so I was able to ship to you guys, and also to my manager, the data on one worker, like 14 graphs or something, and we could take a look at it together and analyze those results.

I think that leveraging tools that help you analyze things and make testing more reliable and sustainable is one of the biggest ways we were trying to make the code we write rigorous and well tested. Writing strong unit tests is another big part of this process, and code review as well. So there are several different pillars to this: testing, unit tests, and also code review. Those are kind of like the three pillars of open source code resilience and stability.
A: But I think that when you join and contribute to a project like this, where you have to deal with data movement, coroutines, threads, and networking, at least for me it's really interesting, because you need to touch different parts and understand how the operating system works, because at the end of the day you want to provide a really good experience.
B
I
have
that
analogy
right
yeah.
We
have
that
analog
the
thought
framework
analogy.
I
don't
know
if
you
want
me
to
share
that,
but
maybe
that
would
help
some
of
the
developers
who
are
out
there
but
yeah.
Please
go
ahead.
Oh
no!
No
vote,
please!
Oh
yeah,
so
yeah,
one
of
the
things
I
shared
with
eduardo
was
kind
of
like
that
thought
framework.
We
had
like
an
an
analogy
that
we're
using
here
at
aws
to
really
think
about
you
know
the
event
loop
and
the
co
routines
and
yeah.
B
I
think
that
when
we
talk
about
these
things,
just
from
a
higher
level
perspective,
it
can
get
kind
of
like
lost
between
you
know.
Like
blocking
operations,
you
know
asynchronous
events,
threads
coaching
cpu
and
all
these
things
can
get
a
little
bit
convoluted.
So
we
kind
of
created
an
analogy
that
allows
us
to
think
of
the
solutions
and
the
problems,
all
kind
of
in
a
very
simple
way
that
everyone
can
understand
whether
you're
working
on
snippet
or
not
and
so
yeah.
B
If
you
want,
if
you
want
to
hear
the
analogy,
it's
pretty
quick,
it's
just
really
the
following
here.
So
you
have
people
who
are
standing
in
line
to
work
on
a
forum,
that's
on
a
desk,
and
you
know
have
some
people
who
are
standing
at
a
phone
booth
and
they're
phoning
friends,
and
that's
really
it
and
so
kind
of
talking
about
the
fluid
bit
system.
What
is
what's
going
on
here?
Well,
the
desk
is
like
the
cpu.
B: The forms are like the code you're working on. Eduardo mentioned how we have an event-driven programming paradigm in Fluent Bit, where you have coroutines that are just waiting to get the CPU so they can start working on their code. That's like these people who are standing in line; every single person is like a coroutine. So you have these people standing in line: they're the coroutines of Fluent Bit.

You have the desk, which is the CPU. You have the forms, which are the coroutine code that the coroutines are waiting to work on. You have the people who are phoning a friend: these are the coroutines that are suspended, waiting on a blocking or asynchronous call, a network call or something; they're at the phone booth, and that's the not-ready list. All these people making blocking calls go wait in some not-ready list; Fluent Bit has this concept as well. And finally, you have the line, which is the FIFO. Eduardo mentioned the FIFO event loop; this is what we had in Fluent Bit 1.8, before we added some new changes to introduce prioritization. In 1.8, every single coroutine was standing in just one line for the CPU. It doesn't matter how important you are.
B: It doesn't matter whether you've started your work or not; everyone waits in the exact same line to get to the CPU. So let's take a look at what happens. You're a coroutine, you have some work to get done, maybe it's sending information to Firehose. You get your code at the CPU; just one coroutine works on it, everyone else is suspended, everyone else is just waiting in this line.

Then the next coroutine gets up to the desk, gets switched in because it sees that the other person has left, accesses its code, starts working, and the process continues. And finally, when a phone-a-friend, a network call, completes, that coroutine gets to stand back in line; it has to wait for the CPU again. So this is the entire Fluent Bit process of making network calls: these coroutines waiting for the CPU and getting to run their code, in almost a multi-threaded fashion, except there's only one thread and only one of these coroutines is doing work at any given time.

So, like Eduardo mentioned, there's an issue with this FIFO queue. Let's take a look at what that issue is, because this is where the real problem that we were talking about earlier comes from.
B: This is the root cause. You have all these people waiting in the same line, and unfortunately, because there's no prioritization, this line gets extremely long, and it doesn't matter who you are, you have to wait in the exact same line. The issue with this system in general is that if you make a network call, stand back in line, and take too long to get back to the CPU, then you forget what your friend told you. You get a broken connection, or a broken pipe, or a connection timeout. So, like Eduardo was saying, you might start some SSL or TLS connection, and you have to make some more network calls after that, but then you get a broken pipe because you waited too long to get to the CPU after finishing your network call.

This is what we were talking about from the customer standpoint, where you finish your actual network activity, but then it takes like 10 seconds for the code to start running again, because it's waiting in this line to get to the desk. So when you finally get to the front of the desk, you have issues: a broken pipe or a connection timeout.
B
So
where
are
these
people
coming
from
well
they're
coming
from
kind
of
the
input
plug-ins,
so
there's
just
some
some
list.
It's
we're,
calling
it
the
hold
line
and
like
eduardo
mentioned,
there's
this
flush
operation
right.
The
flush
takes
everyone
from
the
whole
line,
everyone
from
these
kind
of
like
input
lists,
and
it
just
moves
them
all
to
the
same
line,
and
you
can
see
this
line
gets
very
long
and
we're
seeing
maybe
sometimes
it
gets
from
like
60
to
100.
B
You
know
different
proteins,
they're,
just
waiting
in
the
exact
same
line,
and
that
takes
a
big
you
know
toll
on.
You
know
how
long
it
takes
to
get
through
this
waiting
waiting
list
to
get
to
the
cpu
all
right.
So
how
are
we
going
to
resolve
this?
B: Well, prioritization is the answer. We added a new policy. We thought it would be a good idea if, instead of taking everyone from the hold line and moving them to the desk line whenever they show up, which means that every time you get some new data from an input you move it to wait for the CPU and make the CPU line super long, we instead tell you to hold on. Let's wait until everyone in the CPU line finishes; if you haven't started yet, if you haven't started your code yet, just wait, and once that CPU line gets down to zero, because all the work is completed, then we'll start admitting you. That's where the priority queue comes into place, and it prepares us for the solution we added, which is to add some priorities.

So if you're a coroutine that's inactive, meaning you haven't started yet, you're waiting to be flushed, we'll give you a priority that's lower than the coroutines that have already started and are maybe doing some network calls, maybe waiting to get back to the CPU after they finish phoning a friend. So the ones on the left and the ones on the right have different priorities. This is what we added.
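A minimal sketch of that two-level idea, illustrative only and not the actual event-loop code: coroutines whose network call has already completed sit in a higher-priority queue than freshly flushed ones that have not started yet, and the scheduler always drains the higher level first.

    /* Illustrative only, not the actual event-loop code: two FIFO levels,
     * and the scheduler always drains the higher level first. */
    #include <stdio.h>
    #include <stdlib.h>

    enum prio {
        PRIO_RESUME = 0,   /* already started, its network call completed */
        PRIO_START  = 1,   /* freshly flushed, has not started yet        */
        PRIO_LEVELS = 2
    };

    struct task {
        const char  *name;
        struct task *next;
    };

    static struct task *head[PRIO_LEVELS], *tail[PRIO_LEVELS];

    static void enqueue(enum prio p, const char *name)
    {
        struct task *t = malloc(sizeof(*t));
        t->name = name;
        t->next = NULL;
        if (tail[p]) tail[p]->next = t; else head[p] = t;
        tail[p] = t;
    }

    static struct task *dequeue(void)
    {
        for (int p = 0; p < PRIO_LEVELS; p++) {
            if (head[p]) {
                struct task *t = head[p];
                head[p] = t->next;
                if (!head[p]) tail[p] = NULL;
                return t;
            }
        }
        return NULL;
    }

    int main(void)
    {
        /* A flush drops many brand-new coroutines into the low level... */
        enqueue(PRIO_START,  "new flush #1");
        enqueue(PRIO_START,  "new flush #2");
        /* ...but one whose network call just completed jumps ahead. */
        enqueue(PRIO_RESUME, "resume: upstream response ready");

        struct task *t;
        while ((t = dequeue()) != NULL) {   /* prints the resume first */
            printf("run: %s\n", t->name);
            free(t);
        }
        return 0;
    }

Even this toy version shows the behavior described above: the resumed coroutine runs before the new flushes, no matter how many of them the flush operation enqueues.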
B: We gave the people who made their network calls, and are waiting to get back to the CPU to complete them, a very high priority, so they'll be prioritized, and we gave the people who are inactive, the coroutines that haven't even started yet, a very low priority. By doing this we make sure that the line on the right-hand side stays very short, and we saw tremendous results.

Skipping to the results: we saw, across the board, something like a 96 to 100 percent improvement. We saw connection problems going from 51 errors to two errors, or 51 connection timeout errors down to one. We saw the number of active coroutines going down by a factor of maybe three, and you can see from the earlier data that there was a very high number of coroutines, maybe around 30, and when you see a big spike in coroutines you see a big spike of errors following it, in these red boxes. But with the new solution we're often seeing, across the board, 10 or fewer coroutines and almost no errors; a tremendous improvement. You can see we only have like three errors here, whereas in the past we had a ton of errors, in the blue and silver. So yeah, we're seeing tremendous improvements all across the board.
And the reason why is because we added these two different lines. So what's the new paradigm? I guess the old paradigm was: start all the work as soon as possible. If you have a bunch of work to do, start it all as soon as possible. The new paradigm with the priority queue is: complete the work that you've already started. So if you've started some work and you're doing a phone-a-friend, making a network call, you get to finish it first.

We think this is something that's really going to help in the future, to make Fluent Bit more scalable, just so that we can keep the latency between waiting for the CPU and actually getting the CPU at a minimum. I think that's the improvement Eduardo was alluding to, and I have some slides to help, but we'll go into a deep dive of exactly what this means at FluentCon, and yeah, back to this video. All right.
A: So if you are using 1.8, the performance of 1.9 will be an improvement on what 1.8 was doing. And actually, I think this prioritization idea works for life as well as for code. The analogy is really good: if you're doing one thing, just finish it, don't jump into other things; manage your priorities. That's really, really good.
B: Yeah, I think the key is really to make a lot of proofs of concept first. Make very rough proofs of concept that prove the idea is viable, and then polish it. But I would say get the proof of concept first, show it to whoever has the decision-making power, like yourself, Eduardo, or Leonardo, or other people at Calyptia, to get them on board and to point you in the right direction for getting from the proof of concept to the actual final solution. A mistake I've made in the past is really trying to perfect something first and then showing it to people, and then having to realize, okay, it doesn't completely align with the vision of Fluent Bit, and having to revise and re-polish things. So I think proofs of concept are really great, because they allow you to not invest too much time, just to sketch a certain idea, and then you're totally willing to throw the idea away, perfect it, or change it completely before getting to that final result that we're willing to add to Fluent Bit, to make it exactly what everyone has in mind for the vision of this whole community.

So I would say proofs of concept are definitely a good idea, because, like we were showing in the beginning, we thought the problem could have been three things: an API problem, a timer problem, or the event loop issues, and we actually did create a proof of concept for each of those different problems. We had some Postman requests for testing the Firehose API.
B: We actually made a PR for the timer thing. With the timer, we noticed that the timer events would get queued in that event loop just like all the other things, and we noticed something kind of weird: if a coroutine was queued at, say, one second, but then actually got run after 100 seconds, the timer might judge it incorrectly; it might think that it's 100 seconds out, even though the event was queued after one second. So we published a PR to say, okay, instead of tracking when the code gets run, let's track when the coroutine gets added to the event loop.

But it actually did matter that the coroutine was getting run at 100 seconds rather than at one second, so in one sense the timer looked incorrect and we were right to suspect it, but that behavior of the timer actually makes sense: the timeout should be tracking when the coroutine is running, not when the coroutine gets added to the event loop. So that proof of concept showed that it's not beneficial to track when the coroutine was added to the event loop; it is beneficial to track when the coroutine gets run, and that's what it was already doing. So we got to quickly throw that idea away, put that PR in our archives, and move on to the next problem we thought it could have been, which is the event loop hanging, and our prioritization solution.
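A toy illustration of that timeout question, with purely hypothetical names rather than the PR's code: measuring the timeout from the moment the coroutine actually started running, instead of from the moment it was queued, keeps a long wait in the run queue from being mistaken for a slow upstream.

    /* Illustrative only (hypothetical names, not the PR's code): measure
     * the timeout from when the coroutine started its network call, not
     * from when it was queued behind everything else. */
    #include <stdio.h>
    #include <time.h>

    struct conn {
        time_t queued_at;    /* when the flush coroutine was enqueued   */
        time_t started_at;   /* when it actually began the network call */
    };

    static int timed_out(const struct conn *c, time_t now, int timeout_s)
    {
        return (now - c->started_at) > timeout_s;
    }

    int main(void)
    {
        time_t now = time(NULL);
        struct conn c = { .queued_at = now - 100, .started_at = now - 1 };

        /* Queued 100 s ago but only running for 1 s: not a timeout. */
        printf("timed out: %s\n", timed_out(&c, now, 10) ? "yes" : "no");
        return 0;
    }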
B: So I think a proof of concept is probably my best piece of advice for anyone who wants to contribute.
A: It happened to me, yeah, it has happened to me the whole time, and sometimes with these PoCs it's like, oh, should I write it, should I not write it? But it's a really good way to validate things. It's like unit testing: I don't like writing unit tests, but every time I write them I find my own bugs, and then I'm thankful for having written them. So PoCs are really interesting. Actually, most projects start as a PoC.

The author of Fluentd had already written the MessagePack serialization library at that time, so he just created this prototype in Ruby, it started working well, the project started to grow and grow and grow, and well, Fluentd is everywhere nowadays. The Fluent Bit story is a similar thing. It was like, should we write something in Go? I'm talking about seven years ago, so something insane, and I said, let me prototype something in C; and people said, maybe not, because that might take so much time.
A: Okay, so I just took a week and wrote the first Fluent Bit. I had already written some event loop stuff before, with a web server, so I had many components; it was just about, hey, let's collect an input metric from the kernel, something simple, stuff that exists in the file system, because those metrics are in the file system, and then send it out to standard output, with an event loop collecting on a timer and sending it out. You can see that in the first version of Fluent Bit, and yeah, that PoC started to grow and grow and grow, it became a solution, and here we are after a couple of years. Many people will tell you, oh, don't waste time on that. Actually, that's true in general, for life, not just software code: PoCs are a great way to do it.

When we were at some conference at the beginning, an open source conference, showing Fluent Bit, there were people from some very well-known companies. I remember one guy from a company that wears a red hat asked, how many lines of code does the project have? At that moment it was like ten thousand or so, and they said, oh no, no, it's not ready yet, it needs more time. So there are different ways for people to measure things, but I think one of the keys is just to be persistent with this, creating PoCs, trying to prove and fix the problems, and so on. But yeah, I guess that's it.
B: That's a good question to have in general for the Fluent Bit community, because Fluent Bit has been something that's just so lightweight, but it seems like the vision for Fluent Bit going into the future involves adding all these things like Lua adapters and JavaScript engines. I just wonder about Fluent Bit's direction: is it heading towards more of a full system that's less lightweight and has a whole bunch of features, or do you see, in the future, in the vision, keeping it extremely lightweight, with zero dependencies, kind of like what you had envisioned, I guess, looking back?
A: There was a time when Fluent Bit was not yet well known in the cloud space, but it was the time when, in Europe, GDPR was hitting. GDPR brought a lot of policies and security requirements in Europe, where you have to obfuscate data; you cannot share, for example, credit card transaction numbers in certain scenarios, or move them over the network if they are not encrypted, things like that. So I was at a conference talking with the early Fluent Bit users, and one of them, actually two of them, from a company from Germany, told me, hey, we are testing out Fluent Bit, we're moving some logs from our system, but we have a problem: GDPR is coming and we need to obfuscate the data, because some fields are credit card numbers. I said, maybe write a plugin, maybe a plugin to obfuscate the data, and they said, yeah, but sometimes we have conditionals, like if this record contains A, B, or C, we have to obfuscate it this way, otherwise take some other action. And I was somewhat familiar with Lua, so I thought, why not do some Lua scripting? And that's how it started: you listen to the user, because it was not my idea. The user tells you, hey, this is my problem, and his problem was not really just how to obfuscate data.
A: Why did we add a Kubernetes filter, or tail? That wasn't there from the beginning; it came after one or two years. Why can we send the data to Elasticsearch, this new fancy database, talking years ago? All of this is feedback from the users. It was never a grand vision of "let's do it this way"; for all these years it has always been a continuous interaction with the users.

Secondly, you can add scripting, which is like attaching filters but running them in a separate context, or maybe run a SQL query, because that is for the user: the user is getting the data and they can say, I just want keys A, B, and C, no more. We support SQL, so we can do that very easily. So where we're heading is that Fluent Bit is behaving like a platform for data processing.

It started as a kind of agent for log forwarding, but now it's not just logs: for a year now we have been doing metrics collection, we are replacing some of the Prometheus scrapers, and we can connect with Prometheus. Now we have started supporting OpenTelemetry. Because one problem in the industry is that few people care about performance.
A: Well, most people care about performance, but few people execute on optimizing the software to run at a very high scale, and I think the tools around are not well optimized for performance. They do the job, and they're optimized up to a point, so when they hit a certain scale they get stuck, and then they come to Fluent Bit and say, hey, can you do this in this way, or improve that? Yeah, we can.

And it's not just because it's written in C; I think it's an architectural thing, because you can write slow code in any language, that's easy. But the fundamentals of performance in Fluent Bit have been there for years: always high performance, always optimize; if you can cache memory, don't free and re-allocate it; try to do it right.

So where we're heading, I would say, is stream processing, like a platform. Right now at Calyptia we are creating a whole new interface to create Golang input plugins for Fluent Bit, and people say, oh really, is that possible? Yeah, we're working on that right now, and this is pretty much about providing more options to the users, where they can create their own plugins without learning C. And sometimes, as happened with Wesley, you write the stuff in Go and then migrate everything to C, not because of Go versus C, but because Golang has its own API, its own interfaces.
B: And WebAssembly, yeah, very, very amazing. I think that's such an awesome paradigm to have: expandability, adaptability, and just allowing the users to totally customize things.

I was recently hearing something about how JavaScript added so much to the web experience, because before that people couldn't add scripts and extensions to the web experience with just HTML and CSS, and having JavaScript there to give people a customizable experience just changed the game. I think for Fluent Bit it's a very similar concept, where you have this really well-built-out pipeline for data, a structure and a framework for processing that data, but you're also giving the users that JavaScript-like experience of being able to write their own code to modify that data; whatever they want, they can add it in. And just like things such as jQuery or TypeScript end up influencing JavaScript in general, the standards, I guess that's kind of what you're saying, where people can bring things from Go and put them into C to make them super performant and baked into the platform. But I think it's so nice to know that it's so extensible that anyone can add code to modify this pipeline and get their custom experience. I think that's such a powerful idea and a powerful concept, and a great direction for Fluent Bit in the future.
A: I know that some people tried this some time ago and they found some serious performance penalties, but sometimes you need the PoCs to make it work first and then optimize. We all know that premature optimization does not work all the time, but it would be really interesting, really interesting, to see this WebAssembly stuff taking off at some point.
A: Okay, so I think we extended this Fluent Talk a little bit. I'm really thankful that you could join this session. And for everybody: please join us at FluentCon in May, and also these Fluent Talks sessions are happening every Friday, usually at 2 p.m. Pacific, although sometimes we switch the time based on speaker availability, so feel free to connect and send your questions in the chat. Well, Matt, thank you so much again for joining this session, and we hope to see you soon in Valencia.