From YouTube: 2021-04-28 meeting
E
Yeah, so I guess you can see our meeting doc, where we put all our discussion items. Out of three items, this is the first one: I opened an issue for writing a new processor, which is kind of a metrics generation processor.
E
It should generate new metrics from existing metrics, and the development will take maybe a couple more days to finish. I started the first PR here, which only covers the skeleton and the configuration, which gives an idea of how it's going to look. This is the first PR, and another couple will come maybe this week, but here is the PR. I would say maybe we can have a look, and I just want to mention:
E
The previous plan has changed a little bit; two important things just to note here. So, what it does: it gives us the option of creating a new metric from existing metrics. There are two approaches it currently supports in the first version. One is that it can create a new metric using two existing metrics, applying a given rule, which might be an arithmetic operation, one of five.
E
These are add, subtract, multiply, divide, and percent, and one example is calculating utilization, which is the kind of practical example I am looking for. So how do we calculate this: the generation type is "calculate", and it will take two operand metrics, the first operand and the second operand, and it will just divide; maybe we can just use these options. A rough sketch of such a rule follows below.
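For illustration, a minimal sketch of what such a "calculate" rule could look like; the processor and key names here follow what is said in the discussion and may differ from the draft PR's actual configuration:

```yaml
processors:
  metricsgeneration:
    rules:
      - name: pod.memory.utilization
        generation_type: calculate   # derive a new metric from two existing ones
        operand1_metric: pod.memory.usage
        operand2_metric: node.memory.limit
        operation: divide            # one of: add, subtract, multiply, divide, percent
```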
E
And the other option is: sometimes we need to change the metric, or scale the value up or down. I also just want to mention that I had a chat offline; I think someone is also working on this and created a PR for the metrics transform processor, so this is also necessary for me, and this will also cover that stuff. Here there is another generation type, which is "scale": it operates on only one metric, and we can scale the value by any constant value.
E
Say, for example, from bytes to megabytes: we get a new metric in megabytes which was in bytes, or something like this. Or maybe, okay, maybe I miscalculated here, it should be... oh no, it's correct, yeah. So this is another example, of scaling a metric and creating a new one; see the sketch below. So these are two different approaches, and it supports different operations based on the rules. So here is the PR; I have updated the README and the rest.
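Again purely illustrative, a "scale" rule under the same assumed configuration shape:

```yaml
processors:
  metricsgeneration:
    rules:
      - name: pod.memory.usage.megabytes
        generation_type: scale        # derive a new metric from a single existing one
        operand1_metric: pod.memory.usage.bytes
        scale_by: 0.00000095367       # 1 / (1024 * 1024): bytes -> mebibytes
```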
E
So I would say maybe we can take a look offline and put comments; then, if it's good, I will send the implementation PRs later.
A
E
I think we talked about this earlier, about looking into the... So, if I understand your question correctly: we talked about doing something like PromQL, a query, more generic and more robust. But I guess here is the issue link, where I opened it and made my comment. Okay, so it seems like we had the earlier discussion, and it seems like, after exploring, I found it overly complicated; it's definitely more robust, but complicated. It would also take longer to finish all the steps.
E
So then, in one of the SIG meetings, we decided to stay with the simple but definitely workable and well-accepted solution rather than, I guess, the more generic one; here's the comment, maybe, I don't know. So I think that was the decision in the SIG meeting: for now this is simple, but generic, and also not tightly coupled to tight rules, I guess. I think that was the...
F
I have a question about the... if, you know, the blocking question is finished. Why is scale a separate thing? Why can't we just have, for example, an operand-one value and an operand-two value, and then, you know, you can put, say, one million into one of those operands, and so you can, you know, multiply and divide by a value if it's provided, otherwise...
F
I mean, we don't have to complicate the generation type and have scale as an option and stuff like that; we may have multiple ways of providing operands. It could be either a metric, which is represented by operand1_metric, or it could be, you know, a number value, operand1_value; the same applies to the second operand as well, and based on whatever is provided, we can do the calculation. See the sketch below.
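A hypothetical sketch of the alternative being described, where each operand is either a metric or a literal value; these key names are invented for illustration, not existing configuration:

```yaml
processors:
  metricsgeneration:
    rules:
      - name: pod.memory.usage.megabytes
        operand1_metric: pod.memory.usage.bytes  # first operand: an existing metric
        operand2_value: 1048576                  # second operand: a plain number
        operation: divide                        # no separate 'scale' type needed
```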
A
So that's going to force you to have to write some lexer and parser for that expression, but I think it could be nicer to write, if...
F
Why do you need to write a lexer? You don't have to; it will be just a number.
F
Anyways, this configuration just kind of looks confusing to me, the scale one, especially as a user. That's why I kind of suggested that maybe we can simplify it, but...
E
Yeah, I was wondering... I thought maybe in the future we might have more generation types and maybe more calculations, because these are supporting only, like, kind of very minimal options. So I was wondering what happens if, in the future, we come up with a different type of generation idea. So I was thinking maybe the generation type is useful, and maybe we can just keep it, just make it more generic. I don't know; and also for this one, yeah, I think, yeah, there are different opinions.
A
I think we can explore and improve this; the idea of having this transformation, or whatever it's called, processor is good. I would just poke a bit of fun at you: megabytes to bytes is not a million; you have to have 1024 multiplied by 1024. But sure.
A
But okay: as long as we take this as a bit experimental, especially for the config, I think I'm fine to make progress on this and maybe iterate over the configuration in the following iterations. I think, yeah... Jana, does that make sense to you? Like, we can iterate and improve based on feedback; and one piece of feedback to give is: this is super useful, and I think it will be super good.
F
That's true. And one of the other things that I actually would... let's maybe skip this. You know, the other thing that I think we should worry about is scaling by, like, big numbers: is it going to be easy to write them? Can I do ten-to-the-power-of-something? Anyways, maybe this meeting is not the best place; I'll kind of leave some comments on the PR.
A
So, okay, I think this needs to move forward. Ryan, I will take a look after the meeting.
E
But I think it's a good start, yeah; that should be good. Also, maybe I did it wrong, but I also added some implementation details, so a fast review would be highly, highly appreciated, so that I can get this in soon. Yeah, thank you. And the next thing: this is another big one, I guess. So I sent... so in this...
E
In the same issue I opened, we had plans for creating new metrics, and we also had another thing: generating new metrics based on their labels. This feature is already available in our metrics transform processor.
E
Is it the same PR? Yeah. So, what it does, I'll just explain in one sentence: in the metrics transform processor we have a filtering mechanism, and based on the filtered metrics
E
we insert them or rename them. So for the insert or the renaming, I just modified the filter; I just enhanced the filter, so that we can filter the metrics based on the metric names as well as the metric labels here. Earlier we were able to filter metrics using only the metric name, but now it lets us filter the metrics matching the metric name as well as the label values, by which it gives us an option for creating new metrics based on their label sets. See the sketch below.
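For illustration, roughly what such a label-aware filter might look like in the metrics transform processor configuration; the exact key names in the PR may differ:

```yaml
processors:
  metricstransform:
    transforms:
      - include: pod.cpu.usage
        match_type: strict
        experimental_match_labels: {"namespace": "kube-system"}  # new: filter on label values too
        action: insert
        new_name: kube_system.pod.cpu.usage
```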
E
So the thing was: then Bogdan made a comment, like, yeah, we discussed it earlier; unfortunately, we don't want to add new functionality to the metrics transform processor, as we all already know we have a plan to rewrite it. But my take was, for the moment, I think this is the right place for this feature to live, on this processor. And my comment here is: definitely, when we rewrite this, from my side...
E
It's kind of a commitment: I will rewrite my part and, if possible, I will help rewrite other parts also. And maybe I can just create a new issue and assign it to me, specifically for rewriting this part. So I'm just... so this is, like, feature-complete, and I'm just expecting an opinion from Bogdan, and maybe the community, on whether this sounds like a good place to have it.
A
Work done... My comment is not about it not being the right place or the right functionality. It's just that this is one of the most complex processors that we have, and it still works on OpenCensus, and I was hoping that by not allowing people to add new functionality, we'd force the community to start transforming it onto pdata, so then we can have it in the new format.
E
Maybe it's not a good thing, but it was just a small modification in the filtering mechanism, one small function, and all the other steps are basically testing and the rest. So, in my opinion, it wouldn't take longer, at least for this change, and I would definitely do it myself when we are rewriting the processor. Because we are also hitting our deadline: if I have to wait for a rewrite of the whole processor, it would be... I don't know; and for this one thing I cannot write a whole new processor.
I
Yeah, I think you and I had talked about it, and the idea really was, you know, to work with Bogdan and figure out what we needed: how we could make sure we could reuse either the other processor that's been... you know, the other PR that exists, or just figure out, you know, how we can simplify this.
A
E
This problem... okay, thank you, yeah. I can just share an idea: when I wrote the AWS Container Insights processor, we also had similar issues; I was using OpenCensus and so on, and maybe, as you say it, we needed a commitment. So after publishing my plugin, after the deadline, I rewrote and converted all the OpenCensus data to OTLP metrics, and that was the first receiver, I believe, that was fully using pdata.
I
Right, so, Ryan, that's really good! I mean, as long as you own this and continue to, you know...
E
Yeah, I think so. These processors are being used, I mean, heavily by many of my customers, the plugins I'm working on. So definitely I will contribute here. Yeah, I'm on it.
E
If I want to explain it in one or two sentences: I need to calculate the CPU utilization metrics as a rate. From my understanding, most of the backends actually handle this, but for our CloudWatch Container Insights customers we need to send the rate from our collector, from our receiver or in a processor.
E
So I need to store the previous state, data points or records, to calculate the rate between two timestamps. So I was just wondering: maybe I need to always store the previous record. I am not sure if we have similar mechanisms in any of our existing processors, and I'm also wondering whether there are any risks, or anything, if we make a processor stateful, to save the information from the previous data point. A sketch of the idea follows below.
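A minimal sketch, not the collector's actual API, of the kind of per-series state being described: remember the previous observation for each series and derive a rate from consecutive points. As the discussion below notes, this only works if every point of a series reaches the same collector instance.

```go
package main

import (
	"sync"
	"time"
)

type point struct {
	value float64
	ts    time.Time
}

// rateCalculator keeps the last point per series (keyed by metric name plus
// label set) so a stateful processor could emit rates between scrapes.
type rateCalculator struct {
	mu   sync.Mutex
	prev map[string]point
}

func (r *rateCalculator) rate(series string, v float64, ts time.Time) (float64, bool) {
	r.mu.Lock()
	defer r.mu.Unlock()
	p, seen := r.prev[series]
	r.prev[series] = point{value: v, ts: ts}
	if !seen || !ts.After(p.ts) {
		// First observation, or an out-of-order point: no rate can be derived yet.
		return 0, false
	}
	return (v - p.value) / ts.Sub(p.ts).Seconds(), true
}

func main() {
	rc := &rateCalculator{prev: map[string]point{}}
	t0 := time.Now()
	rc.rate("container.cpu.usage{pod=a}", 100, t0)
	_, _ = rc.rate("container.cpu.usage{pod=a}", 160, t0.Add(30*time.Second)) // 2.0/s
}
```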
E
Also: can we modify, again, the metrics transform processor, or should we write a whole new processor for this? The only thing is the CPU utilization metrics, in my case, for now at least.
A
We do have something similar, I think, in a receiver, not in a processor. The only problem with this is: can you guarantee that all the points are going through the same instance, so you don't have to do any kind of load balancing or sharding of these things?
A
So then there is no easy solution, because you have to have a more or less distributed state, not just a state, and you have to do some kind of sharding and stuff. One option that I've seen people doing is using Kafka for replicating the stuff, but that may be too much. Another option: who is producing these metrics? Who is the producer of these metrics?
A
One option is for you to do it close to the source, where you guarantee that you see all the points in one instance. So one option for you would be to maybe add a parameter or an option to the receiver, or to the scraper that produces this, and produce the utilization or percent or rate from there.
E
Yeah, I am still not super clear on whether we really need something like distributed state maintenance or something like this. Maybe I need more thought on this. I can...
E
Yeah, okay, so maybe I also need to think more on that, yeah. Thank you. Maybe I'll study more on this, then maybe I can create an issue to discuss. Sure, thank you.
A
Again, I don't know your requirements, so, as Jana pointed out: first gather the requirements and understand what you need to do before proposing something. Okay.
A
All right, that's all. The next item is about the WAL; hopefully it's not the Mexican wall, it's another one.
B
Yeah, so I have prepared this design doc related to WAL design for the OpenTelemetry Collector, and this is a draft, so I'm hoping to put more detail here, but I wanted to discuss what I have found so far with the community and see where we should go from here. So I have prepared some requirements which, roughly speaking, are...
B
...I think what anyone would expect from a WAL implementation: some sort of persistence in case the memory buffer is getting full or the OpenTelemetry Collector is being killed for whatever reason. I don't want to go into too much detail here, but the two items I'm interested in are: first, where in principle the WAL should be included, in which component; whether this should be some sort of processor or some sort of exporter. Both of these approaches have their pros and cons. Making the WAL a processor allows us to essentially have a single place where records of a given signal are persisted.
B
However, it complicates things a little bit, because it doesn't get the feedback from exporters about why an export failed: processors are largely, let's say, fire-and-forget, and they don't know if, for example, there was a 400 error on the exporter, or a 500 error, or even whether there was an error at all. So there would need to be some bigger changes in the collector to make them support that. So that's why...
B
...I think that maybe the exporter is a better place to put the WAL, especially since we have the queued retry helper commonly available in exporters, and one can think about the WAL as a replacement for the memory-backed queue that queued retry is using (see the sketch below). So that's my line of thought, and I wanted to check: what do you think about it?
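A hypothetical configuration sketch of what that replacement could look like on an exporter; the "persistent" knob here is invented for illustration and was not an existing option at the time:

```yaml
exporters:
  otlp:
    endpoint: backend.example.com:4317
    sending_queue:
      enabled: true
      num_consumers: 4
      queue_size: 5000
      persistent: true   # hypothetical: back the retry queue with an on-disk WAL
```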
B
If this makes sense, and... okay, and maybe let me go to the second item, which is shorter. I was looking at some of the choices for implementing this, and the two items I was verifying were: using the file storage extension that was recently added to opentelemetry-collector-contrib, and the other approach was checking the tidwall library that is actually being proposed for use in the Prometheus remote write PR. And I looked at both; I did some very quick and dirty benchmarks.
B
It seems that for larger batches those are comparable; for smaller batches tidwall is faster. And, well, tidwall also provides indexes and the ability to truncate files, so maybe it's a better choice overall. Also, I was looking at the comments, since this is a somewhat popular library in the Golang world, so I have some small preference for that (a usage sketch follows below). But yeah, but this also brings the question of how we should think about the WAL. In principle, in the OpenTelemetry Collector we have this separate PR that supports just Prometheus remote write.
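For reference, a minimal sketch of the tidwall library being discussed (github.com/tidwall/wal): append serialized batches under contiguous indexes, replay them after a restart, and truncate what has already been exported. Purely illustrative, not how the collector would actually wire it in:

```go
package main

import (
	"fmt"
	"log"

	"github.com/tidwall/wal"
)

func main() {
	// Open (or create) the write-ahead log directory.
	w, err := wal.Open("otelcol-wal", nil)
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()

	// Indexes are contiguous and start at 1; append a serialized batch.
	last, _ := w.LastIndex()
	if err := w.Write(last+1, []byte("serialized batch")); err != nil {
		log.Fatal(err)
	}

	// After a restart, replay the persisted entry.
	data, err := w.Read(last + 1)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("replayed %d bytes\n", len(data))

	// Once exported successfully, drop everything before the newest entry.
	if err := w.TruncateFront(last + 1); err != nil {
		log.Fatal(err)
	}
}
```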
A
I think I like the idea of replicating... of being able to put the WAL in the exporter. The problem with the... and I think this is the question in general: who needs to retry in case of failure, or what is the main purpose of the WAL? Is it to be able to survive different spikes of data, or is it more or less for guarantees of one hundred percent...
B
This is, for example, the ability to still export the data if, for example, there's a network outage. But maybe some other organizations are having this collection on devices that are not always connected; they have some, let's say, background processes, etc., and connect to the network only every now and then, and would like to have this larger buffer that exceeds the available memory and then send this data. So it depends, but I think in principle it has several use cases.
L
Yeah, go ahead... yeah, exactly. So, Bogdan, all the reasons you mentioned are very valid, and, you know, for example with a remote write exporter for Prometheus: if, for example, the Prometheus endpoint is taken down, you don't want to lose that data; you want to ensure that you always have it. But one other interesting thing is that there will be some pipelines where the exporters might be slower, and so you don't want too much RAM just getting chewed up.
L
So with that: just write to the WAL, and at some point it will get picked up. Even if the collector gets shut down in the midst of exporting, at least that data is written to the WAL, and later on, when it comes back, it could resume the processing, instead of, for example, having to use Pub/Sub or Kafka or any of these. This is a more all-in-one solution.
F
One of the difficulties about this is that there are stateful, you know, protocols, such as the Prometheus one, where you have to actually write things in the right, correct order and such in order to be able to export. So if you build something generic that doesn't really necessarily take care of all of that stuff, it may make the work harder for the exporters. Are we considering, like, comparing, you know, what the protocols do, and just, you know, making sure the WAL is maybe more aligned with their guarantees?
A
Got it. And we can do the same thing, I mean, we can have the WAL with one consumer, and it's the same thing that...
A
That's one thing, but there are use cases where people want to persist this, and it's just an exporter. So, like... I've heard of people wanting to write things to a file; then they use some kind of FTP or whatever protocol to download the file and parse it, because they cannot communicate over the network, for...
A
The only option they have to extract data from one network to another is by moving files. But is...
I
Is that more of a specific security use case? I mean, you're correct, that is... and I mean, I have heard that too from various customers, and, you know, it's again really dependent on their configurations; like, you know, they have other restrictions for their environments.
A
So for that, having this as a proper exporter and receiver would help them. So, before jumping to conclusions, I think one lesson from this is... or maybe the file part should be part of the... there is already the logging library that does the file thing, so we should probably not be worried about that. So, okay, stepping back: which of these problems are we trying to solve right now with this PR, especially with the one under discussion?
L
I believe for AWS, having a write-ahead log is important to avoid data loss, and to also ensure, you know, that if we have traffic spikes and that kind of thing, it still doesn't consume so much RAM.
A
Separate solutions, if that's the case. I like the idea of being able to have two ways to retry: to keep it in memory, or to put it in a WAL; so essentially to replace the in-memory queue, or file queue, or whatever we call it, in the exporter helper. So then every exporter can enable file queueing versus in-memory queueing.
A
Is that reasonable for you, Jana and Emmanuel?
A
That's... that's another... The other thing, Jana, about this: one of the reasons we used to have the queued retry as a processor, and one of the problems that we had, and the reason why we moved it to the exporter, was: if we have a fan-out... I don't know if you can move to the document.
A
There is a picture with the processors and the fan-out to the exporters. We have a fan-out there, so a processor may push to multiple exporters, and the problem that we had is: if one of them failed, we would retry on all of them, because we don't have a way to keep state for each one. So then, by doing queued retry at the exporter level, you can control it better: when that exporter fails, you can retry it, and yeah.
A
I got it. So a lot of the reasons, based on these two, to have even the WAL in the exporter helper there. So that's probably a better solution, just for the problem of making sure that the data we receive is transferred.
A
There is another outlier, by the way, in terms of processors; I don't know if you are using it: the batch processor.
F
Yeah, this might also be useful for some of the receivers: like, for example, there are all these intermediate things that may break some of the receivers, I'm just making this up right now, and people may want to also persist there. But that's a difficult problem. Making it reusable just makes sense to me.
A
By default the queues will still be in memory, but there will be an option to replace that with a file, and put some...
A
It is the client's problem, so the client is still the owner of the data until we respond. So I don't know if we need a WAL there, unless we need to have state, like the thing that Ryan was mentioning in terms of calculating usage, yeah.
K
David, I'll talk about this really briefly: I'd like to work on end-to-end processing latency, only because GKE is running this in production and we'd love to have better debuggability metrics.
A
Wait, wait. We already have traces for all of these things, so you do have latency. You're probably looking for metrics, correct? Yes.
A
And by the way, I have a couple of ideas; I think we need to understand this better before jumping in to do it. The point is, in my mind I had this problem, and maybe you can draw a picture or understand it better than me, but: we receive something, and we have a couple of points in the collector where we terminate the request, where we send back the response without actually finishing the request. So, points... inflection points, or whatever you call them, like in the batch...
A
When we hit the queue, we send back the response, but we did not finish the request yet. Then, on the sending queue in the exporter, on the queued retry things, we do the same.
A
I don't know if that's important or not, and, if it is important, how do we calculate it and such. Another option is to put a start time on every batch: we can add a notion of a receive time for every batch, and then every time we export it...
A
...we record a metric of now minus whenever we received it. A sketch follows below.
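A minimal sketch, not the collector's actual types, of that receive-time idea: stamp each batch on arrival and record the elapsed time when it is finally exported.

```go
package main

import (
	"fmt"
	"time"
)

// batch stands in for a batch of telemetry flowing through the pipeline.
type batch struct {
	receivedAt time.Time
	// ... payload omitted
}

func onReceive(b *batch) {
	b.receivedAt = time.Now() // stamp the batch when it enters the collector
}

func onExport(b *batch) {
	// "Delay" rather than "latency": the batch may have simply been sitting
	// in a queue or a WAL, not being actively processed.
	delay := time.Since(b.receivedAt)
	fmt.Printf("export delay: %v\n", delay) // in practice, record as a metric
}

func main() {
	b := &batch{}
	onReceive(b)
	onExport(b)
}
```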
A
Sometimes it's not latency, it's delay, because latency usually means processing of things; but here sometimes we are queueing things, and if we have a WAL, we are going to put it on the WAL. I don't necessarily call that latency, but actually more or less delay. So you are interested more or less in the delay of the data. Okay.
A
I need to think a bit about this. And you need this for metrics, traces, and logs, correct? I mean, if we come up with a solution, we come up with it for all of them. Yep.
A
The context may not be right, David, with the context, because when we do batching we kind of lose the context of the initial things; there are multiple contexts coming in, like multiple requests with different contexts. So when we do batching, we create only one, but we should talk about it. So, if possible, maybe one thing that we need to...
I
Yeah, Bogdan, these are reviewed by... sorry, go ahead.
A
I already read it, and I have a solution. I think there is a label called "ready for merge" or something like that; I think we should start using that. And also we should document it: we have a CONTRIBUTING.md file, so maybe we can update that and say, okay, in contrib, especially in contrib, where we have tens of approvers who don't necessarily have all the rights to make the things green...
A
Okay, okay, let me look into this and see if I can enable authors to be able to address it.
F
This might be also useful for issues: we can't triage a lot of things, because, you know...
A
The other thing is, we can create a GitHub Action, and if anyone approves, put the label; so I will make sure we have a way to add the label (a sketch follows below). Okay.
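A hypothetical sketch of such a workflow; the label name is a placeholder, not the real setup:

```yaml
# .github/workflows/label-approved.yml
name: label-approved
on:
  pull_request_review:
    types: [submitted]
jobs:
  label:
    if: github.event.review.state == 'approved'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/github-script@v4
        with:
          script: |
            // Add a marker label once any approving review lands.
            await github.issues.addLabels({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: context.payload.pull_request.number,
              labels: ['ready-for-merge'],
            })
```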
A
Sorry... okay, anything else? We have one more item.
C
Yeah, hey, I added it last minute during the meeting, so apologies. Quick question, because some of the conversations today reminded me of it. I brought this up in February, around having the ability to do filtering of traces, which has some trade-offs and drawbacks, because it can create broken or incomplete (whatever terminology you want to use) traces. But I did see that JP, who's not here, unfortunately, had suggested sort of a workaround, or an idea, for folks at Datadog.
C
We've seen people continue to ask for this feature, and so I assume other people in the community have as well. So he suggested an idea, an extension of a contrib processor, and I was wondering: one, if I were to put in the time to make that, you know, just part of the processor, instead of telling people to extend it on their own, would it be accepted as a PR? And two...
C
...I just wanted to sort of check the temperature here, of anyone: whether this is still something people want, the ability to drop traces or trace chunks, and, yeah, see if this is still a need for people, because I'd be open to writing it. In the meantime, because people are asking for it, for, you know, our vendor-specific stuff, and because we have some ingestion costs that sort of trump everything else...
C
...I've just put up a workaround PR for our exporter that is pretty, pretty hacky, but mimics some of the functionality in our own agent, and, you know, I think we're comfortable with the trade-offs. But yeah, I'm wondering whether this is something I could generalize and add to the routing processor in contrib.
A
I don't know too much about the routing processor. The routing processor does one thing; by the way, talking about stateful processors, that's one that keeps the state of things in memory, because it groups things by trace ID and then routes them based on the trace ID. I think the idea to put it there is because...
C
That would be the idea: you could just supply an attribute, basically. As long as you have a trace chunk that, you know, maybe because of batching or statefulness you've collected, then you check: ideally the root span of that trace chunk is the span on which you check whatever attribute you want to filter by, and whether that attribute exists, or matches some regex or... you know, I don't know the finer details of that matching. Roughly as in the sketch below.
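Purely to illustrate the shape of the idea; these keys are invented, not an existing routing processor option:

```yaml
processors:
  routing:
    # hypothetical extension: drop trace chunks whose root span matches a rule
    drop_traces:
      - attribute: http.target
        match: regexp
        value: ^/health(z|check)?$
```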
C
I think that's how I'd approach it. I was hoping JP was... I only now realized he's not on the call, but that's sort of how I would approach it, and if there's some openness to that, I could put up a PR in the coming weeks. I think it would be a high-value thing for folks, but I didn't want to do the work if people are like: well, because it introduces the potential for incomplete traces, it's a no-go from the start, you know.
A
So I think there are other use cases. One very common use case that I heard is to drop the one-span traces that are generated for health checks. A bunch of people are complaining about one-span traces, health-check one-span traces, that they want to filter. So I would not be opposed to this.
A
I just want to make sure we put enough effort into thinking about what is the right thing to do now versus long term, and not do something that, long term, is not going to be useful, or that we're going to rewrite completely. So, as long as you put in this effort and document why going through the routing processor is the right thing long term and such...
C
I agree; I think that's also the majority use case, and, you know, people complain about everything, right? So, like, you can't make everyone happy all the time, but yeah. That is, I think, the big chunk of folks: "I have this health check endpoint." It gets a little funky, because even some health check endpoint requests, in certain languages, if it's a framework instrumentation, there might be multiple spans, even for something as basic as a health check endpoint.
C
Well then, I can put out these trade-offs; I can document these trade-offs and then people can make decisions. But that's good feedback; I understand your feedback and I think it makes sense, and that's the use case I want to solve for too, so all good. Okay, I'll stop wasting everyone's time.
A
Thank you. I think we are good; we just need... I have an action item today to try to look into the label thing, and then we are good on this. Thank you so much everyone, see you next week.
D
If anybody wants to work on the specification and adding a clarification for this recommendation: it's not entirely clear what the recommendation is going to be, right. This requires some thinking, to come up with some litmus tests of what goes away.
D
This one I just opened: what do people think about having support for binary data in log collection? What are you thinking here? So I believe Christian, last time, asked for a way to store bytes, and I think, generally, it's useful to have in log collection. I wonder what people think about it.
M
Yeah, yeah, sorry, yeah. Thanks for raising this; sorry, I'm a little bit late. I just happened to be in the office today, we opened up. I just need to figure out... get my bearings here.
M
You know, it's interesting; it certainly gets me out of the house, rather than working at home, I think.
M
Maybe not, I don't know, I don't know. As long as... we have this thing here at the entrance that, like, you know, takes your picture and measures your temperature; as long as that still...
M
Yeah, so basically, you know, frankly, when we discussed this first, this idea of the body, it was completely opaque. You know, in my mind it always... we never really, I think, spelled that out, but I always felt that it would definitely have to be able to accommodate just a raw byte array.
M
All right. And I realized, when we looked at this clarification issue, that it listed, you know, a bunch of things, but not byte array, and that's why I commented on it. And, like, for my... I think, I guess, my take is that we should allow that.
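For context, a sketch of roughly how such an addition could look in OTLP's AnyValue (opentelemetry-proto, common.proto); the field number here is illustrative of the proposal, not a merged definition:

```protobuf
message AnyValue {
  oneof value {
    string string_value = 1;
    bool bool_value = 2;
    int64 int_value = 3;
    double double_value = 4;
    ArrayValue array_value = 5;
    KeyValueList kvlist_value = 6;
    bytes bytes_value = 7;  // proposed: an opaque, raw byte-array body
  }
}
```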
D
Okay. I think related to this is also whether we want to be able to record a stream of bytes. Today it's assumed that you have a collection of log records; we don't necessarily even preserve the order of these log records. If you're dealing with a stream of bytes, then the ordering matters, right. So, probably... so I guess maybe two things here. One is: when you're reading a stream of bytes, when you're collecting from a file, you do not necessarily look into breaking it down into records by some sort of delimiter, right.
D
So that means somehow we need to record the position of the chunk in the stream, possibly as some sort of attribute in the record itself, and have a semantic convention for that.
M
So, personally, I haven't thought that far, to be quite honest; I think I didn't even think about streams. I think that's really interesting. I thought that somehow whoever creates the record, the individual log record, right, would have done whatever needs to be done in order to chop it up in, you know, whatever way makes sense.
M
So I wasn't necessarily thinking of just, say, you know, piping a binary executable of some sort, or like a raw network dump, straight into the collector. I was thinking that there would be, you know, somebody in the middle of that who would already do some chunking of some sort. You know, either by... I don't know; maybe the convention is, you know, one record per UDP packet, or, you know... I don't know, something like that.
D
Yeah, the use case I have in my mind here is: let's say you collect the file, but you don't know how to break it into records; you don't know what the delimiter is, right. Let's say you have your backend, which is able to figure it out somehow, and so the goal here would be to deliver it precisely as a stream of bytes, in the right order. Right; if you don't...
D
...deliver it in the right order, then you're going to be wrong on the backend. And if you want to do that, you don't know what the delimiter is, you don't know how to break it into log records, so the concept of the delimiter is pointless in this case, right. So essentially what you're looking for is: it's still a log file, right, but you don't know what the encoding is, you don't know what the line ending is there, right. That...
D
...and in that case, you probably want to just collect it as-is, precisely, making sure that you don't change anything, so that the bytes come out exactly in the same order as they were read at the source, whether it's a file or, for example... there was a proposal to add a TCP receiver, which is...
M
Yes, that is... yeah, UDP: usually you just assume that it's one per packet, and then you leave it...
M
...to whatever backend needs to receive it, to see whether that actually makes sense or not; TCP is more tricky. All right, yeah, I was thinking mostly about, you know, having, like, pcap records, or maybe NetFlow records, or something like that.
D
So if you're reading a stream of bytes, and let's say you're breaking it down into, let's say, some maximum size of chunks: for example, I don't want my log records to be gigabytes of data, right; let's say I limit each log record to one megabyte at most. Then each becomes a single request that the collector processes, and the collector never guarantees the order of delivery.
D
Yeah, so anyway, I'm now looking into what we do to add the data type, the bytes, the binary data type, to the protocol, which is, I guess, independent from what I was just discussing about the ordering and all that stuff, and I think that's still useful on its own and can be used not just for logs; it came up in other places as well. So I do want to have that, yeah.
D
This concept of the TCP receiver here, that comes from Stanza, and, well, I think, yeah, I think it's useful. I would think that, for uniformity purposes, we would want this to behave very similarly to the filelog receiver, except it reads from the TCP socket instead of a file; otherwise I would expect things to be precisely the same, which is not the case today, right. We filed a couple of issues to have that; I believe the missing things are...
M
Yeah, so in Stanza land, did they have that, or is this kind of completely new receiver code here? No?
D
B
Well, since you have your screen already here, I think that's fine. So, we've been discussing this during the Agent/Collector SIG just 40 minutes ago, and there were some interesting arguments brought up, so I will bring them here. So, okay: the context is that I was working on this persistent buffering support for the OpenTelemetry Collector, or write-ahead log support, and I have started with defining the requirements, which I think are pretty much...
B
...what most people would expect from a WAL and how it should work, and they are listed here. And essentially, what we are trying to achieve, in the first iteration at least, is being able to persist records to disk, in case, let's say, there's a failure of the OpenTelemetry Collector, because, I don't know, maybe there's a problem in the exporter, or on the vendor side, and it's sending data and this data is being queued up, and the queue is filled and we are losing records.
B
We don't want this to happen; we'd like to avoid that. The other use case is when the OpenTelemetry Collector is being killed, because maybe the pod in which it's running is misconfigured, or whatever else happens, and some data is being lost. So those kinds of use cases: just to make sure that data is never lost. And if you would like to review it and have some comments or suggestions, then please do.
B
And then I was looking at the pipeline and thinking about where this WAL support should be present, whether it should be in a receiver, a processor, or the exporter, and each of these choices brings some pros and cons, which can be discussed: a processor or an exporter. If that happened in the processor, then it has some benefits.
B
For example, when there are multiple exporters, each record is stored to the WAL only once; but then each exporter has its own, let's say, life, and maybe some exporters fail while others are not failing, and the WAL would essentially be tied to the worst-performing exporter in such a case. Additionally, from the implementation standpoint: in the exporter we know the status of the request to the vendor or to the external service; we know if it failed due to, let's say, a 400 error or a 500-type error, and we can behave accordingly.
B
This is being handled in the queued retry helper right now, which is shared by many exporters; in a processor we do not have this information, so if we would like to provide some, let's say, WAL processor, we would need to pass this information from exporters to this processor, which would complicate things to some extent. This is a similar experience as with the queued retry processor, which originally was a processor but was moved to a helper afterwards, because of these issues and the fan-out and everything.
B
So I think that putting this WAL capability somewhere close to the exporter, perhaps as an extension to queued retry, makes the most sense, and it additionally makes more sense since queued retry already has a queue. This queue is memory-backed right now, and it would be a natural extension to have an option to provide a disk-based queue that would essentially persist the records as they are being enqueued and sent further.
B
So that's from, let's say, the standpoint of which component should handle it. The second, let's say, area that we've been discussing was how this should be implemented, and I've brought several examples here. Of course, this can be implemented in a number of ways, but the two that make the most sense are: either reusing the file storage extension that was recently added to opentelemetry-collector-contrib, which is essentially a bbolt-based database and a simple interface to use it to store and retrieve keys; that works pretty well.
B
The second approach is to use some already-existing library providing this sort of capability, and one of them is tidwall, which was actually proposed by Emmanuel in the Prometheus remote write PR, which adds this capability only to the Prometheus remote write exporter. The benefit of tidwall is that it already has support for truncating data, so, essentially, management of these WALs, as well as some support for indexes: knowing what the last stored index was, things like that. So maybe it would be a little bit more convenient.
B
Also, I did some benchmarks; I enabled file synchronization for both of them, because this is how file storage currently works. So when the batch sizes were large, they were quite comparable: when each batch included 1,000 spans, I was getting around 1,000 batches per second for each of them, which was pretty, pretty neat.
B
However, when the batch size was small, they differed: when the batch was just one span, with the file storage extension I think I was able to hit around 16,000 batches, or spans, per second; with tidwall it was around 200,000 spans, or batches, per second. So it was performing quite a bit better.
B
But maybe this is some configuration item. And during this Agent/Collector SIG we've been discussing all of those things, and the conclusion was that doing this on the exporter side makes the most sense, maybe via a queued retry extension. We also discussed the special case of Prometheus remote write: in Prometheus remote write we need to make sure that the order of the data follows certain, let's say, rules, some specific order, and because of that they started with making this separate PR providing WAL support there.
B
Also, one item brought up during this discussion was that there is another, slightly similar use case, where customers have, for example, some separate environment in which they don't have access to a network, and they want to store these records to disk and then, for example, upload those records in some specific way, maybe via FTP or whatever they are using, and push them onto some different machine.
B
However, I think this is maybe a slightly different use case, and we can focus on just providing this for, let's say, resilience reasons, to store the data. So that's my take on that.
D
This is great, thank you. For the very last item that you mentioned, storing the telemetry data as files: I think we have a separate issue filed for that, to come up with a file format for storage. With regards to the placement of where the buffers should be...
D
I do agree; I think we could find the natural place to put it. But generally, I think, the idea is we want to make sure we don't lose data, right? If we crash or restart, everything that is in memory should technically be backed by persistent storage here, which today means primarily queued retry in exporters, but also the batch processor, which has its own queue.
D
So, I believe, if we want to make sure we don't lose data, we will have to put it there as well, in the batch processor. There it's limited in size... so I guess in both places it's limited, but for queued retry we can make it significantly larger than the memory size; for the batcher it's pointless, we keep one batch at most, right. But again, it's still in memory, right; we will need to make sure that on restart...
D
...we don't lose that data. So I think it would be useful to do this implementation you mentioned, right, so that it can also back the batch processor, and I think that's completely valid. I don't remember if there is any other place where we do a similar thing, where we accumulate.
D
All right, yeah, okay. So, for those processors where we do that sort of thing, I mean: either we explicitly say that they are not safe from the perspective of losing data on crashes or restarts, or we somehow... and it would be nice if we could do that in some generic way, right, so that arbitrary processors could use that functionality to make sure that whatever data they store in memory is also mirrored to some sort of persistent storage.
D
By the way, for the Prometheus case, the reordering: I suggested from the beginning that they just use a single consumer. I don't know why they decided to go ahead and re-implement the entire thing; I think that's pointless, and now they have to have their own buffering. I mean, there's no need; I don't see the point. The generic queue can serve very well: you just limit the consumers to one, and that's all we need to make sure that the order is preserved. See the sketch below.
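For illustration, the single-consumer suggestion expressed with the exporter helper's existing queue settings; whether the Prometheus remote write exporter wires these up is exactly what was under discussion:

```yaml
exporters:
  prometheusremotewrite:
    endpoint: https://backend.example.com/api/v1/write
    sending_queue:
      enabled: true
      num_consumers: 1   # a single consumer preserves the write order
```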
N
This... yeah, for sure, yeah. I wanted to introduce an engineer on our side who is going to work on the PoC for the request here, basically implementing... there's a bunch of different ways we could do this; that's why I sort of wanted to discuss this in... If you want to introduce yourself, that would be cool too, because it's your first time on the call; so help yourself, and then we'll dig into it.
O
I want to try to run these pieces, and I just had a few questions about this, especially, you know, why Log4j was chosen, for two reasons: from the specification it had only three attributes, like a timestamp and a body, and everything goes into the attributes; and besides that...
D
Yeah, so hi, welcome; thank you for offering your help, we appreciate it very much. So, regarding the question about why Log4j and not something else: that's just an example here; you're completely welcome to start with some other library. So I guess, yeah, you're thinking about doing this in Java, from what I understand? That's the idea, I believe. I will need to double-check, but I think we have some initial implementations both for Log4j and Logback...
D
...if I remember correctly, in the Java instrumentation repository. So it's completely fine if you want to do this for Logback; then yes, that's great. The very first one or two implementations are really about prototyping, about making sure that this is what we want to have, composition-wise, and then we'll refine the specification, and it will then be added to the specification repository. This one is in the proposals repository, and it is specifically labeled as just a prototyping specification, not the final specification that we want to have for the libraries.
D
So whatever you are going to do for Logback will also be exploratory work, right. So it would be great if you give us some feedback about what you like and what you don't like in the specification, what you think should be altered in this proposal, so that in the final version we can take those things into account. You know, you were saying something... so, yeah.
D
These were created... definitely, they were created before the specification, before the proposal, so I'm sure they are likely incomplete and maybe even contradict the specification, or some things may need to be changed. I do know that they do not implement OTLP exporting at all; this was not done at the time. They are probably dealing with the context, hopefully, but they don't have the exporters and all that stuff.
N
So the goal is obviously to implement OTLP, a way to send OTLP directly into... into that.
D
Right, that's one of the things that we want to have, right: this concept of a log exporter; we want to have that. And David is not on the call today; he's the author of these two partial implementations, and I'm sure he'll be happy to tell you what he actually did and get you up to speed, so that you can continue from there. Technically, you can also start from scratch.
D
I mean, if it's completely unusable, that's fine as well, but maybe it's worth at least having a look at what exists; maybe it's a good enough starting point, right. I can connect you with David; unfortunately he couldn't make it today, he could not join, but he's the author, he wrote it; he will be able to tell you more about what exists.
D
Let me... I guess I will ask him to... Are you guys on Slack, in the otel-logs channel, I mean?
N
Yeah, I'm in there, but I don't think Gohan is. I can... okay, I can add him, so...
D
Okay, cool, yeah. Okay, meet me there and we'll take it from there. And if you have specific questions, also feel free to post in Slack and mention me; I'm also happy to answer. If there is anything unclear in the specification, we can discuss that now as well, or feel free to post in Slack and I can answer.
D
You're welcome. Okay, okay, we're out of items here. Anything else? Anyone wants to...