From YouTube: 2020-10-22 meeting
A: Okay, let's start. First, we made some small clarifications around how the reviews in the contrib repository need to work and what the role of approvers is there; specifically, it was previously not clearly specified. The idea is that approvers and maintainers in the contrib repository will primarily play the role of facilitators in the reviews, and the reviews themselves will be done by the code owners of each component. The code owners of the components are the people who contributed the component; because most of those come from companies, that means the developers of the company. We have some automation in place: we now list the correct owners in CODEOWNERS files, so whenever there is a change that touches a component, the reviewers will be added automatically. We also added some information around assigning facilitators.
A: Yeah, and the approvers here already know this, so it's nothing new to them. I guess that's it from my side. Let's see, yeah, any questions?
B: I have a question; you might have mentioned this earlier, but how does a code owner get elected? Who chooses whether someone is a code owner?
A: Yeah, it's normally the original author of the component, first of all. So if someone contributed a component, they automatically become the code owner. But also, typically, the components come from companies, and the companies usually assign other developers too, so there's usually more than one person who contributes to the component. They all, in a way, automatically become the code owners of just that subdirectory where the component lives.
A: So it was kind of an easy thing. We already had the list of people who were responsible; it was just not in the form of a CODEOWNERS file. We just turned it into CODEOWNERS so that it's now automated. That's really all we did.
A: It's a GitHub feature, right: if you have a CODEOWNERS file in the specific format, then whenever there is a pull request that touches files in a particular directory, the appropriate people, based on this list of code owners, are automatically assigned to the PR. It's a GitHub feature, yep.

B: Okay, thanks.
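For reference, a minimal CODEOWNERS entry of the kind described above might look like this; the path and usernames are illustrative, not the actual contrib entries:

```
# .github/CODEOWNERS (illustrative)
# A PR touching files under this subdirectory automatically
# requests review from the listed users.
processor/examplecomponent/  @example-author  @example-maintainer
```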
A: Cool, let's move to the second topic: discussing the feasibility of a metrics-from-spans processor. Albert, I guess we haven't met, so maybe you want to introduce yourself and then we can talk about the topic.

B: Yeah, sure. I'm Albert Teoh; I joined Logz.io about two months ago. Prior to that I was with Uber for two and a half years. I've joined a team working on the tracing product that Logz.io also offers, and my main project right now is to look at producing metrics from trace data. We've determined that putting this logic inside the collector would be a good option, and it'll be good to discuss that as well.
B: Whether that is true or not: we're thinking to have this as a processor within a trace pipeline that emits metrics. It's a no-op operation on the traces themselves, on the spans, but it accumulates metrics and emits them, we propose, as OTLP metrics, and then another pipeline can consume that data and export it in its own metrics format.
A: Okay. So, welcome, first of all. The pipelines today can only carry one data type: it's either metrics or traces or logs. If the input data type of the pipeline is metrics or traces, then that is what the processors can consume and produce.
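A minimal config sketch of that constraint, with each pipeline declared for exactly one data type (the component wiring is illustrative):

```yaml
service:
  pipelines:
    traces:        # this pipeline carries only trace data
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]
    metrics:       # this one carries only metric data
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]
```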
A: There's no way today to switch the data type somewhere in the middle of the pipeline; it's not possible. It's something that we considered, that we would need one day, but right now there is no possibility: the internal interfaces that the processors are supposed to implement are declared in a way that makes that impossible.
A: Do we want to do that? I think we do, but maybe not right now. The problem is that we are currently preparing for the GA release of many of the OpenTelemetry components, including the collector, and we wouldn't want to make any major changes in this area. It likely means that we would need to introduce another set of interfaces that this new kind of processor would implement, so that there is a way to switch the data type in the middle of the pipeline.
A: It's probably not the right time at the moment, because we want to focus on the release, and it's going to be hard to make substantial changes like that to the core. But in principle I think that's something we would want to have. Now, you could, I guess, fake it in a way, right? You don't necessarily have to implement it cleanly in the pipeline if it's an experiment that you want to conduct. The processor that you have, you declare it as a trace processor: the input is traces, and the output is that you produce nothing in terms of traces. And then, directly from your processor, you can produce whatever metrics you want, but not send them to the next processor in the pipeline; just get them out however you want. You can serialize them and send them over the network if you want. That will probably be enough for you to do the experimentation; probably not good enough for production use, I guess, since it's not very nice usability.
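As a rough sketch of that "fake it" approach: a trace processor that forwards spans untouched and pushes derived metrics out of band. The types below are minimal stand-ins, not the collector's actual interfaces (which have changed across versions):

```go
package spantometrics

// Traces is a placeholder for the collector's trace data type.
type Traces struct{ /* spans */ }

// TraceConsumer stands in for the internal consumer interface a
// trace processor must implement.
type TraceConsumer interface {
	ConsumeTraces(td Traces) error
}

// processor passes spans through unchanged; a separate goroutine
// drains agg, aggregates metrics, and ships them on its own.
type processor struct {
	next TraceConsumer // next consumer in the trace pipeline
	agg  chan Traces   // side channel feeding the aggregation goroutine
}

func (p *processor) ConsumeTraces(td Traces) error {
	select {
	case p.agg <- td: // hand spans to the metrics aggregator
	default: // never block the main span flow
	}
	return p.next.ConsumeTraces(td) // no-op on the spans themselves
}
```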
B: Yeah, and so it is possible, I guess, to have another pipeline that could ingest those metrics that we talked about just now? So a trace processor would emit metrics, and then you would have another pipeline that ingests those metrics?
A: To configure that, in the other pipeline, which is the metrics pipeline, you could configure a receiver which is capable of accepting metrics, and you could configure the processor to send directly to that receiver, and that would work. Yes, you could chain the two pipelines, but the collector itself internally wouldn't know that you're doing that; you would need to configure it in the config file. But yes, that's doable; you could do that.
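In config terms, that chaining could look roughly like this: the hypothetical spantometrics processor sends its derived metrics to a loopback OTLP receiver on the same collector (the names and loopback endpoint are illustrative):

```yaml
receivers:
  otlp:
    protocols:
      grpc:
  otlp/loopback:                  # ingests the derived metrics
    protocols:
      grpc:
        endpoint: localhost:4319

processors:
  spantometrics:                  # hypothetical; pushes metrics to localhost:4319

exporters:
  otlp:
    endpoint: backend.example.com:4317

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [spantometrics]
      exporters: [otlp]
    metrics:
      receivers: [otlp/loopback]
      exporters: [otlp]
```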
E: It sounds more like an exporter, at least at this point, if it's going to be sending metrics out.
A: That's a good point, because it's going to be the last one anyway; there's nothing that can come after that processor, because you're terminating the flow of data right there. So you could make it not a processor but an exporter, and it would feel slightly more natural, right: an exporter which accepts traces, does some sort of transformation on them, and sends the result over the network, where they just become metrics.
B: So what I mean is, the reason we would need to have this processor as the first processor is that we want to avoid any pre-processing, for example any tail-based sampling or any sort of filtering. We'd want to sample as much of that data as possible so that we have full fidelity in the metrics.
B: So I'm not sure it could be the last processor. We're thinking just to have a no-op operation, just a pass-through of that particular span, and then it outputs the metrics using the OTLP format.
B: Maybe it reuses an existing library for creating those OTLP metrics, but yeah, basically another goroutine, for instance, that does it out of band so that it doesn't impact the main flow of spans through the trace pipeline.
F: I think another option is feasible, because you can actually have one receiver that points to two different pipelines. So you could have your trace receiver and then have two different pipelines: one that's just your regular one, and then a separate pipeline that just has this component, which can be an exporter, which will then export the metrics. So I think you'd be able to achieve the same thing, and I think that would be a little bit cleaner.
E: Another thing about the processor: the lifetimes are different in how they're created, right? Exporters and receivers are shared instances, but processors get a unique instance per pipeline. So if you use the processor multiple times, I think you'd end up doing double the work, like if you're wanting to export these metrics to two different targets.
B: Yeah, a requirement is that it needs a trace pipeline, because we do actually want to have that trace data as well. We want the trace data to be exported to our backend, but we also want the metrics data to be exported to our backend; we want those two streams of data, and so we would need that trace pipeline, I think.
F: Yeah, so I guess what I was suggesting was that you'd have three pipelines in the end. You'd have your one trace receiver and your normal pipeline, which can export your trace data as normal. Then you have a second pipeline that starts off with the same trace receiver, and all it does is go to your metrics exporter, which exports the metrics. Then you have a third pipeline which receives those metrics, does whatever processing you want on the metrics, and sends them to the backend.
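A sketch of that three-pipeline layout; the metrics-generating exporter name is hypothetical, and the receiver and exporter definitions are as in the earlier sketch:

```yaml
service:
  pipelines:
    traces:                  # normal trace flow
      receivers: [otlp]
      exporters: [otlp/backend]
    traces/genmetrics:       # same receiver; only generates metrics
      receivers: [otlp]
      exporters: [spantometrics]   # hypothetical exporter emitting metrics
    metrics:                 # ingests and processes the generated metrics
      receivers: [otlp/loopback]
      processors: [batch]
      exporters: [otlp/backend]
```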
A: If the receiver feeds data into one pipeline, it does not create a copy; whatever it receives, it sends to the processor. If there is more than one pipeline attached to a single receiver, and any of the pipelines declares that it modifies the data, as an intent to modify the data, in that case the data is going to be cloned.
A: This is all described in the data ownership documentation. It depends on what you intend to do with the data: if you intend to modify it, you would need to clone it there in your processor, clone the traces, and then send the clone on to whatever does the modification.
A: That's if you need to touch the data and such, right. But you're also saying it's a no-op, so you're passing the data through to the next processor, and when you pass the data to the next processor, you give up the ownership, right? So you can no longer touch the data. That's the assumption.
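That ownership contract hinges on the capability a component declares. A rough sketch of the idea follows; the type and field names are illustrative, since the actual collector API has changed across versions:

```go
package spantometrics

// Capabilities reported by a pipeline component to the collector core.
type Capabilities struct {
	// MutatesData signals that this component modifies the data it
	// consumes. When several pipelines share one receiver and any
	// component reports true, the core clones the data before fan-out,
	// so each pipeline owns its own copy.
	MutatesData bool
}

type passthroughProcessor struct{}

// A pure pass-through, metrics-emitting processor reports false,
// so the core does not need to clone data on its account.
func (p *passthroughProcessor) Capabilities() Capabilities {
	return Capabilities{MutatesData: false}
}
```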
A: Welcome. Okay, so I guess we're good with this issue. What's the next one? The label-processing issue update. Okay, so this is about the...
G: Yeah, I just wanted to talk a bit about this. I know that James also left a comment on this about how it relates to one of his issues. So I guess the first thing that we could discuss, if James wants to, is the filter set, and whether that's something that we want to modify, because it's a breaking change if we want to simplify what filter sets are and if we want to make them external. So I was curious about your opinion on that.
A: On the breaking changes, right: I'm always very, very reluctant to do any breaking changes. If we have to, then yes, we can do that, but the bar is much higher. I would need to have a much more detailed look and understand what we want to do here, and that necessarily means I will need to take time to do that.
A: Today it applies the transformation based on the specified condition, and the condition is the name of the metric. If you just make that condition optional, meaning that you apply the transformation unconditionally, that's all you need, right? That's exactly what you wanted to achieve. That's, I believe, a very simple change.
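Concretely, a hypothetical metricstransform config of that shape, with the metric-name condition omitted so the operation applies to every metric (the field names follow the processor's style, but treat the exact shape as illustrative):

```yaml
processors:
  metricstransform:
    transforms:
      # no metric_name condition: apply the update to all metrics
      - action: update
        operations:
          - action: add_label
            new_label: cluster
            new_value: eu-west-1
```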
F: Yeah, to put some background on why I said that in the first place: I was actually looking at updating the metrics transform processor, which just happened to be for another thing I'm working on, to do bulk updates based on some filter. Originally I thought, well, I'll just use the filter set that we have in core, because we want that to be consistent across components, but that's currently in an internal package, so we can't actually use it from contrib.
F: Can we use that in contrib? And yeah, we're not super happy with the filter set structure at the moment, so I wanted to look at changing it. So you asked me to look into that, and then it turned into quite a bunch of changes, potentially, when all you want to do is something very simple. So yeah, to what Tigran said: just making the metric name optional wasn't exactly what I would have anticipated as a feature for the metrics transform processor, to just update everything.
F: But I guess I don't have any major objections to it. It's probably okay if you want to go ahead and do that.
G: Yeah, actually, what I want to discuss is: I think this might not actually be the solution I'm looking for. I think I might have had the wrong idea when I originally made the issue, because if I add data point labels, then it doesn't quite do what I want: when I integrate this with the Prometheus remote write exporter, it doesn't allow labels to start with double underscores, which is what I wanted for Prometheus.
G: So what solution three here says is: I could use the resource processor, which already exists and already has the ability to add resource attributes. I could use that to add cluster and replica as resource attributes, and then, instead of adding a new processor, I could add functionality to the Prometheus remote write exporter to take those resource attributes and pretty much copy them into the Prometheus time series that is exported to Cortex.
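The resource-processor half of that plan is expressible today. A sketch follows; the cluster and __replica__ keys match Cortex's HA-deduplication label convention, the values are illustrative, and the exact config shape may differ by collector version:

```yaml
processors:
  resource:
    attributes:
      - key: cluster
        value: eu-west-1
        action: insert
      - key: __replica__
        value: replica-0
        action: insert
```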
G: So I was wondering, Tigran, if you think that makes sense as a solution, as opposed to the metrics transform processor, because I think it makes more sense. I also talked to Josh MacDonald, who's really familiar with metrics, and he kind of agreed that resource attributes are the closest thing there is to external labels in Prometheus.
A: Sorry, I lost the beginning of what you were saying. So you're suggesting to go with the other solution, with the resource processor. So, I guess, using the resource processor it's already possible to add whatever attributes you want on the resource, right, and then you would need to turn those attributes into labels in the exporter. Is that what you're suggesting?

G: Yes.
C: I think it's actually required functionality for most exporters. Actually, there's a PR I linked in the notes; it seems to be a combination of solutions three and four, where it copies all the resource attributes onto labels in the metrics transform processor. So someone actually did write a PR for that.
G: Yeah, that makes sense. Another issue with that, though, is that the Prometheus remote write exporter doesn't allow labels to begin with a double underscore, which Cortex requires. What it will do is prepend a string key in front of it; so for the __replica label it prepends a string in front of that, which changes what the label is, and then, when you export to Cortex, it can no longer do its deduplication, because it's not the same label anymore.
G: I can show you exactly where in the Prometheus remote write exporter code that happens. And that's actually another question: whether that's something that should happen in the Prometheus remote write exporter or not.
G: The Prometheus remote write exporter does not; so, what exists as data point labels, it does not copy them over correctly if they start with a double underscore.
A: Okay, so the problem is not that the resource attributes do not become labels on the metrics, which, I guess, is still a problem; I don't understand why we don't do that. But let's say you place it somewhere in a processor: you have a custom processor which adds this as a label on data points. That still will not solve your problem, if I understand what you're saying correctly.
G: Yes, that won't solve my problem, because the Prometheus remote write exporter does not allow labels to start with double underscores.
G: What will happen is, when that goes to the Prometheus remote write exporter, when it's copying the labels over into the Prometheus time series, it prepends a string in front of it, so __replica becomes key__replica, and when it exports that to Cortex I lose the functionality that I want, because we want to be able to export the label as it is. Cortex uses that for deduplication.
A: Okay, so I don't know why the exporter does that today; we would need to understand. Maybe there is a reason for it; there must be a reason, right, somebody wouldn't do that unless it was necessary. But let's say you remove that limitation: what happens then? We would need to understand that. I'm not familiar with the Prometheus remote write exporter implementation.
G: I talked to Josh MacDonald, who is pretty familiar with it, and what he suggested was that I go with solution number three in my proposal, which is to use the resource processor. I can add cluster and replica as resource attributes, and then what I need to do is make changes in the Prometheus remote write exporter to take resource attributes and copy them into the Prometheus time series labels, and that would bypass that restriction of not having double underscores in the labels.
A: So what I see here, if I understand the situation correctly, is that the resource attributes are ignored completely today in the Prometheus remote write exporter, which is wrong in my opinion; they need to go somewhere, right, we're losing information. Now, in that case, what you're saying makes total sense, I guess. But again, we would need to understand: what's the consequence of the change? Are we breaking something that relies on not populating resource attributes? We would need to validate that. I'm not sure who can help us with that, so...
G: I know from the past AWS interns that I think this is just missing functionality; there's actually a comment in the code that says they need to add the functionality to...
G: I think that might also be another issue, because I know that that specific part of the code was actually copy-pasted from the Prometheus exporter, which is normally used in a pull environment. And, you know, that's done because when Prometheus scrapes, it gets rid of double underscores anyway. So that is also something that might not be needed; I'm not 100% sure. But there are quite a few solutions that we can take here, evidently, like with the metrics transform processor or the resource attributes processor.
A: I would say there is an expectation for every metric exporter to use the resource information; if it's being ignored today, I think that's just a missing thing in the implementation of the exporter. There is an expectation that whatever data you receive, you need to transform it to the equivalent data in your output format, and resource attributes correspond to labels today. That's, I guess, the most logical thing you can do; dropping the data, ignoring it, that's not right.
A: Otherwise you'd have to have two sets of the same labels, right, duplicated: one on the resource, the other somehow magically appearing on the data points.
A: I would try to vet this change with some real users of the Prometheus exporter in the collector. I'm not using it, so I don't know what the implications of making this change are. My guess is it should be okay, but I would like us to make sure that we're not breaking other people's stuff. So, I don't know, maybe, James, you can help, if you're using the exporter internally?
A: You weren't listening during the last minute of what I was saying. I guess the question is: do we know any real users of the Prometheus exporter who can help us clarify that a change we intend to make here is not going to break whatever they are doing today?
G: We at AWS are going to be using this, the Prometheus remote write exporter, to push metrics to Cortex, so we are users of this. I can talk to some of the senior engineers here at AWS as well and get their opinions on this, but...
G: I don't anticipate any issues, because, you know, if I add the ability to convert resource attributes to labels, we're just adding more labels, which should be fine. Yeah, yeah.
E: Yeah, I have kind of a small demo. Can you guys hear me? I hear my signal looks pretty bad. Okay, can y'all see the two terminals, split? Yes? Yeah, okay. So, kind of quick background: the other day I deployed a collector and something wasn't working, and I basically wanted to see what the raw data was that was being collected. I could go turn on the logging exporter, you know, but then I'd have to redeploy it, and it's just going to be a giant hassle. So I basically wanted something that I could leave enabled all the time.
E: You know, something that didn't really have any, or much, overhead. So basically I came up with this tap exporter. So, to get data out of it, you send a curl command. Oops.
E: And so, basically, it doesn't start formatting these messages until a client is connected. If a client's not connected, the exporter is basically a no-op; if a client connects, then it starts formatting all the messages and writing them. That way you could, in theory, just keep it enabled all the time and it wouldn't have any kind of overhead unless, you know, you're actually connected to it.
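For the flavor of the interaction, a sketch of the client side; the port and path here are hypothetical, since the demo's actual endpoint wasn't captured in the transcript:

```sh
# Attach to the running collector's tap exporter; the exporter only
# starts formatting and streaming data while a client is connected.
curl http://localhost:12345/tap
```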
F: I usually used the Prometheus exporter to do that if I wanted to view it in the past, but that's obviously always running. This sounds very similar to the stuff that's usually exposed through zPages, but I'm not sure it would make sense as a zPage, because this is kind of specifically what's going through the pipeline.
E: Yeah, so I guess this would be more like an alternative to the logging exporter, in a way, or the logging-slash-file exporter. And, I guess, with the Prometheus one there's some overhead of using that, right? Like, it's running all the time, or does it only run when... yeah?
E: I think it does still run all the time, I'm pretty sure. But yeah, and of course this works for traces as well, right: traces, logs, and metrics, whereas the Prometheus thing is just metrics. So you could extend this, you know, like have filters on it. I haven't done that, but you may be able to say, like, a type filter, you know, and expand it.
E: I think, ideally, because if it's there and has zero overhead, then when you need to use it you don't have to go and restart your collector to log data out. Otherwise, if you want to look at the raw data, or log it to a file, you'd have to go and push an update to the config and restart, and so on. So yeah, ideally, you could use this in production.
E: Anyway, that's kind of the demo. If anybody has any other ideas, I'd be curious to hear them, about how they might use something like this, or if you have similar kinds of debugging-type issues.
H: Let's see, I think that was it, right? That was the last thing on the agenda, yeah. Unless anyone else has something they want to discuss? Yeah.
I: One minor update: people probably got emails, but if you haven't, there are instructions online. The OpenTelemetry governance committee election is taking place next week. Anyone who meets the, I don't know what the bar is, but some set bar for GitHub activity in the repositories, should have automatically received an email. If you haven't, and you'd like to vote, there are instructions on the community repository for enrolling yourself as a voter in the election. I think voting takes place next Monday through Friday.