From YouTube: 2021-12-15 meeting
B
Okay, so I guess I can share my screen and we can take a look at contrib for, I don't know, five minutes, ten minutes.
B
And the last item on the first page, six days... okay, so seven days. Yeah, this one here is triaged already.
D
Yeah, I'm here. I think we're basically waiting on the person who opened the issue.
B
All right, this one here, there is a PR and I think it is about to be merged, yeah. Then there are a few items that I opened. I think most of you are aware of those already, at least all the approvers. Those are items that we either discussed during the past SIG meeting or, I think in the case of the title one, something that some of us were already trying to do in some cases.
B
So if you're not aware of those items, even if you're not an approver, just share your ideas, share your opinions; those are things that are probably going to affect contributors soon. I don't know how soon, but soon. This one is on me. It is part of another, bigger task tracked here on the remote sampling feature: splitting the remote sampling feature out of the Jaeger receiver. Yeah, there's a PR open for it, if people want to review it, this one here.
B
So the Jaeger receiver basically has this feature where you can specify a JSON file to be served to the clients, or it can query an upstream Jaeger collector for sampling strategies. So it is basically just splitting that feature into its own extension. We discussed, I think it was part of the governance call, that it would make sense to have it split out from here, allowing other people to take advantage of it without requiring the whole Jaeger receiver to be part of it.
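For context, the JSON file being described here is a Jaeger sampling-strategies file. A minimal sketch of its documented shape (service names and numbers are invented for illustration):

```json
{
  "service_strategies": [
    { "service": "frontend", "type": "probabilistic", "param": 0.5 },
    { "service": "checkout", "type": "ratelimiting", "param": 10 }
  ],
  "default_strategy": { "type": "probabilistic", "param": 0.1 }
}
```

Clients using Jaeger remote sampling poll this document and apply the strategy matching their service name, falling back to the default strategy.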
G
Yeah, we discussed that we want to offer this so people are able to switch from the Jaeger client to OpenTelemetry. We want two things: we want to support a sampler in every library, the Jaeger remote sampler, and give them a way to use Jaeger remote sampling with OTLP exporting. So in order to do that, we had to decouple it from the receiver and allow them to use the OTLP receiver plus this extension for the feature.
B
So we have to think about how to handle that kind of situation. Do we want to still have the remote sampling feature as part of the Jaeger receiver and just have an extension that does pretty much the same, or how do we handle both being used on the same port? But we can discuss that in the issue. This one here is related to that; it is just another task in accomplishing the whole feature.
B
So yeah, I pinged people. Oh yeah, is James still part of the project? I just... I think I remember, unfortunately.
B
The metrics transform processor, right. And I think Kevin Brockhoff is also a co-owner of a couple of modules. Okay, those are not gonna be orphans. I think he also stepped out.
B
Okay, so I'll complete this issue here later. All right, so Isapoonia is the new owner for the metrics transform processor, right?
B
So we have two options. The first one is to not do anything and just fix whatever broke (and I think we did fix it already) and just wait for 0.42.
B
The second option is to find a way of running the release process locally, and either just generate the binaries locally and upload them manually to the release, or have the script upload them automatically. I don't know how it works; I haven't seen the script yet.
B
The problem was a change that I sent for remapping arm64 to aarch64. I think it broke one of the scripts during the release process, so yeah, I should probably go...
B
It's not gonna succeed. So we fixed that already, but if we run it, it's not gonna work for the label, for the tag that we have, right? So we would have to create a new tag, like 0.41.1, and do a new release. I don't...
B
Yeah, okay, yeah, that's what I thought. So some people are okay waiting for 0.42. If nobody's using 0.41, then vendors do not have to care about 0.41, so they can just wait for 0.42. And people are asking for 0.41, but I haven't seen a concrete reason: there is nothing in it that is not in 0.40 that people want to use right now, or that people cannot wait a couple of weeks for.
B
Our next release is on the 25th... on January 5th. So it's three weeks.
G
Why is the renaming to aarch64 important?
B
Yeah, because arm64 means something different when you're doing an RPM. When you're installing an RPM that is marked as arm64 and you are on an Amazon Graviton image or VM, it will fail to install, because it thinks it is not the same architecture, even though the actual binary runs fine. The RPM metadata is causing the system to refuse to install it.
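The renaming being discussed can be sketched as a small mapping from Go's architecture names to the names rpm expects (this is an illustration, not the actual release script; the function name is invented):

```shell
# Map Go's GOARCH names to the architecture names rpm uses.
# On a Graviton VM `uname -m` reports "aarch64", so an RPM whose
# metadata says "arm64" is rejected as a foreign architecture.
rpm_arch() {
  case "$1" in
    amd64) echo x86_64  ;;
    arm64) echo aarch64 ;;  # the remapping that broke the release script
    386)   echo i386    ;;
    *)     echo "$1"    ;;
  esac
}

rpm_arch arm64   # prints: aarch64
```

The binary itself is identical either way; only the package metadata name decides whether rpm will install it.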
G
Yeah, I love Amazon.
B
Yeah, so another solution (I don't know if I mentioned it already) is to just get contrib as part of the releases repository. And from what I see, there is an issue assigned to you for that. Yes, and there's a PR open for that as well; I just opened a PR today. Oh, perfect.
B
I think it is, yeah. It is failing right now because of another issue: it looks like the signalfx exporter fails to compile on ARMv6. I opened an issue here, and the question is: what do we do? Do we just remove that component, and perhaps others, or just remove ARMv6 from the matrix? This actually works locally; when I run goreleaser locally, it just works. So I don't know what's wrong here.
B
I do use 1.17-something locally, and it is the same on the workflows. So this is the one that actually fails, so yeah, 1.17.
B
Yeah, so there's an issue there. I don't know why we had ARMv6, but I would just remove it for now and see how it goes, and then perhaps...
G
We can start looking at it right now, but let's not stop the process that you have here because of that. Let's try to continue with all the components for the moment. So, a couple of things about this one... if there is no other topic, because if there are...
B
We were on the triaging, so we do have a schedule, an agenda for today. Okay, so we...
B
...this one here, and discuss it on the issues themselves. Yeah, all right. So I think we're now way over the time box that we had, but, as I said...
E
Yeah, yeah, and the fluentbit extension. I want to understand what we do about these, because they're a potential security threat: they execute arbitrary things. I just supply a command and it runs it, which is a problem, especially if we combine this with the ability to remotely receive a configuration.
B
Yeah, so I shared a couple of things on those already, and I think you know what I have in mind. But before doing anything, I need to look at the code myself. I really don't know what they're doing around the actual execution of the commands. I don't know whether the input includes untrusted strings from users, or if it's just trusted input from admins. If this is all about input from admins, then that's fine.
E
...enabled somehow, using the config sources that we have in the collector, or using the agent management that is maybe coming soon, right? Somehow the configuration file of the collector is received from some other remote source. How exactly that happens is not important, but because it's remote, you cannot trust it, because it can contain anything, right? There may be some malicious party there. And "anything" means essentially really anything with these two components: you put a command there and they literally execute whatever you put there.
B
Yeah, so I guess at the very least we have to ensure that the pipes used for this communication, for retrieving the remote configuration, are secure. So no HTTP, only HTTPS, yeah.
E
You have to assume that the remote source from which you're receiving the data is compromised. You have to make that assumption; that should be our threat-modeling assumption, I believe. Remote sources cannot be assumed trustworthy, they can be compromised, and you may receive anything from there, so you should protect yourself from those malicious sources. I think that should be our stance in the collector. We cannot say that the collector trusts a remote source because it can actually be secured; that's not good enough, in my opinion.
B
So perhaps the nuclear option is actually the best one right now. But at the same time, we could provide a distribution called, like, "unsafe" (just like there are unsafe operations in programming languages): an unsafe distribution for people who want to use those components. And at the same time we could implement some sort of accept list for the commands, or the processes, that we should be executing.
H
I don't think this is something the collector should be doing in the first place. I don't think we should be putting effort into trying to create allow lists of acceptable commands. There are systems that exist for process management; if a user wants to ensure that a process is always running alongside the collector, there are ways to make that happen. Those systems have that as their core competency and focus on it. We don't, and shouldn't.
H
Right, yes, I'm saying: remove the components, burn them with fire, pretend they never existed, and hope they never come back.
H
Reading files, reading log files, is at least part of the core competency of the collector, though; that's a thing we should be doing, and we need to solve the security around that. Starting arbitrary processes isn't. And yeah, Bogdan just linked the PR for the "create a subprocess extension", which I think kicked off all of this discussion, and I think I commented on there pretty much exactly this.
G
But I don't think this is a real problem. For me, on the other side... I understand where everyone is right now with log4j, and everyone thinks about that. But the Kubernetes master has the same problem: if somebody can send a file to the Kubernetes master, it will start a job for you.
B
Now, I guess the main difference here is that for Kubernetes, the owner of the cluster can set restrictions on resources and quotas and so on and so forth, and they make sure that whoever is applying a YAML file to the cluster has the rights to do so. In this case we are assuming that a server is compromised and distributing config files to other agents out there, and those agents would just accept that configuration file and do whatever the compromised master is telling them to do.
G
So, to be honest, we don't have a problem right now, let's be clear, because the file can be specified only by whoever starts the process. And if the person who starts the collector process can start any arbitrary binary anyway, you don't have to use the collector to start an arbitrary binary, correct?
H
However, remote config is something that is useful to us, that we know is widely desired by our community, and that we have plans to implement. Running arbitrary commands based on configuration, I don't think, is. If we have to give up one or the other, I would much rather give up running arbitrary commands based on configuration than remote configuration.
H
Even at start time. Let's say I'm going to load configuration from an S3 bucket. Even if I were to pull the contents of that S3 bucket, verify it, and then start the collector, the contents may have changed between the time that I pulled it and verified it; there may be different contents in that bucket when the collector goes to pull it. So anytime you've got a remote configuration, that's usable...
H
...are already compromised? Yes, all the more reason why I simply don't think that running arbitrary processes out of the collector is a good idea.
E
You can have a shell file which is supposed to be executed periodically, and somebody can go and change it. How is that different? It's not different, I mean, but that's fine. I agree with most of what you said, Anthony, except one thing: I think there is value in this feature.
E
It tries to bring in the concept of, essentially, plugins: something that is created independently, generates OpenTelemetry or OTLP telemetry data, and talks to the collector. This, in a way, is a notion of plugins, which I think people find valuable. So I wouldn't completely agree with the idea that this doesn't belong in the collector. Maybe.
H
As a processor... I would have more sympathy for that argument as a processor, something that can't be injected at the head of a pipeline. But if it's producing data to a receiver, that process can exist anywhere, it can be managed anywhere, and it doesn't require anything special beyond what the collector already provides.
E
Yeah, but there is value in convenience, right? You're disregarding that completely. The user experience is very different. If you have this notion, you may even want to distribute some of these receivers which are developed independently.
E
They are independent executables, but you may find them valuable enough to call them plugins and put them into your distribution, potentially. I'm not saying we should do it, but if you think from the plugins perspective, it makes a little bit of sense, right? It's not like this is completely unrelated stuff that we're executing here; this is something that is a source of telemetry data for the collector.
B
Yeah, but the way that it's proposed, at least here, it says: just launch and monitor a subprocess. The subprocess is configured with an executable path and the arguments.
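To make the concern concrete, the shape of configuration being debated looks roughly like this (field names are paraphrased from the discussion, not taken from a merged component):

```yaml
# Hypothetical sketch of a subprocess-style extension config.
# Whatever binary and arguments appear here get executed, which is
# exactly the security worry when config can arrive from a remote source.
extensions:
  subprocess:
    executable_path: /usr/local/bin/some-telemetry-sidecar
    args: ["--listen", "localhost:9999"]
    restart_on_exit: true
```

Nothing in such a config constrains what `executable_path` may point at, which is why the discussion keeps returning to trust in the configuration channel itself.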
E
They tried to generalize it, but I think the goal is not really that. I think they created some components which read telemetry from some hardware or something, and they have their own executables for that, and they just want to send the data to a collector, because they actually use the collector in the same application as well. I think that's the goal.
B
So, to be concrete: a receiver or an extension mechanism that is based on some JAR or some plugin mechanism. To be concrete, something like a gRPC plugin, right? HashiCorp's gRPC plugin is, I think, the best thing we have right now in the Go world. Yeah, yeah.
E
And which we don't have in the collector. But people ask us about plugins; it's not like nobody needs it. It seems like it is, well, maybe not a very popular thing, but periodically somebody comes and tells us they want something like that. Anyway, all I'm saying is: let's not dismiss this capability, right? I think I see some value in having something like this, maybe not exactly in this form. Maybe it should be implemented differently, but let's not completely disregard the use case.
B
All right, so the first item for today's agenda is... oh yeah, the release. I did the past couple of releases, and at least for me it takes a very long time to get approvals, and/or merges in the case of core, because all of you apparently are in the U.S., and even worse for me, you're very far away in the U.S., so you can only review and approve or merge when I'm already off.
B
I think it is a problem, and I don't know how it should be handled. Do we need more approvers and maintainers here in Europe, or, you know, in a different time zone? Or should we just let the U.S. folks do the releases for now?
C
I'm an approver on contrib, so yeah, I try to review things, but I may miss it. I'm happy if you ping me if I'm missing something and it's urgent, sure. But I'm not an approver on the core repo, so I can't approve there.
B
Okay, that's good. All right, so we can try a couple more times, but at least it was a pain point for me during the past couple of releases.
I
Sure, so I have a question related to config for the OpenTelemetry Collector. I know that there are some enhancements in the config provider, and there are some ideas about having different means of providing config than just the file, and I want to check what the current status is. One use case I'm thinking about is providing the config to the OpenTelemetry Collector via an environment variable. The AWS Distro for OpenTelemetry Collector has such a capability, and it's very useful in certain environments, like Fargate.
I
So I'm wondering if this is something that would fit the OpenTelemetry Collector, or maybe there's an alternative approach to that. Yeah, this issue is about making it easier for custom distros to provide their own configmap providers. So I wanted to ask what you think we should do for this use case: whether we should add this capability to the builder, which I think makes sense on its own, or maybe this is something that should be implemented in the collector.
I
Yeah, so I think the idea is that you have one environment variable which contains the full config, the whole contents.
G
But that's not related to the config provider, the configmap provider, because that's main functionality: the main should read that environment variable, parse it, and pass it to the service. Essentially, we separated who reads the flags or environment variables versus who parses them. So I don't think this is a capability of the provider; it's purely a main-function capability.
I
Well, certainly... yeah, so I want to have the ability to provide the config via an environment variable, and I don't think that is currently possible. And I don't think it's currently possible when using the builder, because you cannot provide your own configmap provider easily. So yeah, I'm wondering what the approach should be here: if we want to extend the builder to make that possible, or just add the capability to the default provider to support environment variables as well.
I
I think the problem here is that in these environments it's not very convenient to provide any sort of files. It's better to just have the binary executable and be able to provide config using different means.
E
With the command line, remember, we said that if we allow specifying the type of the config source as some sort of URL, then you could have "env", for example, as a config source type, and you could have "env", colon, the variable name, right? Something like that, for example.
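A minimal sketch of that idea (the `env:` scheme and the flag usage are the proposal under discussion here, not an existing collector feature; the variable and function names are invented):

```shell
# The whole collector config travels in one environment variable.
export OTEL_CONFIG_YAML='receivers:
  otlp:
    protocols:
      grpc:'

# A hypothetical invocation would name the source type plus a locator:
#   otelcol --config "env:OTEL_CONFIG_YAML"
# in the same spirit as:
#   otelcol --config "file:/etc/otelcol/config.yaml"

# The main function would then just look the variable up and hand its
# contents to the service (here simulated with printenv):
config_from_env() {
  printenv "$1"
}

config_from_env OTEL_CONFIG_YAML | head -n 1   # prints: receivers:
```

This is convenient in file-hostile environments like Fargate, where mounting a config file is harder than setting a task-definition environment variable.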
I
Yeah, you are saying that if we want to retain this provider, we want to use the --config parameter of the binary. So the only...
G
I'm not saying anything... you don't need to implement the provider unless you intend to use it via the --config flag. If you have a defined, standard environment variable name, then what we have to do is look up that environment variable in the main file and read the content. We already have a bytes provider, so put that content in the bytes provider, pass that to the service, and we are done. If we have a different injection point, which is not the --config flag but directly this variable, then there are other solutions.
E
If I wanted this feature, I would definitely want it to be done exactly that way, via the command line parameter, because it fits very nicely in the model of having different config sources. The environment is just another config source; I think it fits very nicely there conceptually, so I would prefer it done that way. Yeah, yeah!
H
We need the config source configmap provider. We need that meta-config level that I think you and Tigran and I have discussed in the past, where you would be able to say "here are my config sources, I want them combined in this order," and do that through a configuration file or through the command line. I think that's the piece that's missing.
E
Yeah, that's what you wanted to do, right? That's where we stopped, where we paused on the PR that I proposed. So if we do that, even with just a single source, which you can provide from the command line, then that solves this problem. The environment becomes just another possible source, like a file, for these.
G
Anyway, I think I have to run. I will give you, at least Anthony, the merging part today to review, because that's very simple, merging files, and then we need to do the smarter sources separately.
H
Okay, yeah, we've been talking about this internally at AWS because of the ECS team's desire for S3, which I think has come up in the past. I've had some ideas on how we might achieve that, but...
G
I think we discussed that there is nothing stopping you from doing that manually and testing it, if that's okay. I think I remember clearly.
B
All right, so, action items for this item here; I am kind of lost here. What have we decided?
E
If you need this urgently, you can do it in your own distribution. If you can wait a bit, then, once we have this ability to specify different types of config sources from the command line, I think it's very reasonable to have the environment as one of the possible config source types that we support in core.
B
Yeah, so as an action item, an issue will be created with the proposal for this one here, with the concrete steps, and I think Anthony, Bogdan, and Tigran had things to share as well.
B
All right, so, next one.
D
Hello, hey, real quick, guys: this is my first time on this call. I'm an SE and I'm trying to put together a collector config. Everything's working great; we're using a lot of processors in this config to store label values: the metrics transform processor, et cetera.
D
I ran into an issue where I'm trying to extract a label, or sorry, extract a value from an attribute, and I'm just getting nothing. I've tried several different canned metrics just to prove my theory, and I've posted the debug logs to our internal engineers at our company here, and they said that none of these processors can currently access data point attributes. Looking through the code (I'm more of a Python developer, but looking through the Go code), the logic seems to be that all of the regex extract code is included by virtue of the actions...
D
I think it's called action.go, out in the open repo. So I'm hoping we can get a fix, because I feel like this could be something a lot of folks would want to use when the data sources are coming from things like collectd or Prometheus data sources.
H
I think there is, or was, a separate resource processor, a resource attribute processor, that handled that for resources, and the mention of data points sounds like... are you trying to deal with attributes from metrics rather than traces? In which case, I don't know that we have an attributes processor that will process metrics, because of the complications that arise when you add or remove attributes and need to do dimensional re-aggregation.
D
Yes, yes, you hit the nail on the head. I am trying to do it for metrics, and there is a need for this in multiple use cases, for several different clients, so yeah.
H
There's the metrics transform processor, but even with that I'm not sure we'd be able to remove attributes, if that's a thing you're trying to accomplish, simply because if we were to try to do that, we would need all of the data points that have that attribute, so that we could remove it, and all of the data points that don't have that attribute, to be able to re-aggregate effectively.
D
Yeah, I mean, really I'm just trying to change the value of... yeah, it's just the value of a label, right, in this case. From this perspective, for the metrics transform processor, which I have tried, the extract is only available on pattern match and not for extract or upsert, right? So I don't have the ability to do that with it. But it would be great to do it here, or anywhere. But you did mention that there's a purpose for why you guys did not include that.
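For reference, the kind of per-value renaming the metrics transform processor does expose looks roughly like this (metric and label names are invented; this is a sketch of the documented `update_label`/`value_actions` shape, which rewrites fixed label values rather than performing the regex extraction being asked for):

```yaml
processors:
  metricstransform:
    transforms:
      - include: system.cpu.usage
        action: update
        operations:
          - action: update_label
            label: state
            value_actions:
              # Renames one known value to another; no regex capture here.
              - value: idle
                new_value: free
```

Because each rename targets a fixed value, the processor never has to merge data points whose rewritten labels collide across batches, which is the re-aggregation problem raised above.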
H
...there's another data point that has the target value of that attribute already; you would need to combine the two, and the collector can't know if there's a data point with that target value somewhere else.
H
If you change the value of an attribute... I think that's where the concern comes in. Even if you rename an attribute, though, I think if something else has an attribute with that name, you then have to combine those values. And if another data point that has an attribute with that name comes in in a different batch, you won't have all the data to be able to accomplish that.
H
Curtis, if you can create an issue describing what you're trying to achieve, we can use that as a place for a discussion of the options that are available.