From YouTube: 2022-02-09 meeting
A
And I guess what I wanted to do is to discuss whether this is the right way of doing things.
A
So first, I think it kind of conflicts — or perhaps "conflicts" is not the right word — but it does have a few features that we should be having as part of the transform processor. He's not here, but I think he's working on that, and I don't know if I'm really up to re-implementing the include/exclude rules without reusing code from other places. So I would like to hear other people's opinions on that.
B
So yeah, I have an opinion on this. I think there is a proposal that was made about having this ability to do signal translation, which is a new, separate feature, but as part of that there was this sub-feature where you would be able to connect pipelines with one another.
A
So the routing processor can take only specific attributes from the data, and it's only a positive match: if it sees a specific attribute, then it routes based on that. Now, this feature request here is to implement an include/exclude rule set for each route, right? So we have a route for tenant X and environment staging, for instance, and then it goes through a specific exporter, and then tenant X and environment production goes through another exporter.
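For readers following along, the setup described here might look roughly like the sketch below, assuming the routing processor's `from_attribute`/`table` configuration shape at the time; the attribute name, values, and exporter names are illustrative, not from the meeting:

```yaml
processors:
  routing:
    # Routes on a single attribute with exact-value (positive) matching.
    # This is the limitation under discussion: tenant and environment
    # would have to be pre-combined into one attribute.
    from_attribute: tenant_env
    default_exporters: [otlp]
    table:
      - value: tenant-x_staging
        exporters: [otlp/staging]
      - value: tenant-x_production
        exporters: [otlp/production]

exporters:
  otlp/staging:
    endpoint: staging-collector:4317
  otlp/production:
    endpoint: production-collector:4317
```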
A
Yeah, so it does, and then this preprocessor would have to store the outcome of this calculation as part of a resource attribute or something like that. And then we get into a separate issue — I think it was Max who mentioned that they were having a problem with removing such attributes from the data, because otherwise this data reaches the back end, and they don't want to see it.
A
That's true; from the context it won't get there. I don't think we can route based on the context yet, but that's a minor issue, yeah.
A
So, Will is here in the call.
E
Well, yeah — on further investigation, I was able to put together a combination of the attributes processor, groupbyattrs, and then routing, and that seems to work.
E
It works, at least for my use case. And if you want the idea of ordered execution of rules — like, the first matching rule will make a routing decision and that will take effect — or if you want it mutually exclusive, it's possible to do all of that with the attributes processor, just by using combinations of insert and upsert or update, things like that. So you can override a routing decision if you need to, or you can let the first decision take precedence.
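The insert-versus-upsert trick described here could be sketched as follows; this is only an illustration of the semantics, with assumed attribute names and include filters, not the speaker's actual configuration:

```yaml
processors:
  # Rule 1: 'insert' only writes the key when it is absent, so the first
  # matching rule wins and later rules cannot override it.
  attributes/rule-staging:
    include:
      match_type: strict
      attributes:
        - key: deployment.environment
          value: staging
    actions:
      - key: route
        value: staging
        action: insert
  # A rule using 'upsert' instead would overwrite an earlier decision,
  # letting the last matching rule take precedence when that is desired.
  attributes/rule-fallback:
    actions:
      - key: route
        value: default
        action: insert
  # Group by the computed attribute, then route on it.
  groupbyattrs:
    keys: [route]
  routing:
    from_attribute: route
    table:
      - value: staging
        exporters: [otlp/staging]
```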
E
So you get that flexibility. I'm okay with that — I mean, I guess the issue I see now, mostly, is that it wasn't a very obvious solution, and there are other folks in the community that want to accomplish similar things and are just asking: how do I do this? I don't see how it's possible to do this. So, I mean, I'm okay with the state of things.
E
Maybe the enhancement could be like a recipe book for solving these sorts of problems, and when we're talking about, okay, how do we assemble different pipelines and route to different ones — I see that as being like an optimization that we could pursue. The only way I could see doing that is just having different exporters sort of connect to different receivers on separate pipelines and stringing that all together, but you're doing all that over gRPC locally.
E
So I think, for the community, I was trying to get more feedback on: okay, what are your routing use cases, and can we solve your use cases with what I've put together? And so the question is still out there.
C
Maybe long-term, Juraci, once the transform processor is ready — it has to have this condition library, a part like the WHERE statement in SQL, which is what Anurag is doing — and if that's a library, you can just integrate that library as one of the things and say: hey, route these where the condition matches. You have everything written, and you would be able to just plug that in here.
A
All right, so it looks like we have two things here. The first is: Will is happy with the status quo right now; we just need better documentation and perhaps even a recipe book — and I think a recipe book is something that we should really consider for GA, showing people how to accomplish things, not only for the routing processor but in general. And the second thing is for the future.
A
Once we have this feature of connecting pipelines together, then we might reconsider how we do things with the routing processor, and whether it is actually needed. Yeah, for that.
E
Yeah, and I like the point that Bogdan brought up with Anurag's work on the transform processor and the query language — that would enable far more flexibility than the current attributes-matching stuff that we can do. So I'd like to see that happen.
A
But I guess that kind of contradicts what Tigran said — that we probably do not want to invest that much right now in the routing processor as it is, but wait for the connecting of pipelines through the future work, right? So Anurag's work can be completed, and then, once we have this connection of pipelines there, we can think about how to tie those two things together.
A
Instead of investing right now in working on the query language for the routing processor — or rather, in using the query language there.
C
Select query, essentially: you have a query language for selecting spans for different actions, and one of the actions we can have is to do routing based on the selection. But that does not solve your — I mean, somebody else, I think, should look into the connected-pipelines idea and see whether we need the routing processor, or we need a connector, or whatever we call it, that does this.
B
My objection is primarily around the fact that if this is not the general approach that we would like to take — like, a long-term approach — it may not be worth spending too much time on this, since it's probably not going to be what stays with us for a long time. So yeah, small improvements here and there, I mean, that's fine. After all, if we need to solve the real problem, then we need to solve it, right?
B
Probably — at least, it would be my preference as a user as well — to have this capability there. The way that we do it with the routing processor is somewhat clunky, right? It doesn't really fit nicely into the model of the pipelines that we have, where there are processors and there are exporters connected to the last processor. We kind of break the model by having this processor there.
B
It has a much broader scope. It's about connecting pipelines of different signal types, so signal translation is the primary goal there, but as part of that it talks about how you connect the pipelines. So essentially, inevitably, it leads to the fact that you can have pipelines of the same type connected — and that's routing; you get the routing right there.
A
All right, so I guess, then, we have three things here. The first one is the query language, or whatever comes as a result of the transform processor, that we could use in a next iteration of the routing processor. The second thing is that perhaps routing should be an aspect of the transform processor — one of the outcomes of the processing.
A
That's what I understood from both of them, and the third one would be this connection of pipelines — I mean, a transform processor together with the connection of the pipelines.
A
Got it, okay. So what Bogdan mentioned, then, is probably what I thought before, in terms of replacing our from_attribute mechanism with this query language, which would then make a go/no-go decision for this span.
A
Okay, all right — so I'll try to summarize what we discussed here in the agenda. If there is anything missing, feel free to add it, folks, and if I misunderstood anything, just correct me there as well. And I think we are ready for the next topic: the redaction processor.
H
Sure — hi, I'm Leo. So I've been working on adding a new processor to the collector. It's currently on its second PR, which is the stage where the actual implementation goes, and a couple of questions have come up.
H
The idea of the redaction processor is that it addresses business needs: there's a need for technical controls for compliance, for example. Basically, there are some kinds of data you cannot send across borders — private data, more generally; it might be credit cards, and in China you cannot send geographic coordinates across borders — and those kinds of data might end up in traces.
H
So the redaction processor does two things. One: it enforces a schema — it only allows a defined list of attributes to go through. And two: it actually checks the values against regular expressions and makes sure values that shouldn't go through don't.
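As a rough illustration of those two behaviors — the field names below are sketched from this description and the open PR, and may differ from what finally ships:

```yaml
processors:
  redaction:
    # 1. Schema enforcement: only attributes on this list pass through.
    allowed_keys:
      - http.method
      - http.status_code
      - db.system
    # 2. Value checking: values matching these regexes are masked even
    #    when their key is allowed (e.g. card-number-shaped strings).
    blocked_values:
      - "4[0-9]{12}(?:[0-9]{3})?"
```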
H
So the PR is at the stage where we need a sponsor from a long-time contributor, and I was talking to Dmitrii about this. Something he raised is that there's a new processor also in the process of being added, called the transform processor, and it can do about half of the same things.
H
So
the
transform
processor
can
also
enforce
a
schema
on
actually
on
keys
on
attributes.
It
cannot
do
anything
but
their
values,
but
it
does
have
a
keep
keys
kind
of
feature
in
it.
So
the
question
is:
how
do
we
resolve
this
overlap
and
there
is?
There
are
requests
for
the
other
half
of
the
features
that
the
russian
processor
does
that
I
noticed
when
I
was
looking
through
the
issues
from
other
folks.
C
Leo, everything sounds great, but I have two questions here. On whether or not we have a separate processor — it may be good to actually have a separate processor, just for people to signal things. But the question is about the configuration and the language that we use there; that's where I have the most problems. What I'm trying to avoid is: this redact processor has one way to specify rules, then transform has another way to specify rules.
C
Okay — for selecting what spans or what elements we apply to, and for how we define the action of delete, block, or whatever it is: I think it's important across the project to have consistency at this level.
I
Or replace — I think replace is a really valid use case: I want to suppress user names, but not the rest of the information there, or something like that. Replace is editing a value — like looking at, say, a URL and suppressing the portion of the URL that corresponds to a user identifier, something like that. Yeah.
C
That's okay. So anyway, now: can we marry the way we define this language in the transform processor with this language, so they are similar and compatible?
B
These redaction capabilities, as you describe them, are strictly a subset of what the transform processor should be able to do. The question is: do you need it to be more specifically defined so that it is easier to use? That maybe would be one of the justifications for why it needs to be a separate processor. Otherwise, from the perspective of the ability to do it, I would absolutely expect that the transform processor should be able to do all these things that this redaction processor can do.
B
Now, the transform processor may be difficult to use, and that would be, in my opinion, maybe the only justification for why it's not a very suitable approach for this redaction. Like: I want this redaction processor to be simple to use, and the transform processor requires the person to learn a new language — that's maybe a bit too much. Other than that...
C
First of all, Leo, as an action item, I would evaluate — and maybe we missed something — so, I sent you a couple of documents here in the chat, and also in the agenda doc I put a couple of links. So first, go read the processing language that we are trying to implement right now, and tell us if that will work and fix your problem — like, can you extend that language with this functionality, or can we not?
C
Maybe we fail, and then we discuss from there. If the extension can be done, we have two options: we either implement this extension of the language in a separate processor, or we add code to the transform processor to support this — and it's up to you. But I think there is some preparation work to be done here, which is understanding whether the language that we are proposing is going to work for your purpose, and what is missing or not.
C
And after we have this information, I think the simplest way, or the preferred way, would be to extend the current transform processor to support this. If that's not possible, let's have it separate. Now, as I said, another option is to have it separate from the beginning and say: okay, this is called the redact processor even though it's using the same language as the transform processor, because it's more or less focused on this problem.
J
I don't want to extend the conversation down a rabbit hole, but for context: this feature would be super useful for us as a user — Shopify. Specifically, we do some MySQL statement obfuscation, and — so, in Ruby, the regex here — it's a collection of regexes; they're not RE2-compatible, so they're slightly modified, but it would be...
J
We found doing this stuff on the clients untenable at scale, so the collector is the more appropriate place to do it. It would be wonderful — we have some custom stuff that's awful, and I don't wish it upon anyone. It would be wonderful to be able to coalesce around the transform processor and not have a bunch of folks — everyone on my team — learn a bunch of custom tooling. The only other thing I would add: another area we've been exploring is fronting...
J
Some
of
the
like
we've
called
obfuscation,
is
fronting
some
of
the
obfuscation
with
a
cache
we
found
that
can
be
useful.
It
depends
how
you
deploy
your
collectors.
I
don't
think
that
would
fit
into
this
proposal,
but
just
something
to
mention.
I
know
some
folks
at
datadog
have
done
this
recently
with
their
database
statement
objection,
and
I
think
it
has
some
significant
performance
improvements,
so
just
passing
that
along
as
well
but
yeah.
I
would
love
to
see
this
if
possible,
not
that
I
have
any
influence
here.
E
Yeah, there's obvious community need for satisfying this use case, and I'm just trying to understand how far off the transform processor is from closing that feature gap.
C
So the transform processor is the next item: I'm asking people for another approver to take a look at the implementation. We are almost ready to have a first implementation, but I need somebody else to take a look at it.
K
On that — I'm about halfway through another review of it. I reviewed it initially when it was all one big PR, and I know there have been some changes based on your feedback on this pull. So I'm about halfway through; I hope to finish that today, but it would be good if others could look as well.
A
So, to recap this point: I understood that Leo is going to take a look at the current proposal for the processing language and see if it provides everything they need for the redaction processor, and then come up with a proposal for either incorporating the needs of the redaction processor into the transform processor or the query language, or still going ahead by specializing — making a new redaction processor, or reworking the PR. All right — how does that sound, Leo?
C
Eric, I have one question for you, for Shopify. I saw a bunch of — sorry — a bunch of replace statements in what you added, or what you...
J
You're passing in a series of regexes to, you know, essentially be able to perform these regexes on arbitrary span attribute values. So you could configure, like: hey, if a span has this key, please perform these regex replaces on the value. That's what we're doing.
J
Right — so we replace it with, like, a question mark. We're looking to pull out — also, in a database statement there's lots of potential PII, so there are some basic rules. It will vary depending on the database — databases are complicated — so we only do MySQL.
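The kind of regex-based statement obfuscation being described can be sketched in a few lines. This is an illustrative stand-in, not Shopify's actual rule set (their regexes are Ruby-flavored and more involved):

```python
import re

# Hypothetical obfuscation rules: replace literal values in a MySQL
# statement with '?', applied in order.
OBFUSCATION_RULES = [
    (re.compile(r"'(?:[^'\\]|\\.)*'"), "?"),   # single-quoted string literals
    (re.compile(r"\b\d+(?:\.\d+)?\b"), "?"),   # numeric literals
]

def obfuscate(statement: str) -> str:
    """Apply each regex rule to the statement, masking matched literals."""
    for pattern, replacement in OBFUSCATION_RULES:
        statement = pattern.sub(replacement, statement)
    return statement

print(obfuscate("SELECT * FROM users WHERE email = 'a@b.com' AND id = 42"))
# → SELECT * FROM users WHERE email = ? AND id = ?
```

A real deployment would also need the "sanitized" marker attribute mentioned below, and RE2-compatible patterns if implemented in the Go collector.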
J
We
don't
worry
about
redis
or
postgres
or
whatever
so,
but
yeah
the
being
able
to
basically
do
arbitrary
collections
of
regexes
and
then
you
could,
I
suppose,
specify
the
character
you
want
to
replace
with
or
the
rules
we
also
update.
We
add
a
little
attribute
as
well
to
say
like
yep.
This
has
been
sanitized,
you
don't
have
to.
You
know
anyway,
ping.
A
few
other
questions
leo
as
well.
Happy
to
you
know
like
I
think
it's
awesome
work.
So
thank
you
all.
A
There is an issue currently with the tail sampling processor: it constructs new resource spans from scratch, and new instrumentation library spans from scratch as well. That means information that came from the pipeline is effectively erased. There is a proposal to change that to reuse whatever came from the pipeline, so we would now split the resource spans into multiple, based on the number of spans within the resource spans.
A
This might have side effects, so users of tail-based sampling are encouraged to watch those issues and, if possible, when the time comes, do some pre-testing of this feature before we publish it live.
A
And that was it for that one. The next one is from Brian: the crosslink dependency tool.
L
Hi, yeah — there's a new PR in the go-build-tools repository, and I'm just trying to get some eyes on it.
L
I think the only owners right now are the Go maintainers and the collector maintainers. Anthony has done a few passes at reviewing, but we would really be grateful if anyone else could take a second look at it. I know there are callouts in contrib for a tool like this, and also for improving the one that already exists in the otel-go repository. And that's it.
L
Yeah, sure. So right now, contrib and otel-go specifically are multi-module repositories, consisting of multiple Go modules, and right now in contrib, specifically, when you add a new Go module you have to manually add replace statements for the intra-repository dependencies, including direct and transitive dependencies. Right now otel-go implements a more rudimentary version, where they pretty much just insert replace statements even if those dependencies don't fall in the dependency tree — they just have replace statements to the local paths, pretty arbitrarily. Crosslink...
...does this a bit smarter and only adds replace statements for those dependencies if they're needed, and also adds some extra functionality on top of it, like excluding modules that you don't want to touch, or being selective about whether you want to make destructive or non-destructive changes to your go.mod file. So it's just an easier way to manage these multi-module Go repositories. And a side note, while I'm talking about it: Go 1.18 is actually adding a workspaces...
L
Function,
that's
called,
I
think,
go
workspaces
it.
It
solves
it
by
basically
adding
a
new
folder
at
the
root
and
adding
all
your
intra-repository
modules,
but
it
actually
in
the
proposal,
calls
out
that
you
know
it'd
be
nice
if
a
tool
could
automate
this
also
crosslink
is
already
kind
of
positioned
to
where.
If
we
want
to
support
go
workspaces,
it
should
be
a
fairly
simple
design
kind
of
function
improvement.
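To make the two mechanisms concrete: the replace directives crosslink manages live in each module's go.mod, while Go 1.18 workspaces move the same information into a single go.work file at the repository root. Module paths below are made up for illustration:

```
// go.mod — an intra-repository dependency pinned to its local path:
replace github.com/example/contrib/internal/common => ../../internal/common

// go.work (Go 1.18+) — one file at the root replaces the per-module edits:
go 1.18

use (
    ./internal/common
    ./receiver/fooreceiver
)
```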
L
So
I
think
that's
pretty
cool
also-
and
this
is
definitely
applicable
to
way
more
than
just
open,
telemetry,
it
kind
of
targets
any
any
kind
of
bigger
repository
that
wants
to
manage
these
replace
statements
that
need
to
be
inserted
cool.
Thank
you.
It
works.
I'm.
L
I think it works — it works pretty well so far. I have tested it so far on contrib and otel-go, and it cleans them up nicely.
L
Yes — I haven't thought much about this, but if this were to be added, I believe you would be able to explicitly define whether you would like to use workspaces, or the previous, pre-workspaces approach of inserting replace statements.
K
Yeah, I think when workspaces come around, there would be two modes of operation for this: one where it would insert replace statements into go.mod files directly within a repository, and another where it would use the dependency graph that it builds up to create a workspace file. They would be kind of exclusive — a maintainer would choose to use one or the other.
L
Are there any follow-up questions about crosslink that I could answer? Other than that, I think that's all I have on that point.
A
All right, if there are no questions, the next item is from Eric: 7533, new component, SigV4 auth extension. So yeah, Eric — I think you have a sponsor already, and this is a vendor-specific extension, so it is automatically accepted, I think. But is there anything specific you want to discuss?
M
I wanna — yes. Oh yeah, go ahead, Maggie.
D
One thing: usually we ask sponsors to try to find approvers in the project. I mean, I know Arita is very...
C
...involved, and she can help, but she's not going to do code reviews and maintenance of that component. I will, though.
C
As I said — but, like, Anthony is a great example.
M
So I'm working on this new extension, and basically what it will be used for is adding the SigV4 process of authentication information to HTTP-based exporters — so what this would be used for is HTTP-based exporters for AWS services.
M
So an example of this would be using this extension with the Prometheus remote write exporter. The idea there is that using those two together would mimic the functionality of the AWS Prometheus remote write exporter, and so, because of that, the plan is to also deprecate the AWS Prometheus remote write exporter with the addition of the extension. That's just a quick little overview of what's going on in that issue — I have a design document linked in a comment near the bottom.
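Paired with the Prometheus remote write exporter, the proposed extension would presumably be wired up along these lines — the extension name and fields here are a guess based on the description, not the final design:

```yaml
extensions:
  sigv4auth:
    region: us-east-1
    service: aps   # Amazon Managed Service for Prometheus

exporters:
  prometheusremotewrite:
    endpoint: https://aps-workspaces.us-east-1.amazonaws.com/workspaces/<workspace-id>/api/v1/remote_write
    auth:
      authenticator: sigv4auth

service:
  extensions: [sigv4auth]
  pipelines:
    metrics:
      exporters: [prometheusremotewrite]
```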
M
If
anyone
would
like
me
to
go
over
it,
I'd
be
more
than
happy
to.
If
not,
if
anyone
had
any
questions,
I'd
be
happy
to
answer
any
questions,
but
that's
pretty
much
what
I
wanted
to
touch
on
just
really
briefly,
but
yeah.
K
So if anybody has known use cases for that, please take a look at the design and let us know if there are any parts of your use case that this wouldn't satisfy.
A
All right — I guess the last one is 6722, security concerns with the exec receivers.
K
Yes, I think we've talked about this a little bit before, but it looks like there's been some action on this issue again. After a review, it seems to me that we should probably just take the decision to remove all of these components that include process management.
K
I don't feel that the collector's core competence is process management — or that it should be — and there are other ways to deal with this. But I want to make sure that we get some visibility on this, and have others weigh in if they think it's important that we keep it.
B
I mean, it's a spectrum, right, Juraci? Security is not binary — how secure or insecure are they? We probably need to make the call after all, but before we just outright say that we remove the components, I think some effort is warranted to try to actually improve whatever they are doing, to make them more secure if possible, and only give up if we see that there is no way, right?
A
Yeah, so I guess that's my point. I mean, no matter what we do there, they are still going to spawn external processes, and that can be misused and exploited by attackers, based on vulnerabilities in other components. So even if that component is very secure and is doing what it's supposed to do, it can still be abused if other processors are insecure.
A
Then, you know, people using the exec receivers are going to be vulnerable to that, even though those components are very secure. And just the thought that external processes are going to be spawned without proper process management for them — that is enough for me to leave them out of the distributions that are out there.
B
The vulnerability that you're describing sounds extremely hypothetical to me. You're saying there is a receiver that has a vulnerability which results in that receiver receiving some code and writing it to the file system. I mean, yeah, in theory it's possible; in theory it's also possible for the receiver to receive code and start executing it, but we do not disallow receivers from receiving data, right?
K
I think the other way to look at it would be: would we accept these as new receivers now? I'm not sure of the provenance of these — where they came from, or what evaluation was done when they were added — but if we were to receive a proposal to add one of these now... and I think —
B
At least not as-is, right — in the form that they are, they just don't match the requirements that we have outlined in the contributing guides with regard to what is and is not allowed to be done. But maybe we would accept them if they were implemented in a different way; I don't know.
A
I don't know — I think all the alternatives to that are way more secure than letting the collector manage the process. So: having a small binary that is responsible for doing what the prometheus_exec receiver does, as a separate process itself, and that process sends data to the collector.
A
That would be the way that I would prefer my systems to act, instead of having everything happening within the collector, because any problems that I would have in terms of security for that component would be isolated into that binary, and I can sandbox it, I can chroot it, I can do whatever I need to do to keep that one specific piece very secure, and the collector in a more restricted setting.
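The alternative described here — the exporter binary supervised outside the collector, with the collector only scraping it — would look something like this sketch; the port and job name are illustrative:

```yaml
# The exporter binary runs under its own supervisor (systemd, a sidecar,
# etc.), sandboxed or chrooted as needed; the collector never spawns it,
# it only scrapes the metrics endpoint the binary exposes.
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: externally-managed-exporter
          static_configs:
            - targets: ["localhost:9104"]
```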
A
That's true, you know — so we can, and we should, certainly discuss how to do plugins for the collector, and that's really not the problem. A plugin has a very well-defined interface for input and output, whereas those receivers here do not: they just spawn external processes, and the processes can do whatever they want.
B
Yeah — and, as I said, they should not be doing that. I submitted a PR which says that no component is allowed to do that, and I'm saying: let's see if we can bring these components into compliance with that. I'm not arguing that they should be allowed to be executing arbitrary code — that's not what I said.
A
But Prometheus exporters — I think that's the term — are basically binaries, and the prometheus_exec receiver: all that it does is execute those binaries, and there is no interface between them. It's just a binary, and we just hope that the binary will open up an HTTP port with OpenMetrics or Prometheus metrics that another receiver can scrape from.
B
Okay, so I'm not sure what it is that we're discussing. You're talking about what it does and you're saying it's wrong, and I agree it's wrong — I'm saying let's fix it; you're saying we can't fix it. I think that's what I'm hearing, and I disagree with that. I think there is probably a way to fix it, or at least it's worth trying. Maybe there is no way, but I don't want to say it's not possible without even trying. That's the only point I'm trying to make.
C
For the moment — and without that, you don't have it.
A
So what I was trying to ask is: who is going to work on those components, to make them more reliable or more secure?
A
So if we decide to give them a second chance, who's going to work on them?
A
All right — so I think this discussion here actually originated from the component that Anthony mentioned, a component to do process management. So I think that one should be blocked for now, and —
A
Yeah, and before working on that: the people who are the code owners for those three components that we are talking about — JMX, fluentbit, and Prometheus — should work on making them more secure, to a state where we accept them as part of contrib.
C
I would put them behind a feature flag, so as not to break users unless they decide to build a custom component. The reason why we'd choose a feature flag is that if we just don't put them in our distributions, users that are using them right now cannot do anything — they have to build their own distributions and so on. So I would prefer the version where we don't break them too much. I mean, we break them, but not as badly.
A
So, basically: remove it and wait for people to scream? No.
A
I agree with that, so let's give the code owners a chance to get them better for the next version. So how about — we are at 0.44 right now, right, and we have 0.45, at least I have it on my schedule, for next week, next Wednesday. So how about we expect code owners to come up with a proposal by the 0.46 time?
C
Can we just add a log message during the creation of that component? Like, whenever we're in the create-receiver function, log a message saying: hey, this may be insecure. I think this is not breaking — this is just raising awareness about it.
G
I'm listed as a code owner for two of them — I inherited them from another Splunk folk who initially wrote them — so I'll take a look at those two. But there is the fluentbit extension, which doesn't have a code owner; I suspect it was initially written by the same person, so I can take ownership of that as well. So I can work on all three.