From YouTube: 2021-12-01 meeting
C
Yes, yeah, so Zoom is playing tricks on me again. I'm not sure if you heard my question, so: does anybody know whether our lead is coming today? I think there's an event at AWS this week. It's really important.
C
So I'm having problems with Zoom today, so if I just disappear, you know it's probably on my side, but okay. So this is the collector, and we care about the new items since... well, let's start with the most recent ones: different uses of timeout on exporters.
C
So how would you classify this one here? I guess it is an enhancement, right, but there is also a problem with inconsistency of usage across exporters, yeah.
G
Yeah, so it looks like... this is after Pablo had opened an issue, or had opened a PR, to use the HTTP client timeout instead of the overall exporter timeout, and then he pointed me in the direction of this being a repeated pattern across exporters in the collector, where many people are basically setting the overall timeout to zero, disabling it, and then instead using that configuration option for the HTTP client timeout. And I guess my question was: should we have separate config items for it?
H
Oh yes, I see. I don't know if that's what happens in the other exporters, but it is what happens in the Datadog exporter, and that's why we changed it.
F
One thing we did, by the way, in the SignalFx exporter, because we had this problem of sending multiple requests based on some information in the data: we put a simple step in front, before hitting the queue/retry logic, where we split the request into multiple requests, so that each one corresponds to only one outgoing request, and then the rest of the logic kicks in.
F
Yeah, so our condition was based on an attribute in the resource: we would send multiple requests because we have a token passed in the resource, anyway, some odd things. But the way we did it was that we split the request before sending it to the queue/retry.
H
That makes sense. I don't know, though, if that's the reason why other exporters are not using the exporter helper timeout, so maybe there are other valid reasons to not use it.
C
I guess the question, at least for this one here, is... I guess we know and understand that there are different use cases for timeout and different ways of consuming this information, and one of the questions here is whether we want all of those different use cases to use the same attribute name, so have different semantics for the same property.
E
It seems to me that it might make sense to have two separate levels of timeouts: one at the overall ConsumeMetrics or ConsumeTraces level, and another at each individual request that it makes, where each request would be given the context governing the ConsumeMetrics or ConsumeTraces timeout but could have a shorter timeout of its own.
H
Yeah, I think for the end user it's a subtle distinction that doesn't really make sense; they don't even need to know that a ConsumeMetrics or ConsumeTraces function exists.
E
I agree with that, but I think that also we can have an implementation that allows the flexibility to have both patterns, where a developer can choose to use the more complicated pattern if it makes sense to them.
F
We do have that; I mean, the implementation doesn't stop you from doing that.
J
Pablo, quick question, hope you're doing good: is that related to those context deadline error issues that occasionally pop up, or is this...
J
I'd be curious. Okay, cool, yeah, that was a common issue for users. So if what Pablo is proposing helps resolve that, I think it would be important, but I don't have all the context, and I don't represent that company anymore. So I'll talk with Pablo.
J
Sorry, there's a common issue: you get these "context deadline exceeded" errors. If you look in the contrib open issues, a bunch of large enterprise users have mentioned it; it would occur randomly. Anyway, I don't want to rabbit-hole this discussion, I was just curious for my own personal purposes.
H
So on that issue, there is something: even if in the YAML configuration every exporter uses a timeout option, maybe it's not ideal if the exporters that do not use the exporter helper use the timeout settings struct, because that makes it more difficult to add more fields to that struct in the future.
H
There are at least five components that don't use it, five exporters that don't use it, but yeah, we can follow up on the issue. I think it's a minor thing.
J
And not to keep rabbit-holing here, but Pablo, is the reason we don't use it because we send a trace payload and a metrics-derived-from-traces payload, and those are mutually exclusive payloads?
J
Sorry, cool, yeah. For context: the use case is that there's the blob of trace data, which is in an arbitrary format, not, you know, OTel, that's traces and spans; and then there are some histograms or distributions or something that Datadog derives from those trace metrics. And so if one of those endpoints is down, you would still want the other.
E
Yeah, and that's the sort of situation where I expect that having an overall timeout separate from the client timeout would be useful. But yeah, we've consumed the time box; let's move on.
B
He is no longer with us... "I'm here"... oh, we are back.
C
I'm intermittently here and not here. Let's try, let's see if it works this time. Yeah, so I have a task in my queue to document how we can create telemetry for...
E
And he's gone again. So the topic he's talking about here is: how do we start the migration from OpenCensus? Oh, is it back? No, I don't know. Am I back? Yes, we...
C
We used to have views for that, and views are not implemented in the Go SDK right now, and that seems like a blocker to me. I guess my question right now is: do we want to continue following the OpenTelemetry SDK, or do we want to provide an API for our users and use OpenCensus for now, and...
E
Right, would that be in the configuration? I mean, there would be per-component views that need to be specified. I think that's what Juraci is kind of saying too, isn't it: if you have a component that has a histogram, you need to configure the boundaries for it, or use exponential histograms, but I don't know if those are well supported yet either.
F
Okay, so indeed that may be the case if you want to produce fixed buckets, but I think in the end that decision belongs to the admin/devops who runs the collector: whether they want to use exponential histograms, because they have a fancy backend that supports that and they don't care about defining a fixed-bucket histogram.
E
And there are two parts to that in the proposed SDK: the hints and the views. I think they were kind of collapsed in OpenCensus, right, where the instrumentation might have defined the view itself, but they would need to be separate here, so that, yes, the operator could define the view that's ultimately used, or could decide to completely ignore the hint that there should be a fixed-bucket histogram and use an exponential histogram instead. That doesn't exist yet.
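The hint-versus-view split being described can be sketched as a small resolution rule: the component developer supplies a hint, the operator's view, if any, wins. This is a toy illustration in Go, not the OpenTelemetry SDK's actual API; all type and field names here are made up:

```go
package main

import "fmt"

// HistogramHint is what the component developer suggests (the "hint"):
// their best guess at reasonable bucket boundaries.
type HistogramHint struct {
	Name       string
	Boundaries []float64 // nil means "no opinion"
}

// View is what the operator configures; when present, it overrides the hint.
type View struct {
	Name        string
	Exponential bool      // use an exponential histogram, ignoring boundaries
	Boundaries  []float64 // explicit fixed buckets
}

// resolveBoundaries applies the "view overrides hint" rule described above.
// It returns (nil, true) when an exponential histogram should be used.
func resolveBoundaries(hint HistogramHint, views []View) ([]float64, bool) {
	for _, v := range views {
		if v.Name == hint.Name {
			if v.Exponential {
				return nil, true
			}
			return v.Boundaries, false
		}
	}
	return hint.Boundaries, false // no matching view: fall back to the hint
}

func main() {
	hint := HistogramHint{Name: "batch_send_size", Boundaries: []float64{10, 25, 50, 100}}
	views := []View{{Name: "batch_send_size", Exponential: true}}
	b, exp := resolveBoundaries(hint, views)
	fmt.Println(b, exp) // prints: [] true
}
```

Here the operator's view ignores the developer's fixed-bucket hint entirely and selects an exponential histogram, which is the separation the speaker says did not yet exist.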
F
Yeah, it is. Personally, I don't think that's a blocking issue, and I will explain why: I think our use of histograms is very minimal. We have like two or three histograms that we really need and care about, and for those I think we can simply allow users to hard-code that, or something like that. So I don't think we have too many histograms, that's my point. We have way more...
C
The batch processor itself has two types of histograms already. One is latency, and latency is the classic case; I think exponential histograms would work fine for that. But the other one that we have in the same processor is the size of the batch, right. So we have buckets on the size of the batch, and I don't think those would work well with exponential histograms.
C
As a component developer, I would probably know better what the boundaries are that I would like to split the metrics on. But I mean, if we define that it's not up to the component developers to know that, I'm willing to give it a try, if that's the general message around OpenTelemetry as a whole, because that seems like a decision that is taken at the spec level, not by our team here.
F
So, in general, at the spec level we said that a lot of the time the developer may know the ranges, but, for example, think about an HTTP request size histogram. If you have an HTTP request size, it's very specific to the application: an upload server versus, I don't know, an ack server or something like that, one that just gets an empty request and sends an empty response versus one that uploads gigabytes of files. It's completely different.
F
What am I expecting, what sizes am I expecting? So in general, in OpenTelemetry we believe that for histogram cases it's often the application owner who knows better what to expect there. So at most it's just a hint, as you pointed out, a hint about the view for that instrument. I think it's not going to be a "hint view"; actually, on the API you're going to have a hint of the aggregation, but...
C
So if we are saying that, for histograms, we should only record data points and should not care about how to aggregate or how to view those data points, then I guess that makes my life easier with this issue we have right now, because I just have to care about adding things to the metrics, right, just adding data points. That makes our API surface smaller as well, because we don't have to register views and we don't have to take care of those things now.
F
So you mean, if you're using Prometheus, you need to define some boundaries for histograms, and how do you choose the right boundaries?
C
Yeah, I mean, it doesn't have to be right now. If our story is "application owners are responsible for specifying the boundaries," that's fine; we can do that later. We can create a document later to tell them how to do that. But eventually we need to instruct our users, because they're going to be the application owners, on how to monitor, how to consume our telemetry data.
E
I think until the Go SDK has view support, though, for the application owner to configure any of this, the discussion has to be seen as premature, because currently histogram buckets, for instance, are configured as an exporter aggregation option and apply to all histograms sent to the exporter, which I don't think is ever going to be the correct answer.
F
Yeah, but I don't want to come up with another API, by the way; that's a no-brainer, we should use this API. Now, okay, so what we are identifying here, and I would like to bring this to the metrics SIG, is that the problem we have right now is we need a hint. Essentially, we need to have a hint API, and by this hint API I mean what the developer believes is a reasonable... I mean...
F
The SDK lacking views, that's something that we already defined in the specs; it has to come, so it's on them. It's not something that the metrics spec should care about, because it's just the Go SDK that doesn't have it. But on the other side, I'm hearing from you that a hint API would be very useful for developers, to be able to provide some hints about these histograms.
C
Is that correct? To be honest, I don't know. I mean, my contact with the metrics side of things is very limited. So if you tell me that exponential histograms are a solution for all of our problems, then I believe you; but if you say that no, we really need a hint API for us to tell our application owners, or the people running collectors, what they should consider using as boundaries, then that's fine by me.
F
But exponential histograms are not a solution to all the questions, because if people are using Prometheus on the other side, it doesn't support exponential histograms, hence somebody still has to define something about bucket boundaries for that. So maybe that's the answer if you are using paid solutions that support exponential histograms, but it's not the answer for...
C
Yeah, I saw somewhere, also in the spec, that the collector should also be providing aggregation capabilities for metrics. So it's fine by me if we can do those aggregations in the collector itself: we generate telemetry data for the component, we export this data to a specific pipeline within the collector, and the collector then exposes those histograms in OpenMetrics format that Prometheus can consume. So it doesn't have to be part of Prometheus itself. It's just... it feels wrong.
C
I guess, you know, it's not specific to Prometheus; I guess that's my point, it doesn't have to be specific to Prometheus. We just have to come up with a story for our end users saying, this is how you consume the metrics, and I guess we have to show end to end how it works with an open source solution.
C
Right
now
is
what
what
should
component
developers
do
and
what
is
the
role
of
the
application
owner.
So
what
what
each
one
of
those
should
be
doing
and
how
they
should
be
doing,
we
can
define
later.
I
If we... I guess we can keep that, maybe, but the alternative is also not to do that, and instead configure an exporter, an OTLP exporter for example, right, and then maybe you can point it to a local pipeline if you want it to be, say, scraped as Prometheus. So we kind of flip it in the opposite way, right.
I
Yes, like, you configure the exporter, or if that's not possible, then you configure a pipeline. So I don't know which way we want to go, but that flexibility needs to be there; that's what I'm saying, we need the end user to be able to choose how they want these metrics to be sent out from the collector, right. Is it by scraping, or by OTLP, or whatever, right.
E
Yeah, and I've been thinking about that for a while, and I'm wondering if the right answer is not to try to configure the OpenTelemetry SDK exporter in that section, but instead to configure a collector exporter and have an internal pipeline, where the OpenTelemetry SDK is always going to produce OTLP or pdata and then hand it to a collector exporter that the user configures in the telemetry configuration.
C
We have this kind of pattern already; not like, you know, the component is instantiating anything, but we have a similar pattern with the routing processor, I think. The routing processor accepts a list of exporters and it makes a decision on which exporter is receiving the data based on... but.
F
Otherwise we're not going through... that's why we define that dummy pipeline, remember: they have to define a dummy pipeline just for having a pipeline anyway. By the way, there is a better proposal from Dan about the connector that will probably solve a lot of the things that we have, which is a special thing that can be an exporter and a receiver at the same time.
F
I want to do it how the other users will do it: via an OTLP exporter that pushes to a receiver, and we define the pipeline. The reason why I'm saying this is because it forces us to provide feedback to the OpenTelemetry... yeah, dog food, exactly, dogfooding, right, yeah. I don't want to...
F
So yeah, I definitely think we should be able to use the collector's OTLP exporter for that, and...
F
They can choose it to be Prometheus if they don't want to push with OTLP and they have an environment where Prometheus scrapes directly from them and so on. But it's definitely wrong for us to expose Prometheus and scrape ourselves to put it in the pipeline; that's the hack that I would not want.
C
All right, so going back to the original problem: what should I do, then? Should I wait for you to come up with a proposal on how to do that?
F
So let's first identify how many... we identified the problem with the histograms, correct; let's see how many times we have this problem. We can look into how many histograms we define in our code base, probably do a search, correct.
F
That's one thing that I would do, to see how big this problem is. I think Juraci is out, based on the fact that he froze, so yeah, I will wait until he comes back.
M
Yeah, so we have a question about the current status of the new config support approach. Previously we had the S3 remote config PR, and that one would directly use a config provider from the Splunk repo, but that part in the Splunk repo is now using an internal folder. So we need to either wait for Splunk to move that part out, or... we talked with Pablo, and he said, go ahead.
F
That PR, first of all, will not be able to merge; it's like 8,000 lines of code change, and I bet that we can do better than having one PR, especially when we have a new component. So, first of all, I would encourage you to implement the provider interface that we have right now.
M
Yeah, but that would be like the Splunk PR, right, the config provider part, right, it will...
M
It will use a similar approach, similar code to Splunk's, right. Since we are going to implement the S3 retrieval with multiple sources of config, for example one from a local file and one from S3, in this case we want... wait, that's why we directly used the Splunk code before, and...
E
Yeah, so there exists the ability to merge config maps from providers now; there's a merging provider that can be given a local file config and an S3 file config, and those two can be merged together. That has to be done in code right now. So if we implemented an S3 provider, we could have that capability, but it would be less flexible than what we ultimately want. I've added a link to the agenda to a draft PR.
E
That Tigran has, outlining the definition of a way to specify multiple local and remote configurations, whether through CLI flags or through a config file that says "here are the config files you should use." I think that is the desired end state of this, but the first step would be to implement a config map provider that can get a config map from S3.
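The provider shape being discussed can be sketched as a small interface: something that, given a source location, returns a raw config map. This is a toy illustration, not the collector's real provider interface; the interface and type names are made up, and an in-memory store stands in for a real file or S3 backend:

```go
package main

import "fmt"

// Provider retrieves a raw config map from some source (a local file,
// an S3 object, and so on).
type Provider interface {
	Retrieve(uri string) (map[string]interface{}, error)
}

// inMemoryProvider stands in for a real file or S3 provider so the
// sketch is self-contained.
type inMemoryProvider struct {
	store map[string]map[string]interface{}
}

func (p inMemoryProvider) Retrieve(uri string) (map[string]interface{}, error) {
	cfg, ok := p.store[uri]
	if !ok {
		return nil, fmt.Errorf("no config at %q", uri)
	}
	return cfg, nil
}

func main() {
	p := inMemoryProvider{store: map[string]map[string]interface{}{
		"s3://bucket/collector.yaml": {"receivers": map[string]interface{}{"otlp": nil}},
	}}
	cfg, err := p.Retrieve("s3://bucket/collector.yaml")
	fmt.Println(err == nil, len(cfg)) // prints: true 1
}
```

A real S3 implementation would fetch and parse the object in Retrieve; the point is that once a source satisfies the interface, merging and the rest of config handling can stay source-agnostic.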
F
So once you have that... we are still evolving that interface a bit, and you saw Tigran is trying a draft there, and I talked to him and said that I will create another draft trying to do it a different way, just to compare. But for you, Tindal: if you implement that map provider, you will do 50 percent of the work that you need to do to have what you need. So I think you have to implement that interface.
F
Put another file, create another file that only overrides whatever you need to override; the rest is a merge. Because of that, I believe we should allow at least --config to be a list of things that we merge, for example, and that will give you the same functionality that you need, correct?
I
Yeah, the problem with that is that then you have to have some sort of registry for providers: the name, that prefix, has to be registered somewhere, and then you instantiate that provider, which I guess is kind of similar to that full PR.
F
In that case, I don't know. That's why I said I will propose another draft.
F
On the command line, and we will compare and look. But no matter what it is, they have to implement the provider interface; it's just that we'll play with the configuration and instantiation and stuff. But so far, you should implement that interface, Tindal.
E
Yeah, we'll be able to do that for testing. I don't think we would be able to release that, though, because that would require adding CLI flags that we don't want to support long term.
F
Yeah, I'm not saying to release it to the public, but at least to make sure that, functionality-wise, everything works that you want, right. That's what I want to confirm: if I give you some way to configure that, is that enough for your requirement, or do you actually need more than just that?
E
Does it make sense? Yep, yeah, I think that makes sense. Can we have this on the agenda for the meeting we have this afternoon as well?
F
So, oh, Juraci, I don't know if you are back, but the action item that we said... and I think you are not...
F
Okay, I will write the action items for the other one, but I want to tell everyone: what I'm thinking to do is investigate how many histograms we have with these problems, because I believe we don't have that many, or most of them are using the same buckets. So we may be fine, because we are hard-coding buckets right now anyway; we may be well served by hardcoding.
K
I just had one question about the proposal that was being discussed, about the multiple files for the configs. Right now in the main config we have the service definition, where we have all the pipelines and then the telemetry configuration and whatnot, right. What would that look like if there are multiple files? Would each one of them have their own service configuration?
F
The functionality that we provide is: if you have two YAML files, we will merge them. You may be able to just add a new property to one exporter from the previous file, or whatever you want to do. We're not saying you should do it this way or that way; we're just going to blindly merge the files.
I
The answer is yes, you can have, let's say, a service definition here and there, and they can have different pipelines, and then you'll have two pipelines in that case after merging; or you can have the same pipeline defined in two places, in which case they will be merged on a key-by-key basis and they will override, right. So you can override, or you can have separate pipelines in different files.
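The "blindly merge the files, key by key, last one wins" behavior described above can be sketched as a small recursive map merge in Go. This is an illustration of the semantics being discussed, not the collector's actual merge code:

```go
package main

import "fmt"

// merge combines two config maps key by key: nested maps are merged
// recursively, and any other value in b overrides the one in a,
// mirroring the "blindly merge, last file wins" behavior.
func merge(a, b map[string]interface{}) map[string]interface{} {
	out := map[string]interface{}{}
	for k, v := range a {
		out[k] = v
	}
	for k, v := range b {
		if bm, ok := v.(map[string]interface{}); ok {
			if am, ok := out[k].(map[string]interface{}); ok {
				out[k] = merge(am, bm)
				continue
			}
		}
		out[k] = v // non-map values (including lists) override, not append
	}
	return out
}

func main() {
	base := map[string]interface{}{
		"exporters": map[string]interface{}{"otlp": map[string]interface{}{"endpoint": "a:4317"}},
	}
	overlay := map[string]interface{}{
		"exporters": map[string]interface{}{"otlp": map[string]interface{}{"timeout": "5s"}},
	}
	fmt.Println(merge(base, overlay))
}
```

Note that lists are plain values here, so a pipeline's receiver list in an overlay file replaces the base list entirely rather than appending to it — which is exactly the pain point raised later in the discussion.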
E
Okay, are pipelines defined as an array or a map? Because I think the map structure...
F
I don't think we should allow people to add more receivers to an existing pipeline. We should rather tell them to define their own pipeline and not mess with existing pipelines.
L
I would like to add this configuration thing, actually. For example, in our Splunk distribution we provide a default configuration and we want the users to add something new, and usually it's just one receiver or something like that, and we would like them to be able to define a receiver and actually add that receiver to the default pipeline.
L
Because defining a pipeline means that they need to recreate everything: check the default exporters, create them, set up, like, duplicate another pipeline, and the processors and so on. So they want...
E
Yeah, so when the maps get merged, they won't need to redefine the entire pipeline; they can simply redefine the list of receivers. But they would respecify the entire list of receivers, exactly. So if they just wanted to add a receiver, they'd say: receivers is the list from the default plus the one I want to add, but they don't need to do anything about the processors or the exporters.
L
That's right, and in order to do that they need to check what the default list of receivers is, and redefine that list in their configuration, and that's fine; but if the default configuration changes in a new version upgrade, they will lose some of their changes in that pipeline. So it's tricky. I'm thinking, if we can make the way lists are merged configurable... so in Ansible, for example.
L
There is such a thing called combine list merge, and there you can specify whether you override, whether you append, whether you prepend, with keeping some other things, and so on; there are different choices there. I am thinking about having something like that. In that case it will be much easier for the user: for example, I add a new receiver and I specify that I want to append that receiver.
L
I don't know, some of them are just sets.
E
Yeah, so, like, the list of receivers in a pipeline is just a set: here are the receivers that exist, there's no order to them, so appending could be a sensible merge mechanism in most cases. But some lists are ordered lists that have to happen in a certain order; like, I believe pipelines are ordered.
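The Ansible-style merge strategies mentioned above (override, append, and so on) can be sketched as a parameterized list merge. This is an illustration of the idea, not any real configuration library's API; the strategy names are made up:

```go
package main

import "fmt"

// mergeList applies one of several merge strategies to two lists:
// "replace" keeps only the overlay, "append" concatenates, and
// "union" appends while dropping duplicates, which suits set-like
// lists such as a pipeline's receivers.
func mergeList(base, overlay []string, strategy string) []string {
	switch strategy {
	case "replace":
		return overlay
	case "append":
		return append(append([]string{}, base...), overlay...)
	case "union":
		out := append([]string{}, base...)
		seen := map[string]bool{}
		for _, v := range base {
			seen[v] = true
		}
		for _, v := range overlay {
			if !seen[v] {
				out = append(out, v)
				seen[v] = true
			}
		}
		return out
	}
	return base // unknown strategy: keep the base untouched
}

func main() {
	base := []string{"otlp", "jaeger"}
	overlay := []string{"otlp", "zipkin"}
	fmt.Println(mergeList(base, overlay, "union")) // prints: [otlp jaeger zipkin]
}
```

For ordered lists such as the processors in a pipeline, none of these generic strategies is obviously right, which is the caveat raised above.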
E
This is a problem that would have to be solved in the mapstructure library that we use for managing these configuration maps, and if it can be solved at that level, then it's a capability we could consider using. But it's well outside of our area of responsibility and our core competency here; we shouldn't be trying to solve that as part of the collector.
F
Yeah, that may be a good workaround for the moment. Also, look at the proposal I pointed to here, Dmitrii, the proposal from Daniel about the connectors and stuff; that may be a connector thing. If you look at it, there are very interesting pictures and examples of how to use them, and that may be another option for us.