From YouTube: 2020-08-12 meeting
Description
No description was provided for this meeting.
C
Sure, I can kick things off then, and Dan will also share a little bit on the agent side itself. So let me just share my screen.
C
So I just want to give a little bit of background quickly. I know most folks are aware of Blue Medora / observIQ, but just in case you're not: we've been focused primarily on building integrations, but also on the agent side of things, although we've been doing it in partnership with a lot of the platforms that exist out there. So the focus has been on building source integrations for specific destinations, for logs and metrics.
C
So that's kind of the background. What we've built in the past has covered both metrics and logs, so proprietary metrics agents and proprietary log agents, though based on Fluentd in the past, and we've run into the same issues that I think a lot of folks have with Fluentd or Logstash, just in terms of performance. I know none of this is really new. And then we looked at some of the other options, like Fluent Bit.
C
It's
been
some
challenges,
we've
run
into
there
as
well,
and
so
what
we
were
really
looking
for
was
kind
of
a
a
full
replacement
for
internal
use.
Initially
was
to
replace
you
know,
fluenty
log
stash
fluid
bits
inside
of
our
agents,
and
that
meant
a
major
focus
on
something.
That's
very
high
performance.
C
You
know,
for
us,
one
of
our
internal
requirements
is
to
use
you
know,
go
or
more
generally,
just
a
modern
language
that
would
encourage
contributions,
also
that
it
was
really
simple
to
install
and
configure
and
had
high
levels
of
compatibility
across
operating
systems.
So
you
know
linux
windows,
even
mac,
os
and
others,
and
also
for
us
embed
ability,
it's
kind
of
key.
So
the
ability
to
use
this
as
a
library
inside
of
another
agent
and
you
know,
kind
of
day
one
to
have
a
very
large
plug-in
library
to
start
out.
C
So those are the goals. But let me flip through a couple of things here to show you. Are you seeing a spreadsheet of benchmarks?
C
Summaries
cool
yes,
so
this
was
actually
wesley,
provided
this
benchmarking
tool
that
aws
amazon
had
created
and
then
open
sourced,
and
so
we
use
that
for
all
of
our
benchmarks,
just
to
provide
kind
of
apples
to
apples
view
of
what
we're
doing.
C
If
you
take
a
look-
and
this
is
shared
inside
of
the
the
doc,
the
sig
doc,
so
you
can
go
in
and
look
at
this
if
you
want
to,
but
it
has
what
we're
using
for
methodology
the
instance
types,
how
we're
setting
this
all
up
and
then
the
actual
resulting
data
that
we
had.
So
this
is
looking
at
fluency
fluent
bit
and
the
observe
iq
agent
and
just
high
level,
I
think
we're.
We
were
really
happy
with
the
results
that
we
were
seeing.
C
You
know
our
goal
was
really
to
to
match
or
so
fluid
bit,
and
I
think
in
a
number
of
ways:
it's
it's
outperforming.
Certainly
this
benchmark
anyway.
Fluency
is
not
a
surprise.
You
see
we're
looking
at
single
file
as
it
ramps
up
log
lines
per
second
in
the
cpu
usage
and
then
similarly
memory
usage
at
the
bottom
and
then
at
some
point.
We
would
run
this
until
it
was
no
longer
capable
of
doing
it
with
this
configuration
on
this
system.
So
this
is
where
flip
d
ends.
C
This
is
where
fluid
bit
ends
and
then
the
observe
iq
agent,
obviously
we'd
never
recommend
a
customer
using
it
at
this
level
of
cpu
usage.
But
you
can
see
it's
consistently
better
cpu
throughout
the
spectrum,
so
it's
really
good
for
us
and
for
the
customers
that
were
we're
targeting
this
at
we're
very
large
customers
and
have
a
lot
of
throughput
they're
trying
to
use
this
for
some
central
log,
processing
and
throughput
and
similarly
for
memory.
You
know,
as
you'd
expect
go
versus
c.
C
It's
not
we're
not
smaller
than
fluid
bit
at
any
point
or
at
most
of
the
points
anyway,
but
but
it
is
very
comparable.
So
if
you
look
up
here,
it's
something
like
you
know:
it's
like
20
to
30
yeah.
C
Fluent Bit would be at about 20 to 35, roughly, throughout. So that's kind of number one when we're looking at the benefits, or the advantages, of the agent. Then I wanted to give a quick overview of what we have out of the box, since a critical part of this, and an area where we're expanding, is the following.
C
We
have
concepts
called
operators,
there
could
be
file
inputs
in
fluentd
or
or
sorry
input,
plugins
or
output,
plugins
they're,
just
operators,
it's
what
we're
using
to
build
up
a
pipeline
of
operations
on
any
log,
and
this
is
what
we
have
available
today.
So
standard
ones
like
a
file,
input
or
a
windows
event
or
tcp,
udp
et
cetera,
and
then
the
parsers
that
we're
using
so
these
are
and
then
I'll
call
it
the
outputs
which
right
now,
we
just
support
google
cloud
logging
in
elasticsearch.
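To make the operator idea above concrete, here is a minimal Go sketch of a pipeline of operators chained over a log entry. The Entry type, the Operator interface, and the severityParser and stdoutOutput names are assumptions made for this illustration, not the agent's actual API.

```go
package main

import (
	"fmt"
	"strings"
)

// Entry is a hypothetical log entry passed between operators.
type Entry struct {
	Record map[string]interface{}
}

// Operator is one stage in the pipeline: inputs, parsers, and outputs
// would all satisfy this interface.
type Operator interface {
	Process(e Entry) (Entry, error)
}

// severityParser is a toy "parser" operator that promotes a severity field.
type severityParser struct{}

func (severityParser) Process(e Entry) (Entry, error) {
	if msg, ok := e.Record["message"].(string); ok && strings.HasPrefix(msg, "ERROR") {
		e.Record["severity"] = "error"
	}
	return e, nil
}

// stdoutOutput is a toy "output" operator standing in for a real destination
// such as Google Cloud Logging or Elasticsearch.
type stdoutOutput struct{}

func (stdoutOutput) Process(e Entry) (Entry, error) {
	fmt.Println(e.Record)
	return e, nil
}

func main() {
	pipeline := []Operator{severityParser{}, stdoutOutput{}}
	entry := Entry{Record: map[string]interface{}{"message": "ERROR something broke"}}
	for _, op := range pipeline {
		var err error
		if entry, err = op.Process(entry); err != nil {
			fmt.Println("pipeline error:", err)
			return
		}
	}
}
```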
C
These are a handful that are released today, and I'll talk about what's coming and when. But let's say, for example, Cassandra: this is a configuration that's available that allows you to provide things like the log paths, but then automatically handles the parsing, pulling out timestamps, understanding the data, and manipulating the data before sending it along.
C
So
for
us,
that's
a
big
focus
and
something
that
we
want
to
make
sure
that
we
have
wide-ranging
support
early
on,
so
there
are
11
of
them
that
we
launched
with
by
the
way
this
went
open
source
a
few
weeks
ago
now,
so
it
was
closed
source
prior
to
that
open
sourced
under
apache
2.0
about
three
weeks
ago,
and
there
were
11
of
them
or
so.
But
basically
all
these
pull
requests
are
new
plugins
or
almost
every
one
of
those
are
new
plugins
that
are
being
added
in
what
we're
doing
is
migrating
over.
C
We
have
a
little
over
50
of
these
kind
of
application,
specific
plugins
from
our
previous
log
agent
that
was
based
on
fluentd
and
so
we're
porting
them
over,
and
I've
actually
developed
some
tooling.
That
makes
it
simple
to
to
port
over
fluid
d
in
general
configurations
in
general.
So
I
think
that's
going
to
be
pretty
beneficial
going
forward,
as
we
start
to
add
to
these,
but
just
just
see
aware
so,
there's
11
now
and
then
27
more
they'll
be
over
50
and
by
the
end
of
the
month,
which
was
our
target.
C
I
I
did
mention
that
you
know
it's
focused
on
being
embeddable,
that's
something
that
is
the
dan
will
show
off
a
bit
because
that's
part
of
the
integration
with
open
telemetry
that
we're
going
to
show
you-
and
you
know
for
us
today,
in
terms
of
where
we're
at
you
know,
our
focus
is
on
expanding
this
out,
specifically
the
plugins
and
some
of
those
operators
that
you
saw
just
to
kind
of
complete
that
a
bit.
C
Dan will show that in just a second. Our focus is that we want to have a path here, because we see the OpenTelemetry project as core as well, and so we're hoping for a path to allow our log agent to be used within OpenTelemetry.

E
All right, sorry, just sharing my screen.
E
Okay, I'm back. Can you see my screen? It should just be an IDE. Yes? Okay. Well, I just want to show three really quick scenarios here. In each one I'll start with the OpenTelemetry Collector config file and then point out a couple of things about what we're doing with it and how it's working, and just build up the complexity here.
E
So, first of all, we just implemented an observIQ receiver. At a very high level, this is doing just what Mike said: we're importing the relevant packages here and we're consuming the logs, pretty standard, just consuming them off of a channel.
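As a rough sketch of that consume-off-a-channel pattern, the loop below drains a channel of log entries and hands each one to a forwarding function; the entry type and function names are hypothetical, not the receiver's real code.

```go
package main

import (
	"context"
	"fmt"
)

// entry is a hypothetical log entry type emitted by the embedded log agent.
type entry struct {
	Body map[string]interface{}
}

// consumeEntries drains the agent's output channel and hands each entry to
// the next consumer, roughly the shape a receiver's consume loop might take.
func consumeEntries(ctx context.Context, in <-chan entry, forward func(context.Context, entry) error) error {
	for {
		select {
		case <-ctx.Done():
			return ctx.Err()
		case e, ok := <-in:
			if !ok {
				return nil // channel closed, agent shut down
			}
			if err := forward(ctx, e); err != nil {
				return err
			}
		}
	}
}

func main() {
	ch := make(chan entry, 1)
	ch <- entry{Body: map[string]interface{}{"message": "hello"}}
	close(ch)

	_ = consumeEntries(context.Background(), ch, func(_ context.Context, e entry) error {
		fmt.Println(e.Body) // in a real receiver this would be converted to OTLP logs
		return nil
	})
}
```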
E
So in this example the main part of the config in the OpenTelemetry receiver is the pipeline. That pretty much mirrors the OpenTelemetry pipeline in general, but, as Mike said, we make it up out of operators. In this one I've got three operators: the first one is just generating input, so just a very simple record.
E
The second one is just rate limiting that input, and then the third one is basically just passing that along to the rest of the OpenTelemetry Collector.
E
So if I run this, and I'm exporting it to just a local file here, I can tail the file to show that, and we get some basic logs.
C
I'll just call out one thing here too, because it's notable. I was talking about our agent running independently, and it can do that, it can run standalone, but in this case what Dan's showing has all the configuration there. So it's not requiring any additional external configuration; it exists right there within this file.
E
So the second scenario is just introducing the concept of plugins; Mike already touched on this. The way we do this is to just point to a directory where those plugins are located. I've got one here, and I've got two plugins in it. The first one here is just "say hello", basically doing almost exactly the same thing as that first example.
E
The only real difference here is that this is extracted into a plugin, a shareable format, and it's parameterized, so I can pass a value in here, "name", and I'm just showing that we support arbitrary structured data here. The configuration for this just says I want an operator called "say hello", because that's the name of the plugin, and then I pass that value in. So I run this, tail into that same file, and we see a couple of outputs here: "hello, Dan".
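As a rough illustration of a parameterized plugin, the sketch below renders a templated snippet of pipeline configuration with Go's text/template. The YAML layout, the operator types, and the name parameter are assumptions for illustration only, not the agent's actual plugin format.

```go
package main

import (
	"os"
	"text/template"
)

// A hypothetical plugin definition: a templated snippet of pipeline config
// with a single "name" parameter, standing in for a "say hello" style plugin.
const sayHelloPlugin = `
pipeline:
  - type: generate_input
    record:
      message: "hello, {{ .name }}"
  - type: stdout
`

func main() {
	tmpl := template.Must(template.New("say_hello").Parse(sayHelloPlugin))
	// Parameters supplied by the user's config when they reference the plugin.
	params := map[string]string{"name": "dan"}
	if err := tmpl.Execute(os.Stdout, params); err != nil {
		panic(err)
	}
}
```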
E
So you can see that mapping to the OpenTelemetry format. Okay, and then the third one. This is a little bit more of a production example, using a Tomcat plugin here. Mike showed off one specifically, Cassandra; this one is just Tomcat, consuming one type of log, the Tomcat access log.
E
Nothing fancy, just parsing this out, and we're actually sending it along to Google Cloud Logging. I wanted to show this going to a backend, so I implemented just a very rough receiver for Google Cloud Logging.
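For a sense of the kind of parsing a Tomcat access-log plugin automates, here is a small self-contained Go example that pulls fields out of a common-log-format line with a regular expression; the pattern and field names are illustrative, not the plugin's actual schema.

```go
package main

import (
	"fmt"
	"regexp"
)

// A rough common-log-format pattern, the shape of a Tomcat access log line.
// Field names here are illustrative only.
var accessLog = regexp.MustCompile(
	`^(?P<host>\S+) \S+ (?P<user>\S+) \[(?P<timestamp>[^\]]+)\] "(?P<method>\S+) (?P<path>\S+) (?P<proto>[^"]+)" (?P<status>\d{3}) (?P<bytes>\S+)$`)

// parseAccessLogLine returns the named capture groups for one log line.
func parseAccessLogLine(line string) (map[string]string, bool) {
	m := accessLog.FindStringSubmatch(line)
	if m == nil {
		return nil, false
	}
	fields := make(map[string]string)
	for i, name := range accessLog.SubexpNames() {
		if i > 0 && name != "" {
			fields[name] = m[i]
		}
	}
	return fields, true
}

func main() {
	line := `10.0.0.1 - frank [10/Oct/2020:13:55:36 -0700] "GET /index.html HTTP/1.1" 200 2326`
	if fields, ok := parseAccessLogLine(line); ok {
		fmt.Println(fields)
	}
}
```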
E
Yeah, so basically the individual fields of note are being parsed out. We've got support for the resources, labels, timestamps, severity, log name. So that's the gist of it. Any questions?
E
Yeah, that's right. I wouldn't say it's done, but it's working.
B
Right, yeah. Did you by any chance do any benchmarking for this collector with the receiver that you have? Not the benchmarking that you did for your standalone agent, but this combination inside the OpenTelemetry Collector.
B
Would
be
interesting
to
see
how
that
produced
with
the
I
guess:
there's
an
overhead
with
the
translations
and
all
that
stuff.
Okay,
how
did
you
specify
the
location
of
the
file
that
it
is
trailing?
Is
it
specific
for
each
of
the
input
types
or
it's
a
generic?
How
do
you.
E
Sorry, yeah. So if we look at the plugin, I'm specifying the path just like this, and then that's parameterized in this plugin. So that's passed in here and picked up as part of our file input plugin.
B
I see, so this plugin is defined in terms of the built-in file input, basically.
E
A good question, and it's something I haven't really tried to tackle yet. I think we could consider adding some annotation at the configuration level if it's static; we could also do it within the receiver, if there are good points to pull in the relevant data, and annotate it there.
B
Kevin, I can try to answer that. For the resource: presumably the collector knows where it is running, right? So technically it could do that part of the enrichment, the hostname, whatever it knows about where it is running. For the request context, for the trace context, I don't think the collector or any externally running process knows that, right? There's no access to that sort of information. So the idea is that the emitting application should be recording this information in the logs when they are emitted.
B
That's
that's
how
it's
at
least
specifically
specified
in
the
open
climate
tree
logic
proposal.
If
you
have
an
application
you,
if
it's
your
application,
if
you're
able
to
to
modify
it,
then-
and
let's
say
it-
uses
some
well-known
login
library
for
which
open
telemetry
sdk
provides
some
sort
of
tooling
to
to
modify
the
the
output.
B
Then
you
could
use
that
right.
The
idea
is
that
the
open
telemetry
sdk
for
java,
for
example,
has
support
for
for
for
logging,
libraries,
let's
say
log
4g,
which
basically
looks
into
the
the
current
context.
The
thread
local
storage
extracts
the
the
span
context.
If
it
exists
there
and
includes
the
trace
id
and
spam
id
into
the
output
there
is
david.
Did
the
proof
of
concept
of
this
approach,
but
there
is
no.
Obviously
there
is
no
production
implementation,
but
that's
that's
the
idea.
Basically,.
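The same idea, sketched in Go rather than Java: pull the active span context out of the request context and stamp the trace and span IDs onto the log line so a backend can correlate logs with traces. This uses the go.opentelemetry.io/otel/trace package; the logWithTraceContext helper is just an illustration, not part of any SDK.

```go
package main

import (
	"context"
	"log"

	"go.opentelemetry.io/otel/trace"
)

// logWithTraceContext pulls the span context from ctx and, if one is active,
// includes its trace and span IDs in the log line for later correlation.
func logWithTraceContext(ctx context.Context, msg string) {
	sc := trace.SpanContextFromContext(ctx)
	if sc.IsValid() {
		log.Printf("trace_id=%s span_id=%s msg=%q", sc.TraceID(), sc.SpanID(), msg)
		return
	}
	log.Printf("msg=%q", msg)
}

func main() {
	// With no tracer configured the span context is invalid, so this logs
	// without IDs; inside an instrumented request handler it would include them.
	logWithTraceContext(context.Background(), "request processed")
}
```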
F
Yeah, that's what I was expecting would be the most typical approach. I was just curious whether anybody had come up with any bright ideas for how to do this in the collector, for the cases where you couldn't do it in the app. It doesn't sound like anybody's figured that out, and I don't know whether I could either, but it would be cool if somebody could. A good example would be that Tomcat access log, where you're not going to go change your application.
B
So, this is very nice; thank you for sharing it. I'd really love to know how this performs end to end when it's inside the collector: what the performance looks like in terms of CPU and memory consumption, similar to what you did with the standalone version. And we'll need to think about, first of all, whether you guys are open to OpenTelemetry basically embedding portions of your agent as receivers, which is exactly what you did, and I think that's how I would do it.

If you're open to that, then we need to look into how exactly we can use it and what use cases we want to tackle. The original thinking about supporting logs in OpenTelemetry was that we would use an external agent, and I think that's still valid; we probably don't want to bring in too much code and then be forced to maintain all of that in the OpenTelemetry Collector. But maybe for the most frequent use cases, like file tailing, we would probably, I think, be open to having built-in support in the collector, rather than always having people go and run your external logging agent and chain it with a collector, which is more complicated, right? If we can have a simpler solution for the most frequent cases, then I think it's worth having the support for those cases readily available in the collector, without the need to run an external process.
C
That's
primarily
really
why
we
want
to
bring
this
up
because
it
seemed
like
it
could
be
a
good
opportunity
for
that.
We're
certainly
open
to
embedding
make
sense,
we'll
have
the
receiver
that
dan's
working
on
I'll
probably
be
ready
for
a
pr
in
two
weeks
or
so
so,
and
at
that
same
time
I
think
we
can
provide
some
some
benchmarking
data
with
it
as
well
and
we'd
love
any
feedback
that
the
team
has
outside
of
this
meeting
too
yeah.
H
Kind of minor, but in terms of the naming, so is it called Carbon, apparently?
B
Yeah, yeah. And that's why I'm saying the initial thinking was that an external agent, an approach that uses an external agent, is, let's say, faster from an implementation perspective; it shortens our time to market. But if there is a way to actually have embedded, built-in support for logs.
D
Yeah, I agree. I wanted to ask a question. You mentioned the issue of applications needing to decorate the span context, which I completely agree with, and the collector knowing the resource identifier, but there is an additional one. If you look at the Splunk agent today, it decorates the information: it knows that if it found the file in so-and-so directory, that correlates with a certain application source, so it does correlate the log file with the source application.
B
That's still resource. Yes, resource is the location of the emitter, basically who emits. The trace context is the current execution context: what is the request that is being processed right now, regardless of who is processing it. That's the distinction between resource context and trace context in OpenTelemetry.
F
There's actually an example of that in the Java project. What it is, is you set up a special gRPC context listener, which is where the span context lives, and it just pushes that data over to the Log4j context, so then you have what Log4j calls the MDC.
F
So
you
know,
and
then
you
just
print
out
all
your
mdc
contents
and
you
have
it.
It
works.
It
works.
Well,
it's
very
useful.
B
Let's see, right, let's see the PR when it comes and we'll have a look, but I think this looks interesting and promising. I'm guessing this is probably embedding a lot; we'll need to see how much it increases the size of the executable, since this is presumably an entire agent being embedded inside the collector.
C
Do
you
have
any
guidelines
just
so
we're
aware,
as
we
go
through
this
anything
in
mind
or
size
in
mind,.
H
Yes, well, I was going to say that Prometheus is actually somewhat of an existing example of where this has been done, right? We import the dependency; yeah, we import the entire Prometheus agent, right?
E
It looks like if you build master on the contrib repo, it is 108 megabytes; with the observIQ agent included, it is about 124 megabytes. And presumably there's probably some optimization we could do if we put any effort into that.
F
You could have a Prometheus receiver and have Prometheus send all of that to the collector, with Prometheus collecting what it does. Or else you could have the collector collect it: have a receiver that scrapes all the Kubernetes stuff itself, deploy it as a DaemonSet, then shut off the Prometheus collection of that stuff and have Prometheus scrape the collector.
D
Yeah, I think we'll try to evaluate both options. One piece of feedback we got from our Kubernetes team is that kube-state-metrics has a tendency to blow up memory, so we need to look at whether it should run as a separate process that you then poll from a receiver, or whether we just need to re-implement kube-state-metrics as a receiver.
B
Right, thanks Mike, this was very useful. Let's move to the next topic.
I
Yeah, I want to say something. Okay, hey, this is for you, Mike. First of all, I want to say nice work. I looked through all of the stuff you gave on the performance tests and everything last week, and it was really thorough; I really liked it, really cool. One thing I was interested in, if you can discuss it, is what is the max, the highest throughput, that you have seen in production from clients, per node?
I
So
I
was
curious,
especially
because,
like
you
showed
in
the
graph
like
it
going
up
all
the
way
to,
like
you,
know,
70
000
events
per
second.
Have
you
actually
seen
that
in
the
real.
C
Let me talk to a couple of folks, because with one of our largest customers I know we've been going through some testing, and they would probably know; I could come back to you and let you know where we're seeing it. But I agree with you in general. I think that for most customers it's lower on a per-agent basis, and it's those edge cases with certain customers that have pretty extreme use cases where it starts to creep up.
B
Yeah,
my
if
my
memory
serves
me
well
when
I
was
at
vmware
mark.
Maybe
you
remember
too
this
where
this
were
under
the
debug
settings,
was
producing
something
like
40
000
events
logs
per
second
on
a
single
machine.
That's
the
number
I
I
remember.
Maybe
I
might
be
wrong
with
that.
This
was
years
ago.
B
So,
let's
move
forward,
we
wanted
to
do
some
triaging
before
we
do
that.
Let's
see
if
there
are
smaller
topics
there,
there
is
one
request
to
do
a
review
on
pr
number
498.
Is
there
anything
that
we
need
to
discuss
here
in
this
meeting.
J
No,
I
think
the
requesting
run
was
to
get
a
review
for
it.
We've
been
we've
been
kind
of
waiting
on
it
to
get
some
feedback
so
again,
just
put
a
drawing
attention
to
it.
B
Sure
thank
you.
We
were
actually
intending
to
do,
triaging
and
as
part
of
triaging.
We
want
to
assign
all
all
open
prs
to
to
reviewers
to
make
sure
they
they
get
reviewed
on
time.
K
Yeah,
this
is
the
ca
hi,
I'm
the
aws
intern
working
on
the
prometheus,
remember
exporter
and,
like
the
assumptions
we're
making
for
that
exporter,
is
that
we
will
only
like
we'll
drop
every
counter
histogram
and
summary.
K
That
is
not
the
cumulative
format
and
then
how
I
was
testing
it
was
that
I
was
trying
to
use
the
prometheus
receiver
to
get
prometheus
counter
histogram
to
the
collector
and
then
see
if
it
can
get
exported,
but
with
the
prometheus
receiver
it
seems
to
be
converting
like
prometheus
counters,
to
oc
metrics
and
then
to
otp,
metrics
and
somewhere
in
the
process.
I
think
it
gets
converted
to
like
a
delta
temporality,
and
so,
like
my
exporter,
never
my
exporter
would
drop
that.
B
I
I
remember
a
very
recent
discussion
around
this
topic.
Some
of
the
metrics
guys
were
discussing
how
the
open
telemetry's
magic
types
should
be
mapped
to
prometheus,
and
I
remember
something
that
is
very
similar
to
what
you're
describing
they
were
thinking
about
some
sort
of
aggregation
happening
somewhere.
Potentially
in
the
collector.
I
don't
know
all
the
details
it
may
be
worth
reaching
out
to
the
metrics
people
open
in
the
open,
telemetry's
specification.
J
Yeah, I mean, we are working with them closely. We had raised the issue in terms of the gaps in the metrics aggregation functionality in the collector, so we have been working with Bogdan and with Josh on getting that functionality built out, either supported in the collector or before it hits the collectors, because again, if you have multiple instances of the collector, how do you actually guarantee the right cumulative results?

So that's an issue that is still open, but this is even before it hits the collectors, right? We were trying to use the Prometheus receiver that exists today to generate cumulative data to test the end-to-end data flow, and what we're seeing is that the Prometheus receiver is not generating the cumulative data that we need for specific types.
J
I mean, it definitely is a translation function, but should we just follow up with Bogdan and Josh then? Is that the right approach?
H
Yeah. I don't want to hold on this for too long; I mainly want to see if anybody feels strongly, and if there are other people that feel strongly we can have a separate discussion if it has to go on for a while. Basically, my proposal is to merge the repositories, to combine contrib into core.

The proposal is to keep the same structure of contrib, where you can still build contrib without various components, so it would still be the same build structure, just literally putting contrib inside core in a directory called contrib. That avoids the issue where we constantly have to bump the version, bump the import of core into contrib. Sometimes changes get made in core and they break contrib, and we don't notice until somebody goes and tries to use it and it turns out it's broken, so there's all this busy work involved in managing it. You have to create milestones in both, you have to create releases in both, so there's all this overhead. The idea is to keep the same build structure, which is modular, so there's still a clear separation between core and contrib at a code level and a build level, but it just removes the two repositories.
B
Jay
my
opinion
is
that
it's
a
good
idea.
We
need
to
look
at
what
are
the
implications
of
that.
I
very
briefly
discussed
this
with
bogdan.
He
had
some
reservations.
It
may
be
worse
for
you
to
discuss
it
with
him.
I
didn't
go
into
details.
He
didn't
have
much
time,
but
he
had
some
some
thoughts
about
what
can
be
worse
in
that
particular
setting.
So
maybe
talk
to
him
and
see
what
what
did
he
have
in
his
mind.
H
Okay,
cool
yeah:
if
anybody
else
has
any
reservations
or
questions
or
concerns
about
it
feel
free
to
put
me
on
twitter,
and
I
can
look
you
into
those
discussions.
B
Yes, we're assigning to the milestones first, and then I don't know if we have time right now to also assign the specific issues to people, but let's at least assign to the milestones.
F
Yeah, I agree. It should keep a timestamp of the last time it actually got some metrics in, and just blank them out when it gets scraped by Prometheus.
F
But what would happen if you didn't have the collector in the pipeline and something died? Prometheus scrapes it and doesn't get a response back, so it doesn't show anything; it shows that there's no data. Whereas if the collector is sitting there and it's got the last values, Prometheus just scrapes it and gets the last values, and you just get this flat line, yeah.
B
Yeah, I think this is a valid concern. I'm not sure how important it is; I'm going to put it into the backlog.
B
Okay, so generally we were pushing anything related to tail sampling to after GA. We don't have anybody who is currently actively working on improving the tail sampling processor, there are no issues with that, and I'm not aware of anything, exactly.