From YouTube: 2021-05-12 meeting
A
B
A
A
C
Question: Hey, hi there. So I believe I have the answer in the thread on Slack that I've linked to in the agenda. I just wanted confirmation from, sorry for mispronouncing your name, Juraci, but basically I believe that the auth extension would be the way to go for me. I just need to investigate a little bit further into how to structure it, but I guess it's doable with that.
A
Framework.
A
Yeah, I was just replying to the thread here on Slack, and the only thing that might be missing from your comment is that your extension can both do the retrieval from the API that you need and at the same time act as an authenticator, right? And your authenticator would then have the opportunity of adding or changing the RoundTripper. So you can intercept the call and then instrument the outgoing call with your ID, with your API ID.
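A minimal Go sketch of the RoundTripper idea described above, with a hypothetical header name and helper; it is not the actual extension under discussion:

```go
package apikeyauth

import "net/http"

// roundTripperFunc adapts a plain function to http.RoundTripper.
type roundTripperFunc func(*http.Request) (*http.Response, error)

func (f roundTripperFunc) RoundTrip(r *http.Request) (*http.Response, error) { return f(r) }

// wrapWithAPIID wraps a base transport so every outgoing call is instrumented
// with the retrieved API ID, i.e. the "adding or changing the RoundTripper" idea.
func wrapWithAPIID(base http.RoundTripper, apiID string) http.RoundTripper {
	return roundTripperFunc(func(req *http.Request) (*http.Response, error) {
		clone := req.Clone(req.Context()) // RoundTrippers should not mutate the original request
		clone.Header.Set("X-Api-Id", apiID) // hypothetical header name
		return base.RoundTrip(clone)
	})
}
```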
A
C
C
Just, you know, sort of abusing, or using, the framework, and then getting that from a method that will be exported on that particular extension. The reason I thought of that is that the ID will be used, for instance, as part of the URL that will be requested, so I don't think there's a way in the framework that exists, you know, that would allow to somehow put that in a proper place.
B
Can we step back for a second, Juraci? We do not yet have the exporter authenticators defined, right, that we...
A
Yeah, I think there's one thing here that we did not consider before, that I'm reminded of based on this new information from Patrick. That is, we might add a new... so we are also working, so José Carlos is working, on the pass-through of the authentication data from, you know, the receiver to the exporter, and we are planning on adding that as a structure within the pdata, within the resource.
A
And what I'm thinking here is that we could perhaps allow this extension, so Patrick's extension, to add arbitrary data to the pdata resource, right? So that he could inject it, just like we are thinking about adding the raw authentication data, or raw token, that we get from the receivers.
A
B
At what point in the lifetime of the pipeline, or for the data passing through the pipeline, do you think that can happen? The extensions don't participate in the pipeline propagation at all.
A
A
B
A
At the same time, it can just implement the two interfaces, and then it is the same instance of the extension having the opportunity to inject things, both at the beginning of the pipeline and at the end of it.
B
So I think we need to consider the use case where we only want outgoing authentication, right? We want an authenticator just for outgoing HTTP requests, so it should not be a prerequisite, for the outgoing authentication to work, to also have receiving authentication. The two should be decoupled, although you're completely right, you may want to implement an extension that does both and also does the propagation, so kind of...
A
Right now we have an interface that has to be implemented, I think it's a configauth authenticator, that has to be implemented for it to be a receiver or server authenticator, and another one has to be implemented for it to be a client authenticator, and that is in confighttp.ClientAuth or something like that. So if an extension decides to implement both, in addition to, you know, the extension interface, right, then it is called at two parts of the lifecycle. Yeah, yeah.
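A sketch of the dual-role idea, one extension type satisfying both lifecycles; the method shapes below only approximate the configauth/confighttp interfaces of that era and are not taken from a released version:

```go
package dualauth

import (
	"context"
	"net/http"
)

// dualAuth is one extension instance playing both roles.
type dualAuth struct{}

// Server side: invoked for incoming requests on the receiver.
func (d *dualAuth) Authenticate(ctx context.Context, headers map[string][]string) (context.Context, error) {
	// validate incoming credentials here
	return ctx, nil
}

// Client side: invoked to wrap the exporter's outgoing HTTP transport.
func (d *dualAuth) RoundTripper(base http.RoundTripper) (http.RoundTripper, error) {
	// inject outgoing credentials here
	return base, nil
}
```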
B
A
Else? No, no! It's someone, someone... Pravana thing. Okay, okay.
A
B
C
B
C
So for now, granted that this gets merged anytime soon, I'll be able to at least do what I've described in the beginning, right, and then whatever comes next, and try to leverage that.
E
Yeah, so I think I'm just still waiting on the PR to get merged by Bogdan. We got approvals from all our members; it was discussed and reviewed and approved, so Bogdan just needs to merge it. I don't think anything else is going to help me here. Maybe... I don't know if anyone else has the power to merge it.
G
Everyone has the power, it's... I mean, yeah, so I mean, the authority of the maintainers, you know.
F
Deep down can also merge, and it's not about the power here. So, I did not look in the past couple of days, I had some problems to take care of, but I will look in the next few hours, maybe tomorrow if not.
E
H
H
I
E
J
E
B
And I guess related to that, right: Bogdan and I discussed what we can do to make things move faster with the PRs, and we came up with a proposal to decentralize the development of the vendor-specific receivers and exporters, vendor-specific components. Not everything for now; we would like to start with receivers and exporters. This is still a very significant chunk that we're starting with, so we want to propose to...
F
F
H
Because, Tigran, again, it's a complex, you know, set of discussions, right? So let's all...
A
All right, so I was talking to José Carlos, not sure he's here. So I brought this item here, and that's basically something that we talked about before, on bringing authentication data into the pdata. He's the one doing that at work, and he had some questions that I couldn't answer, and it is linked.
A
So the discussion is linked there on Slack, but mainly: is there a document or a pattern somewhere that we can take a look at and see how that can be done? Because apparently, at least from what I saw, all the fields in there expect related fields to also exist in the protobufs that are generated from the OTLP spec, I think, and I couldn't find any precedent for fields that exist only...
A
Sorry, sorry, not spans: resource, I think, because authentication data is not tied to tracing, it's all the signals. Oh.
B
B
F
Resource always exists; it may be empty, but that doesn't matter. Juraci, as a starting point, can we put it as an attribute on the resource, with a specific reserved key?
F
So, resource has attributes, which is a map of AnyValue with everything, right, including bytes. So you can put anything you want there. Can we start by adding a semantic convention in the collector to say __auth_data__, or whatever key we come up with, and we put it as an attribute in the resource?
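A sketch of that starting point, assuming the hypothetical reserved key __auth_data__ and current pdata method names (PutStr; older releases used InsertString):

```go
package authattr

import "go.opentelemetry.io/collector/pdata/ptrace"

// authDataKey is the reserved key floated in the discussion; the actual
// convention was still to be decided.
const authDataKey = "__auth_data__"

// annotate stamps the raw auth data onto every resource in the batch.
func annotate(td ptrace.Traces, rawAuth string) {
	for i := 0; i < td.ResourceSpans().Len(); i++ {
		td.ResourceSpans().At(i).Resource().Attributes().PutStr(authDataKey, rawAuth)
	}
}
```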
A
F
That's fine! So what...
A
So what I wanted originally is to have a struct within the resource, so not Resource itself but a struct inside it, based on what we discussed: a struct within that called Auth, for instance, and within the Auth we have three attributes.
A
One of them would be, like, raw, so it is the raw data that we got from the receiver, from the client; and then a subject, or principal, or username, or whatever name we want, to represent the user; and a slice of strings representing the group memberships for that particular user.
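The three fields described, written out as a Go struct; the names come straight from the discussion, nothing like this existed in pdata at the time:

```go
// AuthData is a sketch of the proposed auth struct inside the resource.
type AuthData struct {
	Raw        string   // raw data received from the client via the receiver
	Subject    string   // principal / subject / username (may be a service account)
	Membership []string // group memberships for that particular user
}
```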
B
A
Yeah, not always, but also in general it is very normal to use either principal or subject as the username, because it might not be a user; it might be a service account, right, and the service account is not a real user. So those would be, like, subjects or principals, depending on the system that you look at.
B
B
F
Requests... but how do you implement a proper pass-through? For example, you have a library talking to an intermediate agent talking to the collector, and you want to authenticate from the library to the back end. So kind of similar to what Splunk does with the token: the user sets the token in the application, and it passes to a collector, that talks to another collector, that talks to the back end. So just the last one needs this data, to be authenticated for that.
F
So in that case, more or less, we also need to put it on the wire, because the authentication between the first library and the first collector will use SSL or whatever other authentication, or whatever. But it's not going to be the same token that you need for authenticating with your backend.
B
It's... well.
F
B
F
B
A
It is, I think, pretty much the same case as Patrick was talking about before: you don't have authentication on the receiver, but then one extension of yours does the authentication, or uses a token from I don't know where, from its configuration perhaps, to obtain a token against an identity server, and that token is then used for the exporter. Now, that's not pass-through.
A
It's just client authentication on the exporter side. Pass-through would be on the second level, where a collector would receive this token from the client, or from the agent, under a certain header, and would use the same contents from the same header when doing the next hop, right? Am I making sense?
B
B
Yeah, Bogdan is describing a more complex case. Consider that, let's say, you want to use pass-through, so that somehow the very first node in your pipeline wants to pass a token through all of the agents and collectors, all the way to the back end. But at the same time, let's say the first node is an application, right? The application wants to also authenticate with the agent.
B
You have two bits of data that you need to preserve somehow; that's what Bogdan is describing. So you cannot just use the headers for passing through the token, because the headers are already occupied: they contain whatever authentication information is necessary for the first leg, from the application to the agent, and supposedly that's different. I mean, that's possible to imagine, but I don't think we need to support that.
B
That's how I would put the restriction there. And so, if you do that, then for the pass-through the assumption is that if it comes in the HTTP headers, then you put the authenticator on the receiver, you tell it that this is the header that needs to be preserved and passed through the pipeline, and you put another authenticator on the exporter and specify the exact same header to be passed through, and that's how it works, right? It gets propagated through the pipeline and through the HTTP headers all the way.
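A sketch of that header pass-through, assuming a hypothetical header name and context key; it only works if every component propagates the context, which is the constraint being discussed:

```go
package passthroughauth

import (
	"context"
	"net/http"
)

const passthroughHeader = "X-Tenant-Token" // hypothetical header to preserve

type passthroughKey struct{}

// Receiver-side authenticator: stash the header value in the context.
func authenticate(ctx context.Context, headers map[string][]string) (context.Context, error) {
	if v := headers[passthroughHeader]; len(v) > 0 {
		ctx = context.WithValue(ctx, passthroughKey{}, v[0])
	}
	return ctx, nil
}

type roundTripperFunc func(*http.Request) (*http.Response, error)

func (f roundTripperFunc) RoundTrip(r *http.Request) (*http.Response, error) { return f(r) }

// Exporter-side authenticator: re-inject the exact same header on the next hop.
func wrap(base http.RoundTripper) http.RoundTripper {
	return roundTripperFunc(func(req *http.Request) (*http.Response, error) {
		if tok, ok := req.Context().Value(passthroughKey{}).(string); ok {
			clone := req.Clone(req.Context())
			clone.Header.Set(passthroughHeader, tok)
			return base.RoundTrip(clone)
		}
		return base.RoundTrip(req)
	})
}
```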
F
Okay, so what I'm hearing so far is: we may need a way in pdata to properly propagate things that are not actually propagated on the wire.
B
A
B
Well, one way is to just touch the proto, right? Not touch the proto itself, but patch the generated files to allow an additional in-memory field in a generated message, in the in-memory version of the generated proto message. So we do not touch the proto; there is no field that is serialized or deserialized.
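A sketch of what such a patched generated message could look like; the field layout mimics the proto Resource, and the patched field is an assumption, not actual generated code:

```go
package patchedproto

// KeyValue stands in for the generated v1.KeyValue type.
type KeyValue struct{ Key, Value string }

// AuthData is the in-memory struct sketched earlier in the discussion.
type AuthData struct {
	Raw        string
	Subject    string
	Membership []string
}

// Resource mimics a generated message with one patched-in field: authData has
// no protobuf struct tag, so marshaling never sees it and nothing changes on
// the wire.
type Resource struct {
	Attributes             []KeyValue `protobuf:"bytes,1,rep,name=attributes,proto3"`
	DroppedAttributesCount uint32     `protobuf:"varint,2,opt,name=dropped_attributes_count,proto3"`

	authData *AuthData // in-memory only: never serialized or deserialized
}

func (r *Resource) AuthData() *AuthData     { return r.authData }
func (r *Resource) SetAuthData(a *AuthData) { r.authData = a }
```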
B
F
But again, the suggestion is: we can start doing it with attributes to prove that things work, and not release that publicly or whatever, but just to continue the work; and in two or three weeks, when we're clearer on how the format of that struct should look and stuff, we can discuss how to make it happen.
B
It makes sense, I understand, but Juraci has a good point: there is a danger of leaking this data, which you cannot... So if you want to guarantee that we don't leak, then in the pipeline we should insert something that sanitizes this data, that guarantees the exporters never see this data. Okay, I think.
F
L
A
Okay, I think... as much as I don't like the patching idea, because of the obvious reasons, I think I would prefer that over using the attributes.
A
So if you don't mind, I'd like, at least for the prototype, to follow this approach. Sure.
B
Next. All right, next: Prometheus receiver enhancement question.
M
Yeah, so this is just, like, some questions I have. So, to give an introduction: Iris and I, another AWS intern, are working on a project where we're going to enhance the Prometheus receiver within the OpenTelemetry Collector. What we're trying to do is implement a server with an endpoint in the Prometheus receiver that will take requests to update a list of scrape targets within a file that the Prometheus receiver is watching, and I linked the GitHub issue there.
M
If anyone would like to look at it. But basically, when going to implement this server that serves an endpoint and takes in requests to update a list of scrape targets, we just had a couple of design questions, to get some advice or any knowledge about the collector from the others. So my first question is just: does anyone have any suggestions for how the server should be set up, design-wise? Because right now we're thinking of using a normal HTTP server.
M
That would, for example, take in, say, a PUT request to update the list of scrape targets. But if anyone has any other suggestions, or any more knowledge around setting up a server there, I'll be happy to hear them.
B
M
So, the goal here: this is just a small part of a larger project, but right now this small part is just being able to take in requests to update the list of scrape targets that the Prometheus receiver is scraping from. So it would update this list of scrape targets within a file, and the Prometheus receiver would be set up with configuration to watch this file, using a file_sd_config, to update the list of scrape targets there.
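A sketch of that endpoint: a PUT handler that rewrites the file_sd JSON file the receiver watches. The path is an assumption; the JSON shape is the standard Prometheus file_sd format:

```go
package targetserver

import (
	"encoding/json"
	"io"
	"net/http"
	"os"
)

// targetsFile is the file referenced by the receiver's file_sd_config (path assumed).
const targetsFile = "/etc/otelcol/prometheus_sd_targets.json"

// handlePut accepts a body such as
// [{"targets": ["app:9090"], "labels": {"job": "app"}}] and rewrites the file;
// the receiver then picks up the new scrape targets via its file watcher.
func handlePut(w http.ResponseWriter, r *http.Request) {
	if r.Method != http.MethodPut {
		http.Error(w, "only PUT is supported", http.StatusMethodNotAllowed)
		return
	}
	body, err := io.ReadAll(r.Body)
	if err != nil || !json.Valid(body) {
		http.Error(w, "body must be valid file_sd JSON", http.StatusBadRequest)
		return
	}
	if err := os.WriteFile(targetsFile, body, 0o644); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	w.WriteHeader(http.StatusNoContent)
}
```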
B
So, there is this notion of remote configuration that is being added to the collector. There is already an implementation in our own distribution, and we will be adding it to the core, which allows you to essentially do the opposite; I guess not the opposite, you could do exactly just that. You could have a remote configuration source, a custom one that you can implement, and then that could serve, well...
B
It could do polling of the configuration, or it could serve an HTTP endpoint to which you can connect to push the configuration, which is what I understand you want to achieve here. I would do it that way, right? Instead of making this some special functionality that is available only to the Prometheus receiver, I would just implement a custom, very generic remote configuration source, which you can then use in the configuration file.
B
H
Yeah, I mean, there are obviously a couple of outstanding PRs, at least one, and also, thanks for bringing up that you're thinking about this. But in the short run, you know, there are deliverables that we wanted to actually add for the Prometheus pipeline specifically, and perhaps, again, I'd like to better understand what your timeline for adding that functionality is, because we'd like to see this...
H
...you know, by the end of May. I mean, that's the target we were trying to hit, and we need to have this. So perhaps we can add it, in the short run, to the Prometheus receiver, and then, you know, happy to remove it once the general implementation is available. Would that make sense?
B
It makes sense. Let's do this: I will talk to Paulo; he's not in this call, I think. No, he's not. I will talk to him. I believe everything is ready and works in our distro, so it's a matter of just upstreaming it, but I may be wrong, I don't know; maybe there are some things that block this. Let's do that: I will talk to him, and if we can do that quickly, we will upstream it to the core, and then you will be able to implement it as a remote source.
B
B
H
Yeah, I've tagged you, Tigran, on the issues. I definitely am waiting for some comments from you, but that's good to know. Again, I definitely would say that if we could help, you know, in just getting the Prometheus pipeline... because this is very focused and limited to the Prometheus receiver right now, and that unblocks the compliance testing that we want to be doing. So again, timelines are important here.
B
Okay, I understand, and let me think it through; maybe what I'm suggesting is not even a very good fit for what you want to achieve. Let me try to put the proposal in the issue, you can have a look, and only if it does fit will we move with that. If not, then we'll just do what you wanted to do. Okay.
B
Okay, I think, well, we're almost out of time, but there is one last thing. David, do you want to maybe go with that, and...
L
Yeah, sure, I'll try and be quick. We discussed this last time: I'm trying to add end-to-end metrics for pipelines, and I prototyped three different implementations that I was able to get working.
L
One is to do end-to-end with context, so it relies on all the components actually propagating context correctly. I haven't implemented it for the batch processor, but I think it could be done to support a list: basically being able to merge two contexts together and end up with a list of start times and multiple metrics.
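A sketch of that context-based option, with hypothetical helper names: the receiver stamps an ingest time and the exporter reads it back; everything in between must propagate the context, and a batch-like processor would need to merge several of these into a list of start times:

```go
package e2emetrics

import (
	"context"
	"time"
)

type ingestTimeKey struct{}

// MarkIngest is called by the receiver when data enters the pipeline.
func MarkIngest(ctx context.Context) context.Context {
	return context.WithValue(ctx, ingestTimeKey{}, time.Now())
}

// E2ELatency is called by the exporter when data leaves the pipeline; it
// reports false if some component dropped the context along the way.
func E2ELatency(ctx context.Context) (time.Duration, bool) {
	t, ok := ctx.Value(ingestTimeKey{}).(time.Time)
	if !ok {
		return 0, false
	}
	return time.Since(t), true
}
```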
L
L
I did find, in implementing the prototype, that it does require, for example, processors that create a new pdata-dot-whatever to actually add code themselves that deals with making sure that the start time is correct. So I'm less excited about that option than I was last week. And the other option, if we think neither of those two is palatable or going to work well, is just to stick with per-receiver, per-processor, per-exporter latency metrics.
L
B
L
L
But that doesn't mean that I couldn't, for example, if we really don't think the options are good, settle for receiver, processor, and exporter latency, because you could measure each one of those individually and then come up with something that roughly represents, or at least measures, whether things are moving along in an expected amount of time.
F
So every time a new pdata is created, it adds a start time, or creation time, to it, and then you measure it at the end. But the problem is: there is no guarantee that people will not mess with the pdata and will keep the data everywhere, so that may not work. The other option is to use the context.
B
Not just batch, right; all the other asynchronous ones... no, asynchronous, I think, where you skip the context, which we preserve... the complicated ones, like group-by.
B
B
Yeah, that's a problem; no matter what you do, it's a problem. I don't know. How do you calculate the latency of data that changes, like, morphs as it passes through the pipeline, right? The latency of what, precisely, if what enters the pipeline is no longer what exits it? It's difficult to tell, actually, what the latency is here; how do we identify the bits of the data to tell that, okay, this is the same thing that entered and exited, and here is...
B
I can timestamp those events, and that's my latency. If you do those weird things like group-by does, it's no longer... I don't know how to define the latency in that case, even. But maybe that's fine, right? Maybe what we care about is the regular, simple cases, when you have just those simple processors, the synchronous ones, plus the batch. The most common use case, right?
B
L
Okay, this is useful feedback. I think what I'll do is, I will try more fully implementing the context-based end-to-end ones for things like batch. And are there any other processors... I think you mentioned there was a group-by; are there any others that you think will be problematic?
F
I don't think so; group-by is the one that will probably cause you the most pain. Batch is pretty nice, in a way, in that it doesn't split too many of the pdata. It's just, like...
F
Maybe... I think that's the only one that will cause that. There are others there; there is another one that splits the pdata, I think that's functionality in the batch as well, so it splits.
B
B
F
Another option, David, would be for you to kind of monitor from outside, because of the complexity stuff; you may want, when you give SLAs to the user, to monitor from outside. So essentially what I'm suggesting is: you can ingest a metric, a span, whatever, measure the time when you ingest it and when it goes out, for that specific span, that specific log; that would maybe be another way to measure your SLA.
L
Yep, I agree. Is that something that you would see as being included in the collector? I think what was actually proposed in this issue was to add a metric called metric freshness, or telemetry freshness, or something, basically saying: at the time when it exited this component, this is the age of the metric, or the trace, or something.
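A sketch of that freshness idea for logs, assuming current plog names (which postdate this meeting): report the age of the oldest record in the batch at the moment it exits a component:

```go
package freshness

import (
	"time"

	"go.opentelemetry.io/collector/pdata/plog"
)

// OldestAge walks the batch and returns now minus the oldest record timestamp,
// i.e. the "age of the metric or the trace" idea applied to log records.
func OldestAge(ld plog.Logs, now time.Time) time.Duration {
	var oldest time.Time
	for i := 0; i < ld.ResourceLogs().Len(); i++ {
		sls := ld.ResourceLogs().At(i).ScopeLogs()
		for j := 0; j < sls.Len(); j++ {
			lrs := sls.At(j).LogRecords()
			for k := 0; k < lrs.Len(); k++ {
				if ts := lrs.At(k).Timestamp().AsTime(); oldest.IsZero() || ts.Before(oldest) {
					oldest = ts
				}
			}
		}
	}
	if oldest.IsZero() {
		return 0
	}
	return now.Sub(oldest)
}
```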
F
B
All right, thank you. So what we want to do is to enable the vendor-specific components to be developed outside of the OpenTelemetry GitHub. As a vendor, you can create your component in your own repository, at your own pace, reviewed by your developers, and we just import it as a dependency, even auto-updated by Dependabot.
B
Why do we want to do that? First of all, we believe that reviewing these components takes a very significant portion of the time of maintainers and approvers. So we want to eliminate that time, meaning that, first of all, obviously, those components themselves can move at a faster pace, because you control them yourself as a vendor; and also, by freeing our time from doing that, we can do a better job on the other components, which still stay in the collector repository.
B
Now, to do that, we would like to have automated checks that verify that components that live outside the collector still satisfy certain criteria before they are included in the default build of the collector. I put together a document which outlines what we would like to be validated for components in an automated manner.
B
We still would like to have the documentation in one place for the user to have a look at, so: generation of the documentation from your source, with publishing in the collector repository; verification of configuration compatibility, so that we ensure whatever changes you make don't break existing configurations. And I think that's probably it, right? Not a ton of work, not a ton of requirements, but we would like to set the bar there, I think. Yeah.
H
Okay, that doesn't mean it's the project's proposal. So I mean, I would request that we go through a more formal, you know, discussion on this, because, again, I agree with you that the flexibility is good, but on the other hand, you know, without the right quality requirements... just as you mentioned, you know, from the point of view of providing the testing compliance and other quality guidelines.
B
Sure, we can discuss that. I mean...
H
You know, some of the use cases you're coming with... but on the other hand, could we work with an intermediate approach of, you know, having separate repos on the project, and then not burdening the maintainers, at the same time adding maintainers to the project for those components, and also building out the testing and other compliance requirements, which then enable the ability to support third-party hosting as well as on the project itself? Because I...
B
B
I did not mean it happens today, absolutely not. So, maybe kind of restricting the location to the open-telemetry GitHub org, but still decentralizing it to some degree by making it a separate repository; that's a possible option, let's discuss that. If you would like it to be maybe a broader project discussion, we can do that. But I guess, anyway, that's the sentiment that Bogdan and I have: we would like to give more power to contributors, to do their work more independently.
B
How far we go with that... I hear you; let's discuss that before we make the final decision. However, I think, conceptually, I would like to see that, right? Instead of forcing the maintainers and approvers to do the reviews to keep the quality bar, and passing all the changes through the central authority, so to say, I would like to give more power to contributors to work independently from that.
H
Yeah, Tigran, thank you for, you know, starting to think about it. Because, you know, and that's why, even for a starter, and this is again an interesting area, which is kind of great... what would be a good example...
H
...is the Prometheus components, right? Because even if they sit in their own repos, it would actually greatly unblock the, you know, core maintainers, and be able to provide, you know, the ability to prototype the model of testing compliance and CI/CD requirements, etc., which...
I
H
...could then be replicated as a clear process for quality and compliance for the project, you know, and any kind of other instrumentation or third-party development that occurs in the long run.
B
Okay, so I guess we're on the same page: we will need to figure out the specifics here, and either way, this cannot happen until we do the GA, yeah.
F
B
We discussed that with Bogdan; this should happen only after that, because with the current approach, with a single location for all components, when we make breaking changes it's on us to also fix everything, and it is possible to do that precisely because it's in one repository, in one location. As soon as we decentralize, that is no longer going to be possible, and we will not do that until we are sure that the interfaces are stable and we're not going to break people's code by making the breaking changes that we do today, because we're not stable, right?
B
So this is for after the GA, so we still have time to discuss and figure out the exact details of how we want to do that. So, I guess, let's do that, Bogdan, if you're okay with that: maybe we go and talk also to the GC members who are interested in this, and we clarify the details and then make the final proposal.
A
I have a couple of questions. The first one is on the motivation, and I think part of it you just mentioned: when you make breaking changes, then maintainers have to go and fix the issues. But what other pieces of code, or which other modules, are the maintainers taking care of in the contrib, which is...
B
For the exporters and receivers, the vendor-specific ones, that's, I would say, maybe even the majority of the code, volume-wise. Maybe not recently, from the perspective of how many PRs we get, but from the perspective of how many lines of code exist there, I think that's the majority. I may be wrong there, and we still get plenty of that.
B
F
So, for example, the majority of the time is dependencies, and it is not reviewing the Dependabot PRs, because those are trivial; we can optimize to merge them if you want. It's the fact that we have dependencies on the whole internet. With all of these things, we have a dependency mess there. So the problem is resolving all dependencies, moving things one by one in order to not break dependency graphs and stuff like that. That's a very hard problem, which, indeed, is going to be the same.
F
H
Again, Bogdan, these are good assumptions. You've also said, you know, earlier, as even Rehan is suggesting, that we want to make sure that, you know, the project, and the collector being at the heart of that, maintain integrity, right, and quality, and that's a core goal of the project. So let's work through the...
H
...you know, again, the technical issues, like the dependency issues; and we need to have clear guidelines on how to maintain that, so that if we decentralize, you know, how does that actually get resolved and set up? I mean...
A
All right, so perhaps one food for thought here, perhaps not even a real suggestion, but perhaps what we can think about is having the maintainers not care that much about the contrib from a specific point in time on, and then, instead of, you know, branching the contrib into multiple repositories...
A
...keep that, say, as one, with more, you know, lax requirements for merging things, so having more maintainers for that specific repository, and then having a separate repository for the distributions. So we could have one specific repository that just builds OpenTelemetry Collector distributions, like the core, and then a whole "all modules" or "all contrib" or whatever you want to name it, and then those sets of tests, those configurations, or those requirements.
F
F
F
The other thing that was hard for me, FYI, for contrib, is: everyone has a crazy idea of doing something, they put a PR down, and it's very hard to say no, or to say, tell me why, and stuff like that. So these are other things; like, we spend a lot of time reviewing unexpected PRs or unexpected ideas.
H
I think, Bogdan, that's, you know... and I totally feel your pain there. I think that's because, you know, there is no clear process for submitting designs, for requirements; there's not enough information on the issues, and, you know, contributors absolutely have to raise the bar in terms of submitting more technical design detail when they are proposing components to be built. So I think that process would alleviate it.
F
Okay, no, that's a good suggestion. So anyway, also to clarify a bit: what Tigran was proposing was not to move common processors, like group-by-trace-ID or something like that; it was more referring to the Splunk exporter or an AWS-specific exporter, that, unless you are using AWS, you don't need... like, unless you are using AWS X-Ray...
F
...you don't need that. But, for example, for EKS or other things, we want to have them common, because people may run on EKS and they may not use X-Ray or EMF or whatever. So, things that are shareable, we still continue to have in contrib; things that are only for that vendor, we wanted to have separately. That was the initial thought, again.
H
Yeah, makes sense, makes sense, Bogdan. I mean, again, really appreciate you guys, you know, bringing this up and thinking about it. I think that if we could, you know, prototype some of these possible...
E
Hello, I had one concern here. Like, so, I think the vendors can already do that today, right? We have the OpenTelemetry Collector, then we have the contrib collector, which has common things like the metrics transform processor, group-by-attribute, group-by-trace processor, something like this; and at AWS we can already do that, we have our own, like, collector contrib repo. So I think all the vendors already have the option, but what we are looking for, like, okay, so still these things are common.
E
Maybe we are trying to put those things into the contrib repo, and also, yeah, I see, like, there are some, like the SignalFx exporter and also the AWS components, that are in the contrib. So is our plan, like, moving them to, like, another, lower level: OTel Collector, then contrib, then the vendor ones, which are going to be maintained under the OpenTelemetry... yeah.
H
I mean, that's a possible workflow, right? I mean, again, this is just a discussion, this is not a decision, right? It's something that we need to work through and make sure that there are the right checks and balances, in terms of compliance, in terms of making sure the integrity of the code is maintained, and, at the same time, functionality and maintainer...
H
E
N
All right, I think I have a question on this. Presumably, to be a dependency you have to meet certain minimum criteria, right? Like, if I just, you know, build some sloppy experiment that's a proof of concept, I wouldn't meet the criteria, right? What are those criteria, or is that something that you think you'll flesh out as you figure this out? Yeah.
B
H
Joe, I mean, again, the thinking here would be that, you know, there are workflows, fully automated workflows, which, you know, can support the standardized CI, standardized CD, standardized security vulnerability checks, standardized compliance tests passing, etc. So that is not in place today.
N
Okay, and sorry, one follow-up question: is there sort of a subgroup of people who are interested in focusing on the problem of integration testing as it pertains to the collector?
N
H
But what there is not is a formal group. Again, a proposal that has been discussed in the GC also is to have a working group within the TC, you know, which is formed of these folks; I'm one of them, who is interested in seeing that compliance and that integrity being maintained through... okay, all...
N
N
Count me among the interested. We are... I don't know if Josh or Punya or anybody has shared this, but we have a goal to add 30 third-party application integrations to our agent, which is based on the Linux collector, by the end of the year, and that doesn't sound like a ton. But in practice, you know, it's a lot of stuff to work through.
F
N
H
...can use it, so definitely would love to see that.
D
Can I ask you a question really quick? Sure. I noticed there's a recording going on; do you know where that's posted, exactly? It is uploaded to...
D
Correct. And then I had another quick question, sorry if anyone else is waiting, but I think a few of us are, like, just joining the AWS observability team as interns this summer, and we're just wondering if you have any advice for us to get started. Especially, like, we've been trying to look through some of the GitHub issues, like good first issues, but I don't know if you would recommend we look into any resource or contact anyone, really.
B
So I guess, if you're asking from the perspective of the collector... unfortunately, we're out of time, and another meeting is starting right now in this room. So please, post your questions, maybe in GitHub, or...
B
B
I think different time zones all over; I guess we have people from Europe as well.
B
And welcome. All right, so I guess let's start with the body versus attributes, right? Premek, you were looking into this, you were discussing this. Maybe can you... I guess I read the thread, the comments that you and the other person made. I am still not sure what's the... if there is a proposal, what's the proposal, and if not, maybe we can discuss it.
O
Yeah, and... yeah, I was going a bit back and forth with that, because really we have attributes at several levels.
O
We have resource-level attributes, which have some specific meaning assigned to them; we have record-level attributes; and then we might also have attributes in the body, if the body is structured and can contain a map. And now the question is, like, when should a record-level attribute be used versus when should body attributes be used? Jesse was asking this question; actually, he was suggesting that maybe we should just have, let's say, instead of the body having either a raw string or some sort of map or array or something like that...
O
...maybe it should be just, like, a raw string as the only option, and the attributes should go to attributes, or maybe something different. And we've been discussing this a little bit. One of the concerns that Jesse had was: if someone is using some custom key names, some custom tags that are not present in semantic conventions...
O
...is it even okay to put those into attributes? But actually, the specification addresses this issue; there's an example, for example, with Zap logging, that if you have some custom attributes, then they are fine in the attributes field of the record. So I think we ended up with no conclusion here, but from my perspective, this is something that comes back every now and then: when to use body-level attributes versus record-level attributes.
B
B
If you look at, I don't know, I brought the example of syslog, right: there is the message, and there are structured attributes, right? There is the clear distinction of the message, or something that is unnamed, and the attributes, which are named. Similarly, if you look at the logging libraries, they also have this notion, right: there is the message, and there are then fields that you put in addition to the message. So I think both body and attributes have their place, and they are necessary simultaneously.
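The syslog split described here, written against current plog names (which postdate this meeting); the attribute keys are illustrative, not settled conventions:

```go
package syslogmapping

import "go.opentelemetry.io/collector/pdata/plog"

// fillRecord shows the division of labor: the unnamed message goes into Body,
// the named structured-data fields become Attributes.
func fillRecord(lr plog.LogRecord) {
	lr.Body().SetStr("Failed password for invalid user admin")
	lr.Attributes().PutStr("syslog.appname", "sshd")
	lr.Attributes().PutInt("syslog.procid", 4242)
}
```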
B
As for making the body just a string, I think that's also probably not very desirable. We were just discussing, very recently, the ability to record raw data in the body, just bytes and not strings, and that actually makes at least a second data type that you would want to record in the body, right? So it's not simply strings, which, at least in the current implementation, are implied to be Unicode strings, so valid Unicode character sequences; you can have binary data in the body, uninterpreted, unknown encoding, which we want to add.
B
So I think, well, at the minimum I can imagine these two use cases for the body being separate from the attributes, but still not being just a string. When you do that, I guess the body is a string or binary data; I mean, it's a small leap, right, it's not a big leap to say that, okay, let's make the body an arbitrary AnyValue, which we also allow for all the attributes.
B
I mean, we just say: let's make it possible to record more complicated data, let's not restrict it, because I don't quite see what we gain by doing that restriction, right? Does it buy us anything?
P
P
So I think figuring out kind of what, semantically, those things are, how they're different, makes a lot of sense, to at least have some advice for that.
B
The generic advice, that is the difficult part, right? For the specific cases it's pretty clear: when you look at the specific data sources that you need to model in our data, you know, log records, that's pretty clear, and we have the examples, right? For syslog, you have the message and you have the structured attributes, pretty clear.
B
You put the message in the body, and the structured fields go into the attributes. But I have a hard time coming up with more generic recommendations, about having some sort of litmus test, right: when you have this data source, how do you make that decision? So that's the difficult part that I was not able to...
J
I mean, I'll put up there the way I've thought of it in the past; not that this is necessarily the best way, or to even suggest that it is, but just to put it up there as a data point. So I've always thought of the attributes as basically meaning: interpreted by someone who is consuming these logs, which is essentially, I think, equivalent to...
J
...conventions. So this is to say that someone's gone through this, picked out each piece of data, and said: this belongs here and it should be called this. But logs can contain, you know, a mix of things like that and other stuff that just... it may be well structured and predictable, but no one's taken the time to understand it yet; it may be completely arbitrary, written by a developer, you know, just blasting fields into a, you know, Zap log or something like this. And so that's been...
J
The distinction for me is, and kind of the reason why I've always thought of the body as, sort of, like: when you just interpret it as JSON, now you have something structured but uninterpreted, and then you sort of manually promote these things that you've interpreted up to the attributes and put them where they belong.
O
Yeah, and I think the way I would phrase this problem here is, like: when does it make a difference, like, when does it make a difference in the pipeline, or maybe in, let's say, the vendor implementation? And then, just as you have mentioned, like, data that can have some meaning assigned versus data that is only structured, and that's it.
O
Maybe when you are, like, ingesting this data and you want to, let's say, index attributes as separate fields, because you want to query by them; and maybe you don't need to do that with the body, even if it's structured, and you can, like, store it in any other way. Maybe this is the difference: we assume that attributes can always be processed easily, and the body not necessarily. But it's a weak definition.
P
Well, I think, actually, this kind of dovetails a little bit into Tigran's recent schema URL: now that we've got a schema, that, you know, we can actually change, we can say, you know, it's not this schema, it's that schema.
B
B
Yeah, in some ways, I guess. Well, at least in the current definition of the schema, it only defines what goes into the attributes; it doesn't tell what the body looks like. But it may in the future, right?
P
Q
So I still feel... I think I've been, from the beginning, a big proponent of having the body be, you know, very open-ended, right? Because, you know, the wide variety of, you know, logging that we all know we've seen over many years in the field, and there's very little...
Q
It just is a massive effort to sort of try to basically normalize all of it, right? Yes, you know, resource always made perfect sense to me, because we had come to that sort of conclusion that that is true metadata: it describes the thing that emitted the telemetry, so that was easy. Attributes is a little bit sort of an in-between, right, where we kind of indicate that, if you happen to have some structure, you should feel free to put it there.
Q
The spec suggests that you should, you know, it doesn't say "must", right, follow the semantic conventions. I think the interpretation that I would suggest is that if you have stuff that follows the semantic conventions, you should be very welcome to put it there, and I think vendors will, like, you know, pick up on that. If you just basically, you know... the sort of freestyle thing: if somebody just blasts out KVPs, you know, in Zap, those don't go to attributes; in my mind that just doesn't seem to really make a ton of sense.
Q
So, even though the spec says that you should, it doesn't say you must use semantic conventions; it's, like, you know, a free country after all, you know, hopefully, the world or what have you, and let's not go there. But then, you do that at your own peril, you know. But, like, ideally, attributes are for things where semantic conventions exist, outside of, well, resources.
Q
I think, you know, there is this design, and again, you know, these days a log... it doesn't just... sometimes it has a message, sometimes it has a message and some sort of trailer of, like, KVPs, or all these kinds of different permutations. I think there continues to be an unease about, like... I think many, many folks have this kind of, you know...
Q
First of all, from a data format perspective, so you can, say, deserialize it if it happens to be JSON or whatever, and then also maybe, if there is a schema, you know... And I would not... I don't think "message" is a thing; I think it's almost a thing, which is... I think we will probably continue to have folks who feel somewhat strongly that message should be a top-level field, but I think it's just almost a thing; it's not quite a thing.
Q
So, I'd like it, using, you know, highly scientific terms here. So that's kind of my thought on this. I was following this, you know, I was going to jump in, but, you know, I think he did a good job picking it up, so yeah. So I don't want to be dogmatic, but I think what we did originally, just to back this out, like, in my mind still stands.
P
I definitely think that the structured body is the most important: being able to take an arbitrary JSON object and keep that structure, but not necessarily be able to know what it is, I think that's super important, you know. And being able to hoist some of that information up into the attributes has some power, but that distinction is tough.
Q
You know, so maybe we have to sort of do a critical read of the spec again and see whether there is language, and we may have to make this a little bit more clear: that the body just represents the raw log, in whatever freaking form you have it, and, you know, if your raw log happens to be somewhat cooked, then that's also fine, just put it in there cooked. If it's, like, cooked to a degree where the line at least partially aligns with semantic conventions, okay, put it into attributes.
P
Well, you know, I think, the semantic conventions piece: because now we've got the schema, we can change what those... you know, if we want to use a different schema, that's a different... and if I'm wrong here, please squash me, but if we've got a different semantic convention that we want to use, that's where we use the schema. So that gives us an escape hatch.
P
P
B
I am not so sure about that. Take the syslog example again, right: if it's structured syslog, it has the fields, right? Should we then somehow not put that into the attributes? And how do we do that? In that case it goes into the body, you're saying, so we will need to invent some way of representing already-structured data in the body.
Q
Yeah, you would just... whatever, that's just... look, syslog is standardized now, there's a bunch of RFCs, right, whatever that is, you know, and whoever produces it will hopefully follow them. Most people don't, I guess; yeah, most people do it, mostly, you know. But in my mind that continues to just go into the body, and then you say, okay, you know, for people who find this interesting: please interpret the body.
B
B
...that does not know anything about the schemas. So I think putting the line there, trying to say that if you put anything in the attributes then you're supposed to be schema-aware and follow the semantic conventions of that schema, I think that's too much of a restriction. I would still want legacy data sources, which are structured in a way that matches our understanding of the body and attributes, to use the body and attributes as we have in the example mappings in that document.
Q
Q
B
I
Q
B
P
P
So, at that point, we've got a mix of semantic and who-knows stuff. So there's, again, back to no distinction between, like... if it's just raw JSON, if we put that into the body, now we're kind of treating that differently in both places.
B
Right, right, but there is still this newer world where the sources know what the schema is, know what the semantic conventions are, and they follow them, and in that case the schema URL is also recorded and included. If it's not, then it's an indication that this is emitted by somebody who has no idea about the schemas, maybe even about semantic conventions; or maybe they know about semantic conventions but they are mixed. And, you remember, we have this recommendation about how to name your custom attributes.
P
I think, a little bit, where I'm struggling here isn't necessarily where the attributes are, where the body is, but it is putting stuff that we know what it is and stuff we don't know what it is into the same bag; like, now we're sticking everything into this one bag, and we don't know. So if we're looking at a logger level, right, that is something that we semantically should know about, but if everything's going in the same bag, then instead of being able to say: do we know the logging level? No, we don't.
P
P
B
Okay, that's... I guess that's a good point. I don't know how you solve that. But at least in the world that is being transitioned to, the shiny future where everything is known attributes and everything has the schema associated with it... the reality is going to be more like what you described, right: some mix of custom attributes, plus some of the attributes actually matching the semantic conventions. That is likely going to be the reality, right?
P
B
Yeah... I guess what you're raising is probably maybe a separate addition, or maybe a related issue, but I would make it separate, right? This still does not help us to tell whether things should go into the body or attributes. What you're saying is more like: there are some well-known attributes versus some not-so-well-known ones, the custom ones. Maybe... I don't know, maybe... well, it sounds like a separate piece. Well, kind of; I think what I'm...
P
B
Okay, so anyway, I'm still not sure, even after this discussion. If anybody is able to come up with some concise litmus test that we can put into the data model document, like one or two sentences which say: put this in the body, put that in the attributes...
B
...that would be great. I'm still struggling myself to come up with that concise definition.
P
Well, I think what I'm trying to figure out how to say is: not knowing whether it belongs to the schema or is entirely arbitrary, and we can't know what it is... that indistinct piece is what makes this unstable. We don't know which way it goes because we don't know what it is.
I
P
And so I think that's the problem we've got to find an answer to, to be able to make it consistent.
B
P
P
Yeah, well, so, from the message level, right: we don't know what it is. If we do know what it is, then we should put it in the place where we know what it is, right? So, like, Zap or Log4j log level, right: either it comes in a format that we can't parse out...
P
...you know, it's a JSON-formatted thing and we just put it in the body, because we don't know how to hoist it up; or we do know how to hoist it up and it should go into the semantic convention. But there's kind of, like, either we know about it from a processing standpoint or we don't know about it. If we've got a big list of things it could be, that seems like a bad place to go, right? I'm...
B
...not sure I follow what you're saying there. So I don't see it quite as just having two extremes, either we don't know anything or we know everything about it. It seems like there is a middle ground as well, right? With the Zap logger example, right, there is no schema; we don't know the schema in the sense that we define, so there's no OpenTelemetry schema, we don't know exactly what the field names are, but we still know something, right?
B
P
P
We should at least be able to capture everything in the body, like, everything we know about it; whether we know the semantics of the keys is different from being able to keep everything that we could discover later, right? But as far as, like, a message from Log4j... it's not coming in an OpenTelemetry format, but we know it's got the right information; having an enrichment function at some point that takes in something and says, okay, well, for Log4j...
P
Q
But, if you don't mind, let me jump in. So, David, I think what you're struggling with is that there's one container called attributes, right, and we sort of say those KVPs should, you know, follow the semantic conventions, but if they don't, that's okay as well. And then, if you look at it from the perspective of the recipient, that's a bit of a, you know, WTF kind of thing, because now what are you supposed to do, right? So, at the risk of, like, exploding that whole thing into too many fields, but, like, what...
Q
...if we had, like, you know, and I'm going to use descriptive names, you know, two fields: one, like, "attributes as described per schema", and then another field that basically says, you know, "free-for-all", and you...
I
P
Q
J
Christian, in the way you described it, when you say the free-for-all field: if we had that, to me that's, like, how I thought of the body, right?
P
So the challenge there is kind of Tigran's example: some logging libraries have a body plus freeform attributes. So it's easy with some of them that have just freeform, like you're just logging in a, you know, JSON structure; and then those that are strings, that's easy. But when it's a combination, how do we handle that?
J
So perhaps it would be helpful... I know this discussion is coming along, and at some point we should move on, but just to put another thought out there: perhaps it would be helpful to take a step back from what we currently have as the data structure and just articulate sort of all the buckets that people have asked for, right? And then we have, like, the superset of what we're going to end up with, and we can think about how we combine things and what the trade-offs are that we're making, right?
J
So, to me it's like: we have, at the top, resource, which identifies where the log came from. Then we have data that has been semantically interpreted. Then we have structured but arbitrary, sort of free-for-all. We have, you know... I know it's been said message is not necessarily a field, but some people would want just a string field that is sometimes going to be there, the message, sometimes not. And then I've even heard, at some point: we want the entire raw log, no matter whether it's been parsed up or not.
J
We want to preserve that. So, like, if we had this fully fleshed-out data struct, it might include all of those things, but probably that's too much and we need to narrow it down. But as a starting point, I mean, is that roughly how everyone would classify all the types of data here? And certainly there are a few other fields we know, like severity and timestamping things, but I don't think there's any debate about those.
Q
So you already get it as a nice consumable thing, which is nice; and then, indeed, there's a subset of those where the semantics are also known, right? And, you know, if you happen to actually know those semantics, and we want to build something on top of that... again, from a backend perspective, to latch onto that, like, making it as easy as possible would also be good.
Q
That seems like something one could probably somehow finagle and actually explain as a recommendation to people as to what they should do, right? I think we're not, like, super far from that, right, you know. And then, again, you know, people who are emitting this stuff are still free to take the whole thing and club it into the body as well.
Q
I know we all suffer from, you know, data overload and all of that, but, you know, so if you want your Zap body to be the whole thing in some sort of serialized format, with, like, KVPs that are, you know... there's a name for that, I forgot what it is, but basically, you know, KVPs with, like, a space in between... so be it, you know.
Q
Q
B
If somebody can, that would be great; I'm not sure I can summarize what we discussed properly, it's still quite unclear to me. But, sorry, guys, I want to call time on this; we have a couple of other items to discuss. If you don't mind, maybe you can comment on the GitHub issue, if you have thoughts, if you want to summarize your own thoughts there. So, if you don't mind, let's move to the next one. Okay, so the next one is about the encoding.
B
J
Yeah, so until recently this was a concern only of the file input, or the filelog receiver.
J
J
The question here really is... okay, so the current behavior, which I think is the default behavior that most users will find, is that we are basically not assuming any encoding; we're just reading in bytes and casting that to a string. Turns out this is a bad approach, because protobuf requires that that string is UTF-8 compatible. Really, I haven't personally run into this a lot, or seen that it's a problem, but it certainly can be. And so that's sort of what we've been calling the nop encoding.
J
Or, you know, this is just the default behavior. There's, you know, this other... I think probably the right approach here is: we can assume that we need to just sort of escape those, just use UTF-8 encoding. It just does a replacement on any invalid characters: it replaces them with a specific character, called the... what's it called, the something-replacement character, I can't remember the name, but anyway, it's just one specific character that just obliterates whatever was there.
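For reference, Go's standard library already provides exactly this behavior: strings.ToValidUTF8 swaps each invalid byte sequence for the Unicode replacement character (U+FFFD), which is the character being reached for above.

```go
package encoding

import (
	"strings"
	"unicode/utf8"
)

// sanitize returns a guaranteed-valid UTF-8 string, replacing any invalid
// byte sequences with U+FFFD (utf8.RuneError).
func sanitize(raw []byte) string {
	if utf8.Valid(raw) {
		return string(raw) // already valid, nothing to do
	}
	return strings.ToValidUTF8(string(raw), string(utf8.RuneError))
}
```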
J
That's kind of one question that we have here. And, I think, maybe even just taking a step back, there's probably an assumption to validate here: that we even should be reading this data in and converting it to strings, because we could very well just leave it as bytes by default and let it be up to the downstream to interpret.
B
So I think we're looking for your thoughts on this. Then, the default today is nop, right, that's the default, but it tries to put that data into a string, correct? Which cannot really be a nop; there's no way for this to be a nop, right? You have to do something for it to be a valid Unicode string. That's the problem we have today, I guess; one of the problems.
J
Correct. I mean, just to clarify, it's not necessarily an invalid string, and typically it's not, but it can be; it can be an image, for instance.
B
That's the question: do we keep nop as the default and sanitize the input? In which case nop sounds kind of like a wrong name for what we do, right, because we do things to the data. Maybe in that case we change the default to be utf-8, and in that case it's completely valid to do this sanitization we do and put it into the stream. And for nop, then, we change nop to actually be a no-op, but then put the data into the bytes, right?
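To make the two proposals concrete, here is a hedged sketch in Go; the decode helper and the behavior mapping are illustrative only, not the receiver's actual API:

```go
package main

import (
	"fmt"
	"strings"
)

// decode is a hypothetical helper showing the two behaviors under discussion.
// "nop" leaves the data as raw bytes, destined for a bytes-typed body;
// "utf-8" produces a guaranteed-valid string, replacing bad sequences.
func decode(encoding string, raw []byte) interface{} {
	switch encoding {
	case "nop":
		return raw // a true no-op: downstream receives the original bytes
	case "utf-8":
		return strings.ToValidUTF8(string(raw), "\uFFFD")
	default:
		panic("unknown encoding: " + encoding)
	}
}

func main() {
	raw := []byte{'h', 'i', 0xff}
	fmt.Printf("%v\n", decode("nop", raw))   // [104 105 255]
	fmt.Printf("%q\n", decode("utf-8", raw)) // "hi\ufffd"
}
```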
J
Yeah, I think that the question of raw versus nop is just a naming question; maybe to a user one of them might be clearer.
B
I guess, okay, okay. So, just to make it clearer what exactly it does or does not do, just to reflect the fact that it doesn't do anything at all with the incoming data: it just puts the bytes, as it sees them, into the bytes value which we just introduced. Correct. But okay, if we do that, what is the default in that case? Do we still keep nop, or raw, as the default, or does utf-8 become the new default?
J
Certainly we want feedback from the community here, but you know, utf-8 would mean that it behaves as it does now, or as we expected it to behave now, basically. I don't know how much of a priority that is, but.
B
Perhaps we could also do auto-detection. There is this notion of a byte order mark in Unicode; files can start with that, so we could technically try to detect it. I don't know if we support all the other Unicode encodings today; if we do, we could use that for this purpose. But anyway, all I'm saying is that it does sound like a reasonable approach to me that filelog in particular, by default, assumes that the files it is reading are UTF-8, and it's still configurable.
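For reference, a minimal sketch in Go of the byte-order-mark detection being floated; the helper name is hypothetical, and this is not how the collector currently selects encodings:

```go
package main

import (
	"bytes"
	"fmt"
)

// detectBOM inspects the first bytes of a file and reports the Unicode
// encoding implied by a byte order mark, if one is present.
func detectBOM(data []byte) string {
	switch {
	case bytes.HasPrefix(data, []byte{0xEF, 0xBB, 0xBF}):
		return "utf-8"
	case bytes.HasPrefix(data, []byte{0xFF, 0xFE, 0x00, 0x00}):
		return "utf-32le" // must be checked before the utf-16le prefix
	case bytes.HasPrefix(data, []byte{0x00, 0x00, 0xFE, 0xFF}):
		return "utf-32be"
	case bytes.HasPrefix(data, []byte{0xFF, 0xFE}):
		return "utf-16le"
	case bytes.HasPrefix(data, []byte{0xFE, 0xFF}):
		return "utf-16be"
	default:
		return "" // no BOM; fall back to the configured default
	}
}

func main() {
	fmt.Println(detectBOM([]byte{0xEF, 0xBB, 0xBF, 'h', 'i'})) // utf-8
	fmt.Println(detectBOM([]byte("plain ascii")))              // (empty)
}
```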
J
Okay, there is one additional complication we should think about, and maybe this is something we can take offline if it isn't relevant. If you are passing raw bytes through and then you process that through one of the parsers that we have, let's say the regex parser, I think there's currently an expectation about that value. I think it does handle it if it's bytes, but it will just cast it to a string and process it as a string, yeah.
J
So then this is sort of a way where these invalid characters can sneak downstream into the pipeline: they could end up getting cast as strings. So maybe this is just, you know, we have to be aware of what the setting was and cast it all back to bytes. That's something, I don't know; we have to figure out what to do with this.
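A small sketch of the leak being described, assuming a parser that simply casts its input: Go's string([]byte) conversion never fails, so invalid bytes ride along into downstream string handling unnoticed.

```go
package main

import (
	"fmt"
	"regexp"
	"unicode/utf8"
)

func main() {
	// A raw record that starts with a byte that is not valid UTF-8.
	raw := []byte{0xff, ' ', 's', 'e', 'v', '=', 'e', 'r', 'r'}

	// The cast always succeeds; the invalid 0xff byte is preserved verbatim.
	s := string(raw)
	fmt.Println(utf8.ValidString(s)) // false

	// A regex parser can still match substrings of it, so the invalid
	// byte quietly survives into whatever the parser emits as a "string".
	re := regexp.MustCompile(`sev=(\w+)`)
	fmt.Println(re.FindStringSubmatch(s)[1]) // "err"
}
```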
B
I guess there are two options there, right? One is to actually restrict the usage of the parsers to text data: if it's already UTF-8 encoding, or any other text encoding, you can use the parser; for raw data, either you cannot, or you explicitly have to have an operator which converts the raw data to some text. In that converting operator you specify what the expected input encoding for your raw data is, and only after that can you apply parsers as the second and subsequent steps in the list of operators.
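A sketch of that second option, an explicit conversion step with a declared input encoding ahead of any parser. This uses the golang.org/x/text encoding packages; the two-step pipeline here is illustrative, not the stanza operator API:

```go
package main

import (
	"fmt"

	"golang.org/x/text/encoding/charmap"
)

func main() {
	// Raw bytes in ISO-8859-1 (Latin-1): "café", where é is the single
	// byte 0xE9, which is invalid as UTF-8 on its own.
	raw := []byte{'c', 'a', 'f', 0xE9}

	// Step 1: a converting operator with a declared input encoding
	// turns the raw bytes into valid UTF-8 text.
	decoded, err := charmap.ISO8859_1.NewDecoder().Bytes(raw)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%q\n", string(decoded)) // "café"

	// Step 2: only now may text parsers (regex, etc.) run on the value.
}
```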
B
There's one thing that you also mentioned there, which was that we don't want to support multi-line with raw data. That kind of matches exactly what you said, because we can't make assumptions about the encoding, and there is no way to do the pattern matching, the regex matching, in that case. But does that mean that we don't do any event breaking at all on the incoming data and just limit by size? Or do we still do the newline matching by default?
J
That's true. So maybe, as opposed to multi-line, it's more simply a notion of a byte sequence that you would like to split on. Yeah, okay, I'll give some thought to how we should support that. Maybe it fits into that multi-line option, or that gets renamed or something, or maybe it's just a separate parameter that's only valid in this circumstance. But this seems like something we could solve.
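A minimal sketch of what splitting a raw stream on an arbitrary byte sequence could look like in Go, using a custom bufio.SplitFunc; the delimiter and helper are examples, not a committed design:

```go
package main

import (
	"bufio"
	"bytes"
	"fmt"
	"strings"
)

// splitOn returns a bufio.SplitFunc that breaks the stream on an arbitrary
// byte sequence instead of assuming newline-delimited text.
func splitOn(delim []byte) bufio.SplitFunc {
	return func(data []byte, atEOF bool) (advance int, token []byte, err error) {
		if i := bytes.Index(data, delim); i >= 0 {
			return i + len(delim), data[:i], nil
		}
		if atEOF && len(data) > 0 {
			return len(data), data, nil // final record without a trailing delimiter
		}
		return 0, nil, nil // request more data
	}
}

func main() {
	// Records separated by a two-byte delimiter rather than '\n'.
	stream := strings.NewReader("record1\x1e\x00record2\x1e\x00record3")

	sc := bufio.NewScanner(stream)
	sc.Split(splitOn([]byte{0x1e, 0x00}))
	for sc.Scan() {
		fmt.Printf("%q\n", sc.Text()) // "record1", "record2", "record3"
	}
}
```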
B
You likely know something about your log: that you don't want it to be collected as one byte string, you want it to be collected as records. And if it's records, I think it's not too much of an assumption to make that you know the encoding of your file, if you know that it is composed of records. Maybe I'm wrong. But what I'm saying is that somehow the raw mode seems to be applicable to the byte-stream support, and less so, less applicable, to the other use cases. But maybe there are some.
B
I just can't think of really good use cases where you still want to do the splitting by some sort of delimiter, whether it's newline or something else, but you don't know what the encoding is, so you want to preserve the original data as bytes and not convert, not interpret, it in some sort of encoding.
J
Yeah, I mean, kind of interpreting what you're saying, if I'm understanding it right, it seems like there's a decent case to be made for just, you know: if you just want raw bytes, you probably don't want any interpretation or any processing. Yeah, yeah, right, you're just shipping, but it's.
B
I guess what I'm asking is: if you do raw encoding, do you also want to do any sort of breaking into log records? Do you care? Is there even a use case like that, right, for what we were just discussing: let's allow breaking into events by some sort of byte sequence.
B
I mean, yes, we can do that, but are we inventing something that has no use case? Maybe. That's what I'm trying to understand.
B
Jonah, do you want to give a quick update?
I
Yeah, it's a little late for the folks in Israel and there's kind of a lot going on, so the engineers on our side that are working on the SDK implementations: progress is going pretty well, with a couple of people sort of working part-time on it. We have all the tasks and stuff mapped out in terms of what they're going to be building, and we're making some progress there. So hopefully you'll start to see some pull requests soon.
I
Some code coming in; the work's definitely underway at this point. Just a little update, that was all. That's for Java, right? Yes, we're doing that in Java, but in a modular way where you'll be able to separate out those different pieces, even though we'll likely implement it on Logback or Log4j.
B
I guess that's my mistake. Maybe, Jonah, what's the best way to connect David with the engineer who works on this?
I
I can facilitate it. David, if you want to just pop your email into the Zoom chat, I can grab it and get a discussion going and provide some more details. Cool, thank you. And you, you did the early implementation?
B
Okay, one other thing that probably would be useful to do, before maybe you start submitting the PRs with the implementation: it would be great to join the Java SIG meeting, kind of to prepare the maintainers that this is coming, so that they know what to expect. I did talk with John Watson, one of the maintainers, but it was a couple of months ago, so it would be good to kind of refresh this discussion so that the expectation is set and they know that this is coming.
B
All right, and there are no more items in the document. Does anybody have anything else to discuss?
Q
Yeah, yeah, so we're going to race this, maybe until early next week, to see who is the first person that finds the 25th hour in any given day. And then if the race doesn't result in anything, then we have to go talk again. But like, yeah, okay, we're.