From YouTube: 2022-09-28 meeting
Description
Open cncf-opentelemetry-meeting-3@cncf.io's Personal Meeting Room
C: Okay, I am going to demo a little web application that I wrote, currently codenamed Otto. I've been asked to demo it to this group.
C: There's currently an issue for it, and there's a draft PR open against contrib, but it's quite experimental, and we're just trying to get an idea of where we want to take it, if we want to take it anywhere at all. So let me just go ahead and run it.
C: And I have a Redis instance running locally, so we'll get some metrics from Redis as soon as it starts.
C: Let me go to localhost:8080, and here is the interface. This is just kind of my pet project that I've been experimenting with, and I've been given some leeway to build it out a little bit. Basically, it's to make creating a collector config easier, and to start giving folks a way to visualize data as it goes through the pipeline.
C: So here we have all of our metrics-capable receivers, and I'm going to select redis, since I have Redis running locally. When I do that, it makes a call to the server to introspect the redis receiver config. It also gets some godocs and types, which it puts here on hover, and this can be arbitrarily deep: I can click into these if I wanted to, and you can keep clicking forever,
C: obviously, and build out all the different levels it picks up. It also creates an instance: it uses the factory to create an instance of the config for the component, looks at all the types that were pre-populated, basically all the defaults, and fills those out for you. So, for example, TCP here is the default for transport.
C
10
seconds
is
default
for
collection
interval,
so
I'm
just
going
to
put
two
seconds
here
for
the
collection
interval,
I'm
going
to
click
apply
when
I
get
rid
of
this,
because
we
didn't
do
anything
and
I
will
start
the
receiver.
C: So what this does is it actually starts the receiver here on my local machine and puts it into kind of a fake pipeline on the back end. Then it also opens up a websocket and sends these metrics back out to the browser, and the browser presents all the metrics here.
C: You can click on these rows if you want. Each of these rows represents one message from the component, so I'll click on this row, and right now this is just a JSON dump of that one message. And just to show you the next possible step, I'll grab one metric out of here. Oh, and by the way, these columns represent metrics, and if you hover over them, you get the full name.
C: The zero-value columns are kind of pushed over here, because they're not very interesting. Now select a processor; let's do a filter processor, and we're in a metrics pipeline. So we click that, we're going to include, and match type: strict. This to me is a little bit of a failure, that I had to type this; if it's an enum, I'd like to be able to make it selectable somehow. Have you seen whether or not that's possible?
C: So in this case I'm going to select the metric name, all the way down, and now I have a config for this filter processor, and I can start it. So the filter processor, as you just saw, I selected only one metric name. So even though we're getting a bunch of metrics here above the filter processor, it's only spitting out one metric.
C: So here we see the data right now, and then, if I'm happy with this, I can click the generate collector YAML button, and it basically just puts all that stuff together. You can copy and paste your collector YAML and, you know, put it in production or whatever it is you want to do. So yeah, that's pretty much it. I can show you a logs pipeline and a traces pipeline, but I figure you can sort of fill in the blanks.
C: So do you guys want to see that, or do you have any questions?
C: So, let's see here: logs.
E: Sorry, Pablo: can you define two metrics pipelines?
C: No. Okay, so right now, I mean, this started out as kind of a proof of concept and I sort of built it out. I think if people like it, if we can demonstrate that it has value, I think my company will support me working on it more, and I would love to be able to do different pipelines, multiple receivers, processors, and exporters. All that good stuff.
F: Yes, sorry. For each of the things you're choosing, like each receiver and things like that, how does it know what to configure?
C: Yes, so what it does is it basically uses reflection on the back end. It's got an inventory of all the components, and then it uses a library that I wrote a while back called configschema, which will basically give you all of this information in a data structure. So it basically makes a call to the back end.
C: The back end does the introspection, grabs all that, creates a struct, and sends it out to the browser. Let's see, so we're going to make a log attempt, my log. This is still a little bit raw, and there are definitely some rough edges here that I'd like to get cleaned up. For example, this text here is a little bit wonky.
C: That's a good question: it uses vanilla JavaScript, so no framework, nothing, just vanilla JavaScript. And let's see, I'll start this receiver.
C: I would love to be able to do that: to be able to import a config, fill all this stuff out, and then let people test it and make modifications and stuff. But yeah, all that is going to require some time. I would love to be able to do that. I'm hoping we can demonstrate value here and get my management folks on board to keep going with this.
C: If you want to take a look at it, there's a draft PR out. It's a little bit large, but I'd be happy to get feedback on it. I don't even know if it should live in contrib at all; maybe it should live in its own repo, I'm not sure. But yeah, definitely interested in feedback.
C: I know folks have mentioned on the issue for this, for Otto, that they would like, or possibly prefer, a command-line interface, a CLI, to do basically the same thing. I actually wrote a CLI version a while back, but I don't think I ever merged it. They're definitely not mutually exclusive; we can do both if we want.
D: Thank you. I think what you have here is really cool, Pablo; it's really nice and impressive. One question that I do have is: what kind of user workflow do you foresee for this tool?
C: I really don't know. That part is kind of the hard part.
C: Personally, I don't run a collector; I have not gone through the process of starting up a project, getting a collector running, and going through all those steps, so I can only guess. But as far as the workflow, I kind of envision a relatively new user who doesn't know very much about the collector, wanting to just play around with it, see what the collector is capable of, see what all the components are and what they can do, and probably create a relatively basic config.
C: That would be a good starting point for what they want to do. So I would say, focus more on newer users who aren't familiar with the collector.
C: So right now this is a single-user application. I run it all the time, daily, just from my local repo. I don't expect users to actually check out a collector repo and have Go installed and all that stuff.
C: I mean, we could put it on Amazon or something and just point people at it and let folks play with it.
C: The running-pipeline part is a little bit weird in that environment, and that's another thing that I would like to do: I would like to separate the configuration part and the running part. I kind of feel like those two should be separate; in the UI they're kind of mashed together here, because this is sort of an experiment.
C: I still think that even though they're together, this thing can provide value. But long term I'd love to separate those two, and possibly host at least the configurator part somewhere, so folks don't have to actually install anything.
G: How does this deal with different sets of components? If you were using the core collector, you wouldn't have all of the components that are listed here available to you.
C: Yeah, that's right. So right now this is tentatively living in contrib. But, for example, at SignalFx we have our own distribution; we could run it out of the SignalFx distribution. We would basically have to write some wrapper code to load up all the available components and send them into the Otto server when it starts up. So yeah, it's definitely possible.
F: One way to consider it would be: if it were part of the built collector, then it could use introspection on the collector to say what components it actually has available, and the configuration could be limited to those things that are actually in the installed collector. So it could be an optional feature when you build a collector, to say: I want this interface available to me, and then you're basically querying the collector for its own configuration builder.
C: Yeah, I think that would be very cool, if that's an expansion.
C: I mean, it's hard. Maybe you could load the config of the local collector and update the config. As far as seeing the data flow through on an existing collector, I think that's doable, but it would require some coding. We could hook into the pipeline and grab that data.
C: That's very innovative; I think that would be really cool. Obviously we would need a way to reduce a potentially huge amount of data, but yeah.
C: If not, then thank you for the feedback and for watching; I appreciate it very much.
B: It was awesome. I'm curious: what do other folks in the community feel about where this should live? Should it live in contrib? My first instinct is that I'm terrified of having a bunch of JavaScript code living in the contrib repo, because it already has a very large surface area; I don't really want to add another language into it. But that's kind of my first instinct, more of a knee-jerk reaction, really.
A: It could be consumed by contrib as a library or something, because, as we discussed previously, it should be tied to the collector builder to be able to inspect whatever is there. So something like that, maybe. But I don't feel bad about putting it in the same repo, to be honest.
G: So I think there are parts of this that could live in core or contrib: the parts related to providing that configuration API for the application to use. I think the web-application portion of it is what deserves to live in a separate repo, and then finding where to make that cut, as far as what goes in core or contrib and what goes in the web app, we can then refine.
C: I'm hearing that generally people like the idea of a different repo. The configschema code is already in contrib, so that's already taken care of. Sorry, somebody was talking?
D: I was just saying that hosting the web application on some website, opentelemetry.io or something else, would work only for the configuration part, not for the running part, not for the testing part. I think we do have a couple of UIs already: we have pprof and we have, what is the name of the other one, zpages?
D: Yeah, so we do have some UI components in there already. I don't think the UI code itself comes from those components; they just consume libraries that provide that UI. So perhaps the same could be thought about here: perhaps you could extract the UI parts into its own module, in its own repository.
D: Then, I don't know, we could just use go:embed to embed those into a Go file, and then we can just consume that within an extension in contrib.
D: So that would be in line with how the other UI parts are done right now; I think that should be acceptable. I like the idea of having that as part of the extensions, meaning as part of the contrib repository, but I also don't like the idea of the build breaking because some JavaScript version of something is outdated or something like that.
D: Yeah, as long as the UI itself is built outside and we can version and consume it separately, I think it's fine to have it in contrib.
C: Just to be clear, there is no build step; it's just a bunch of static JavaScript files. There's no npm or anything like that, no dependencies. But yeah, definitely, I think it's fine to put it in a separate repo.
D: But will there be, does it make sense in the long run to have a more complex UI? Would it make sense to use some library of some sort? I don't know, I'm not a front-end developer, so I can't answer that question, but I could imagine that it's not going to stay that simple; maybe only until step two after the MVP is done. You know? All right, sounds good.
H: I guess the call that we need to make is whether we want this to be a separate repository, and if it's a separate repository, who would be the maintainers of it. We definitely don't want it to be a repo that is maintained by a single person, Pablo in this case, so we need at least one more person who is interested in being a maintainer of it. I think we can take that discussion offline.
H: Feel free to comment on the issue, but generally I like the possibilities that this idea brings, and we can evolve it in a number of ways. I like it.
C: You know, it was just kind of a code name: OT is for OpenTelemetry, so I wanted something that was related, and then Otto is kind of a pun on "auto". And it's a small, easy, tiny word. Happy to change the name; that's just kind of a placeholder. Okay.
D: All right, so from one Pablo to another: Pablo had a comment here. You have a request for comments, I think?
B: And one of them at least involves something that could be a breaking change, so I wanted to get some feedback from everyone before I do anything, both on whether we should do it and also on how to go about it. So yeah, please comment on the issue.
D: All right, any comments or any points that we should be discussing here? I think we have a couple of comments there already, from me and from Anthony.
D: Because, you know, if we just bind to a local IP, it's not going to be exposed to the host from the container, so containers would probably just break. I believe Kubernetes would also break, because we have to bind to the public IP and not the local IP, and I think just binding to the public IP is as bad as binding to 0.0.0.0.
D: Yeah, so how do other projects do it? I guess containers would not need to break if we, I don't know, expose that behind a feature gate, and if we can set the feature gate using a command-line flag, we can change the containers to just set this flag by default, so it wouldn't break those users.
A: I just want to say that it's mostly about user experience. Most users just set otlp as a receiver, and that's it, nothing else; they rely on the defaults, and for most of them it will be a breaking change. So if we want to change it to localhost, I would probably suggest making the endpoint a required field first. So whenever they upgrade: hey, this field is now required, you need to update your configurations. Then they specify whatever they need, whether it's 0.0.0.0 or localhost.
A: After that, we might not even need to change the default at all, if it's a required field. I mean, making it required will be less surprising, at least, instead of changing the default.
D: So that brings another problem in itself. I mean, as a user, if I see that endpoint as a required property, then what should I add as a value? If I'm in a containerized environment, I don't know which IP is going to be assigned to my container, so I just cannot use a specific value; I have to use 0.0.0.0. So, you know, I still have the same problem as before; it's just the container environment. An environment variable, right, you could use an environment variable. That's true, yeah.
D: Containers typically expose it as an environment variable. Not for hosts, though, typically. Or is it? I mean, perhaps for cloud hosts, yeah.
D: No, it's not, right, we cannot. And as a user, I think I would just use 0.0.0.0 anyway. So I guess the question is: is it a CVE/CWE problem, a collector problem, or is it my problem? Now it becomes my problem, right? Yes.
D: No, sorry, go ahead. No, I was just going to wrap it up: I think we're just trading who the owner of the problem is. Is it us, by providing the full value, or is it the user, who would then be assigning a value that is easy for them, like 0.0.0.0? I don't know; I don't think it's bringing any real, tangible benefits to users, other than a potentially bad user experience.
D: I don't know. I mean, I'm all for secure-by-default settings, but in this case, I don't know, it looks like we're trading too much in the name of security.
B: So what if we work on providing a warning if someone leaves the default of 0.0.0.0, and we add a section to our best practices about configuring an endpoint?
B: So let's say I'm running the collector in my container and I see that warning, but then I don't change anything, because I need to put 0.0.0.0. Would I have a flag in the configuration to silence the warning, or how would that work? Would I always have the warning, but just ignore it?
G: I would think just always have the warning and let users ignore it if it's not applicable to them; otherwise we're adding yet more logic that could introduce yet more defects.
D: Ideally it's actionable, so ideally there's a link on the message itself to a documentation page from us, stating why it's a bad idea to bind to 0.0.0.0 and what they can do to make it safer.
B: Okay, yeah, so I'll try and start working on those two, then. Well, if we change our mind about the default later, we can always do it. Makes sense.
H: Yeah, this is a new issue, and it was opened by a person who's implementing a config provider that fetches the config from a remote source. They found that we have a few limitations there in how the providers work. Number one: you don't know if the configuration that you've returned is actually valid, whether it's going to result in the service properly accepting it or failing to load, and when we fail to load, it can be pretty bad; it can even terminate the collector. So number one is:
H: is there a way to validate the result and stop applying the configuration? And not just stop, I think we stop applying if we detect it and retry, but also to let that component know that the config is not valid.
H: The reason to let the component know that the final resulting config is not valid is that the component may want to persist or cache that portion of the config locally, something it receives from the remote source, so that when the collector is restarted, when the process is restarted, the next time it can load the config immediately from the cache, without needing to wait for the remote source to reply. So that's a possible option: somehow notify the provider that what it provided is good.
H: We have this merging logic, and the effective config is necessary in cases where the provider also serves as a sort of status-reporting facility, which is the case with OpAMP, for example, which allows both receiving a configuration and reporting the state of the agent. So a provider that implements the OpAMP protocol will need a way to know what the end result of all of these operations is. We're missing these capabilities, essentially, to different degrees; validation is partially there.
H: We just don't notify about it. But validation also depends on the fact that we only validate what can be pre-validated, essentially: what the components can tell us before the start, in the start function. There's also a portion that can happen after the start, asynchronously; I don't think we can do much about that at the moment. So maybe that's a little limitation that we have to live with for now, or maybe we can do something about it. Anyway, long story short:
H: What I'm looking for here is some opinions and thoughts from the approvers on what we can do about this. I posted a possibility of extending the provider with a notification function; I'm not sure it completely solves the problem, though, so maybe there is another way to do it. I wonder if there are any other thoughts on this.
H: Correct. In our implementation we have the file provider and the environment provider; I think those two are always active. They would ignore it; they don't care about it. But a remote provider would use that notification to do two things: to save the most recent config it received in a local cache, and to report the effective configuration to the destination, wherever the status reporting needs to go. Those two things.
G: Especially for the effective-configuration reporting, that would be super helpful. We've also got an HTTP provider, and there's a PR for an S3 provider in contrib; those are remote sources. They don't do that sort of caching right now, but they certainly could, yeah.
H: I am also not sure whether this should be one function that notifies about success and failure and also tells what the effective configuration is, or whether it needs to be two separate functions, because the effective configuration may also be available before, I guess, the providers merge the data. I don't know if they need to know about the effective config; probably they don't, I guess, and maybe that's sufficient. So I don't know if we need it to be one function or split into two separate things.
H: That's also, I guess, a question. So I'm looking for some brainstorming, maybe some options here, some thoughts on what people think is the right thing to do. As it is now, the provider interface, I think, is incomplete, at least in terms of what we wanted to achieve eventually with the remote sources.
I: That's me. Hi, I am here briefly to talk about an OTEP and an issue I filed yesterday related to the pdata package.
I: I'll try to keep it quick. Basically, there's an OTEP that's been researched and finished for a while; I'm actually looking for approvals on that, by the way. This is to add an Apache Arrow protocol alongside OTLP that can do a round trip successfully, exactly so that we can get columnar compression into OTLP.
I: This is an experiment, and I've begun working on it, and I wanted to give a little bit of a report; I wrote it up quickly in the issue linked here. For one thing, I have this code base: it's several thousand lines of code, written by, I will say, a non-Go programmer.
I: I want to say first that it helps me structure the code and avoid the pattern that I saw a lot of, which is: build an object, build an array, set the array in the slice. It forces you to re-architect your code in a much more natural way, I think, by forcing you to allocate your objects before you fill them, and not copy.
I: I would like to be able to use this library in the otel Go SDK eventually, and right now I have a functional piece of code, thousands of lines of it, that deals with OTLP protocol objects. For me to use this in the collector, which is my objective, I either need to rewrite it using pdata, which I've mostly finished and which I think is the way to go; but at that point I've burned a lot of time, like a week, on something that was already working with the protocol objects.
I: As you know, it's impossible for me to just use your protocol objects: they're internal, and I'd have to be one of the privileged packages to get at them, which I understand.
So one of my solutions here was to take these thousands of lines of code and bake them into the pdata internal hierarchy somehow, or the pdata hierarchy itself, which, again, gets me further and further away from my objective, which is to have a generic converter for OTLP protocol objects that you could use essentially anywhere. And I wonder if this has been discussed.
I don't want to take too much time; I'm sure people have asked whether you could take the pdata API generation, pull it out of the collector, and maybe have a way to generate pdata-like APIs for new protocols.
I'm not actually here to ask for that, but that's one of the solutions I could imagine.
Let's just say, I really like pdata. I can see the otel Go SDK being forced to use it; the infectious-dependency problem isn't so bad.
I: Actually, if we can produce OTLP bytes cheaper or faster using pdata, eventually that'll be a really good sell for the otel Go SDK, and so pdata is all right now.
The real question I had when I came to this, which may have been discussed elsewhere, is about copying. I think the goal of pdata is essentially to constrain you, to avoid mistakes.
I: It forces you to copy data and never alias; never pass in byte arrays, for example, that you might modify later. But it does mean that I'm making a lot of copies on my ingest path. Now, if I just put this code in the pdata internal hierarchy, I could actually bypass all that and just provide you with protocol buffer objects of the correct package.
H: I think so. So, first of all, we did discuss in the past the possibility of extracting pdata as a separate library. Maybe we'll do that in the future, but who knows when. So you have a specific goal, and I'd like to understand what that specific goal is. Are you looking at implementing only an exporter, which converts pdata to the columnar format so that you can send it over the network in a more compressed way, or do you also want the receiver implementation to support the new format?
I: Yeah, so this is called phase one in the OTEP. It would be a receiver and an exporter that you should be able to use to create a bridge. So you could imagine your edge collector on your network receiving standard OTLP from all the SDKs; it then compresses it to Arrow on the output. There's a receiving end that re-encodes it as OTLP and sends it on to your infrastructure, over the wire.
I: We expect much better compression; we're talking 70 to 90 percent here, compared to the 30 to 40 percent we're getting from gzip or Snappy.
H: For compression, I'm guessing the most benefit we'll see here is from the exporter portion, so that on whatever is your last leg of delivery, typically from the customer's side to the vendor's side, that's where the network costs are incurred. If you had an exporter that supported this in the collector, that would serve that purpose; you technically don't need a receiver to support that format as well. You only need a receiver if you actually have another source that produces the columnar format of data.
H: I absolutely understand, yes, I get it. But as a first step, I guess, in the collector, we could maybe start with the exporter and then figure out the efficient implementation of the receiver. You're right, this probably needs to be baked into pdata, or somehow we expose non-copying operations from pdata so that you can implement it more efficiently. I don't know which way we'd go, so until we figure that part out, I think maybe we could start with the exporter implementation.
I: I think you're right. I'm actually going to start with both, because I don't feel it will be a success without that. But I can take pdata; the copies aren't so bad, really. The compression is still going to be great; it's just a question of how much CPU and memory and so on. So I meant this to sort of point out what I see in the future. So thank you, you're absolutely right.
My only other point, quickly, was that I saw a little inconsistency with how pcommon maps are handled. As a user who wants to compute a unique signature for my data, I need to find a way to de-duplicate. Now, I can sort the data, because you have a Sort method, but I'm forced to range through the entries to build my own copy of the map, and if I want to de-duplicate, there's no way to do it in place, essentially. Anyway, I'm finished with all my feedback.
I: It's very early days to figure out what we want to do. I will have a copy using pdata, and we will ignore the efficiency issues for now, yeah.
H: Anyway, I think I support what you were suggesting: that there is one implementation of OTLP that you can also use as a vendor, one receiver implementation, or one implementation of OTLP decoding or whatever we call it, that is used by the collector receiver and by vendors who implement receiving OTLP. And, not by necessity, but I think as a good possible option, it relies on pdata, with pdata as a separate library. I think that's probably the most viable option at the moment.
H: So we should probably think about how we do that. In the meantime, I guess, yes, you're right, you could do the implementation which is slightly less efficient, which does a bit more copying than is necessary, but it will work; you can make it work. It can be good to demonstrate that it actually works, and to prove and show how much it saves. It will be another strong argument in favor of moving towards that particular solution, and then we'll make the case that, okay, let's extract pdata. By then pdata will be finalized; it's work in progress right now, Dmitrii is working on it, and once it is stable, it will be easier to extract and support as a separate library. Sounds good.
D: The follow-up question to that, and to some of the things that Joshua said, is one I had yesterday as well. When implementing an OTLP receiver for logs, I was asking myself whether I should use what we have under the spec repository, the OTLP protos: there we have the protobuf bindings and a collector service, but we also have a collector gRPC service for pdata.
D: So we have two versions of the collector gRPC servers, and it's not clear to me, and I guess I know a little bit more than regular OTLP users, what I should use in the end. I know it's pdata, but I think we should be very clear in the documentation about what people should be using and in which cases. Are there cases where the regular spec protos are preferable to pdata? You know?
H
So pdata is a separate module already, right; it's its own module, it's just in the same repository as the core. Yeah, I don't know if that's fine long term; maybe it is. If it is, then I guess the only thing that prevents you from adopting pdata more widely on the backends, for example, is that it is not actually declared stable, because we're very actively breaking it right now, so I wouldn't want anybody else externally dependent on it just yet.
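For context, pdata being its own Go module means a backend could in principle depend on it without pulling in the Collector core. A hypothetical go.mod for such a consumer might look like this (the module name and version are illustrative, not a recommendation):

```
// go.mod for a hypothetical backend that only consumes the data model.
module example.com/mybackend

go 1.19

// pdata is published as its own Go module, separate from the Collector core.
require go.opentelemetry.io/collector/pdata v1.0.0
```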
I
The OTel Go group has published opentelemetry-proto-go, which is like a compiled form of the protobuf. Ultimately, the rest of the community wants to use that, and it would be nice to see the Collector accept that, and it would be even nicer if you didn't have to serialize that data and deserialize it to get it into the collector pipeline.
I
The question for me, and I don't know the answer, is whether there's a goal to take pdata, the API, and start generating protobufs raw, without a protobuf library, which I think is extreme, but I've seen it happen many times and I would expect it one day. And if that's the case, then there will be a pdata, sort of, raw format.
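For context on what "generating protobufs raw, without a protobuf library" means, here is a minimal stdlib-only Go sketch (not Collector or pdata code) that hand-encodes a single varint field the way a generated marshaler would, with no protobuf dependency:

```go
package main

import "fmt"

// encodeVarint appends v to buf using protobuf base-128 varint encoding:
// seven payload bits per byte, high bit set on all bytes but the last.
func encodeVarint(buf []byte, v uint64) []byte {
	for v >= 0x80 {
		buf = append(buf, byte(v)|0x80)
		v >>= 7
	}
	return append(buf, byte(v))
}

// encodeVarintField hand-encodes a varint-typed field:
// the key is (fieldNumber << 3) | wireType, where wire type 0 = varint.
func encodeVarintField(fieldNumber int, v uint64) []byte {
	key := uint64(fieldNumber) << 3 // wire type 0, so no bits to OR in
	return encodeVarint(encodeVarint(nil, key), v)
}

func main() {
	// Field 1 set to 150 encodes as 08 96 01, the classic protobuf example.
	fmt.Printf("% x\n", encodeVarintField(1, 150))
}
```

The appeal of doing this from a pdata-like API is skipping the intermediate generated structs entirely; the cost is maintaining wire-format code by hand.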
E
Yeah, hopefully this will be a quick one. That's me; let me share my screen. I've been doing some proposals for metric names and implementing them in the host metrics receiver, and I noticed that the memory metric names in the host metrics receiver are different from the semantic conventions. The semantic conventions are process.memory.usage and process.memory.virtual, but what does the host metrics receiver use?
E
It uses these two, physical usage and virtual usage. And I want to add a new one, process.memory.utilization, which is a percentage of total memory. It seems that the semantic convention names are nicer, but I'm not sure if we want to introduce a breaking change in the host metrics receiver by changing these names. Does anyone have an opinion on this?
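For reference, the mismatch described above looks roughly like this; the snippet is an illustrative config fragment, not the receiver's actual metadata file:

```yaml
# Names the hostmetrics receiver's process scraper emits today:
#   process.memory.physical_usage
#   process.memory.virtual_usage
# Names the semantic conventions specify:
#   process.memory.usage
#   process.memory.virtual
# Proposed addition (fraction of total memory used):
#   process.memory.utilization
receivers:
  hostmetrics:
    scrapers:
      process:
```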
H
We
have
already
semantic
conventions
about
this
you're
saying
and
The.
Collector
is
non-compliant
with
those
conventions.
Yeah
I
guess
our
default.
Thinking
should
be
that,
yes,
we
need
to
make
both
metrics
compliant
unless
there
are
some
strong
arguments
in
favor
of
fixing
the
specification
itself
which
I
don't
know.
If
there
is
any
arguments
to
do
that.
A
Likely that's what's happened, yeah. We should align to semantic conventions and roll it out officially with feature gates as a breaking change. So we duplicate the old one, we put it behind a feature gate, we add the warning that we're going to switch to the new one, and so on. We've run through this process a few times before, so we need to do the same for this one.
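The migration described above, a default-off gate that flips the emitted name and warns while it is off, can be sketched in plain Go. This is an illustration of the pattern only; the gate ID, names, and types are made up and this is not the Collector's actual featuregate package:

```go
package main

import (
	"fmt"
	"log"
)

// gate models a default-off feature gate guarding a breaking rename.
type gate struct {
	id      string
	enabled bool
}

// Hypothetical gate ID; disabled by default, as A describes.
var semconvNames = &gate{id: "receiver.hostmetrics.useSemconvMemoryNames"}

// memoryUsageMetricName returns the new semantic-convention name when the
// gate is on, and the legacy name plus a deprecation warning when it is off.
func memoryUsageMetricName() string {
	if semconvNames.enabled {
		return "process.memory.usage"
	}
	log.Printf("WARN: %q will be renamed to %q; enable gate %s to opt in early",
		"process.memory.physical_usage", "process.memory.usage", semconvNames.id)
	return "process.memory.physical_usage"
}

func main() {
	fmt.Println(memoryUsageMetricName()) // legacy name: gate defaults to off
	semconvNames.enabled = true          // in the Collector this would be a command-line opt-in
	fmt.Println(memoryUsageMetricName()) // new semantic-convention name
}
```

The key property is that the default build keeps emitting the old name, so nothing breaks until users opt in, while the warning gives them time to migrate dashboards and alerts before the default flips.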
B
Is the semantic convention for this metric stable, or is it still experimental?
H
We're meeting next week for the first time, I think, so we'll try to do something about it. It's not going to stay unstable forever.
G
So we recently made a change to our contributing guides saying we won't implement pre-release spec features, partly on the back of the issues that we saw with metric direction moving from the name to an attribute, and things like that.
G
Does
that
impact
this
like
these
are
released
but
released
experimental
and
thus
can
still
change
in
the
same
way
that
any
unreleased
feature
could
do.
We
not
release
anything
that
uses
simulator
conventions
for
metrics,
or
do
we
just
put
a
big
red
warning
on
this
saying
everything
here
is
subject
to
change
at
any
time.
Yeah.
H
I
think
we
can't
stop
stop
working
on
this,
so
I,
it's
taking
so
long
that
we
have
to.
We
have
to
provide
some
value
in
The,
Collector
and
host
metrics
receiver
business.
So
we
do
our
best
with
the
current
experimental
conventions,
so
I
don't
think
we
should
not
provide
a
host
networks
at
all
just
because
they,
the
conventions,
are
experimental.
A
I believe we have a stability definition in the Collector saying that a component emitting metrics can be marked as stable only if all the semantic conventions are stable, or something like that. So until that is done, we can keep the stability level at alpha or beta, and that should give us the ability to change it in the future.
A
Probably one feature gate to switch to the new one. It's disabled by default, but we say a word in the warning: hey, we want to rename this metric, please update wherever you use that metric.