From YouTube: 2023-01-04 meeting
Description
Open cncf-opentelemetry-meeting-3@cncf.io's Personal Meeting Room
E
All right, should we get started? Let's do it. All right, Antoine, is that you who's got the first few PRs? Yeah.
B
They're really straightforward changes. I think Pablo made a bit of an umbrella issue for this, and then it got cut into all the different components. So I've got those two PRs outstanding. They're very small: they're all about changing the configuration of the SignalFx receiver and the Splunk HEC exporter to use configopaque.String. They just need a review to make sure they're good for everybody.
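For illustration, here is a minimal sketch of what this kind of configopaque change usually looks like; the struct and field names below are assumptions for the example, not copied from the PRs being discussed.

```go
// Package splunkhecexample sketches a config struct that stores a secret with
// configopaque.String so the collector redacts it when the effective
// configuration is logged or marshaled.
package splunkhecexample

import (
	"go.opentelemetry.io/collector/config/configopaque"
)

// Config holds the exporter settings (illustrative field names).
type Config struct {
	// Endpoint is the HEC endpoint to send data to.
	Endpoint string `mapstructure:"endpoint"`

	// Token is the access token; configopaque.String keeps it out of logs.
	Token configopaque.String `mapstructure:"token"`
}

// authHeader shows how the raw value is recovered when it is actually needed,
// for example to build an Authorization header.
func (c *Config) authHeader() string {
	return "Splunk " + string(c.Token)
}
```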
E
Okay, yeah, I can take a look at them. I see that you've...
B
All right, let's move on to the next one. The next one is interesting: it's an addition to the resource detection processor.
B
So on that one, the big question is: can we merge this in as it is, or do we need more help, starting with a specification review? The specification review is linked on the PR and, unfortunately, it hasn't moved at all in three weeks, and I'm not sure what I should be doing there. So I'm kind of asking what's the best approach here. You know, I've got a couple of plus-ones on those and I'm not sure how I get unstuck.
B
Yeah, who are the resource detection code owners? It could be... I think.
F
dashpole, David, yeah. Are you my code owner? I think you are.
G
Thank you. Yeah, I mostly work on the GCP side of things, but if there's something you need my help with, I can look.
B
Yeah, and I don't think jrcamp has been active recently. Is that a correct assumption? Yeah, look, if that helps, I'm happy to continue owning this, contributing, and maintaining it. So let me know. Maybe that's something you can put into the PR: add a code owners line for this particular component, where I would volunteer to maintain it going forward.
H
Okay, on the specification: I saw that there's an issue, but is there a PR yet? Because I think...
B
That's a very good point, and that's what I was going to ask you folks. You just told me that we don't need the specification to be settled before we merge this in, right? Right. I tried to create a pull request in the OpenTelemetry specification repository, and when you do, you get the pull request template prompt, and it said: if you don't have an issue and it hasn't been approved, do not open a pull request or it will be closed.
B
So yeah, it also points to a diagram showing how work should be done, and I don't think we're in compliance, because it says that any issue needs to be reviewed within three days of being opened on the specification repository. So unfortunately I'm stuck between a rock and a hard place: I can disobey the convention, because my PR is ready and I can open it anytime, right, but then I might actually get it closed because I haven't fully followed the process.
B
But then the process of reviewing the issue in a timely manner is not being followed either. So, because it's the holidays, and you know, Happy New Year everybody, I was going to let that fester a little bit longer. But if you have an idea how to get me unstuck, I'll take it. The move that I know would work is to show up at 8am on a specification call and bring this up, but I haven't had time recently.
H
Yeah, I don't know, I haven't seen that requirement followed super closely on the spec repo. Maybe I'm not super closely involved in the repo all the time, but I've seen people just open PRs and it seems to just get a review. If you've got some thumbs up, that's probably enough. I mean, there are probably others here who are...
B
All right, that's good advice, thank you. I'll try to get it done this week, and it seems that would help move things. Okay, I don't want to take the whole call; I have one more, if that's okay. So, just to bring attention here for everybody: we have a new HAProxy receiver, which was committed two weeks ago, so it was during the break and maybe you didn't see it. It's not functional at this time; it's just a shell of itself. It is not bound to any components list and it's not shipping as part of the collector, right? It's just sitting there, and this PR, 17287, adds the first iteration of actual features to it.
B
The HAProxy receiver exposes metrics that it is able to scrape from the HAProxy process by connecting to it over a socket or a TCP connection, and the data itself is exposed to us in a CSV file format. So you connect with a socket, you send a command, and you get some CSV back. The CSV is now in the PR as a test sample, and using the CSV you can infer metrics, put a bunch of things together, and send them on.
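As a rough sketch of the scrape being described, the snippet below connects to an HAProxy stats socket, issues a command, and parses the CSV reply. The socket path handling and the "show stat" command reflect how HAProxy's stats socket generally works; they are assumptions, not taken from the receiver PR.

```go
// Package haproxysample sketches scraping HAProxy statistics over its stats
// socket and parsing the returned CSV into rows of fields.
package haproxysample

import (
	"bufio"
	"encoding/csv"
	"fmt"
	"net"
	"strings"
	"time"
)

func scrapeHAProxyStats(socketPath string) ([][]string, error) {
	conn, err := net.DialTimeout("unix", socketPath, 5*time.Second)
	if err != nil {
		return nil, fmt.Errorf("connecting to haproxy socket: %w", err)
	}
	defer conn.Close()

	// Ask HAProxy for its statistics; the reply is CSV terminated by EOF.
	if _, err := fmt.Fprint(conn, "show stat\n"); err != nil {
		return nil, fmt.Errorf("sending command: %w", err)
	}

	reader := csv.NewReader(bufio.NewReader(conn))
	reader.FieldsPerRecord = -1 // rows can have varying lengths
	rows, err := reader.ReadAll()
	if err != nil {
		return nil, fmt.Errorf("parsing CSV: %w", err)
	}

	// The header row is prefixed with "# "; strip the marker so column names
	// can be matched against metric definitions.
	if len(rows) > 0 && len(rows[0]) > 0 {
		rows[0][0] = strings.TrimPrefix(rows[0][0], "# ")
	}
	return rows, nil
}
```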
B
This is the first move on that. It's just one metric, so as not to overwhelm ourselves with too many of those, because we don't quite know yet which ones should be custom and which should be default, and it allows us to start playing a bit more. It's been approved by Sean (MovieStoreGuy), and he's the other code owner of this component.
B
I think it's ready for merge. I don't know, tell me.
E
I can add it to the list of things to review after this. I think you're right: if it's already approved, I'll just do a quick look over, but it should be good to go.
E
Thank you. jmacd, you're next.
C
Hi, all right. This is a little bit of an advertisement for the project I'm working on. I don't have anything to present, so I just dumped a bunch of links in the document here.
C
I've been in this group once or twice to talk about this project before. This is the Apache Arrow ecosystem for columnar data transport and in-memory processing, being used for OpenTelemetry transport. There's an OTEP that we started over a year ago, and there's a collaborator of mine at F5 working on the reference implementation of an adapter between OpenTelemetry and Arrow. I've put links to the fork of the collector that we're using for this development as a sort of prototype, and I put in, I guess, the "why".
C
Aside from the growing significance of Apache Arrow on its own, there's just a huge compression benefit available to users who go to the trouble of configuring this. As a vendor, we envision this being used as a bridge between our customer and our data center, where we process the customer's data. That would essentially be the OTLP Arrow bridge: you're going to compress using Arrow on the edge of your network, it's going to be decompressed on the edge of our network, and we expect to see savings.
C
Substantial savings above zstd compression alone. So we have a reference implementation, and the forked collector repo that I linked to here has components that are drop-in replacements for the OTLP exporter and receiver. The reason we did it this way is that we want to make sure it's a seamless, graceful upgrade and downgrade.
C
So at this point I have been working on some validation, and I wanted to see if I could help with the testbed project in this group. I've basically put together a draft, and a confirmation of that draft, for the testbed to see how you all feel about it. I filed an issue, and the first PR that I'm linking to here in the collector repo is called "support monitoring bytes read and written, compressed and uncompressed, by a pipeline". So there's a few ways we could do this.
C
My proposal is to add a small API to that package that would let you optionally record how many bytes you're reading and writing from the wire, as well as the uncompressed form of that data, which is usually measured from the decompressed protocol buffer.
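A minimal sketch of the kind of opt-in API being described is shown below; the package, type, function, and metric names are illustrative placeholders rather than the actual PR's API, and the OpenTelemetry Go metric API is used just to keep the example concrete.

```go
// Package netstatsexample sketches an opt-in API for recording compressed
// (wire) and uncompressed byte counts under standardized metric names, so a
// tool like the testbed can scrape them without component-specific wiring.
package netstatsexample

import (
	"context"

	"go.opentelemetry.io/otel/metric"
)

// SizesStruct carries one observation of wire traffic for a request.
type SizesStruct struct {
	WireBytes   int64 // bytes actually sent on the wire (compressed)
	LengthBytes int64 // size of the decoded payload (uncompressed protobuf)
}

// NetworkReporter holds the counters for one exporter instance.
type NetworkReporter struct {
	sentWire   metric.Int64Counter
	sentLength metric.Int64Counter
}

// NewExporterNetworkReporter creates counters with placeholder names that a
// testbed could standardize on and recognize.
func NewExporterNetworkReporter(meter metric.Meter) (*NetworkReporter, error) {
	sentWire, err := meter.Int64Counter("exporter_sent_wire_bytes")
	if err != nil {
		return nil, err
	}
	sentLength, err := meter.Int64Counter("exporter_sent_bytes")
	if err != nil {
		return nil, err
	}
	return &NetworkReporter{sentWire: sentWire, sentLength: sentLength}, nil
}

// CountSend records one outgoing request; recording is optional, so a nil
// reporter is a no-op.
func (r *NetworkReporter) CountSend(ctx context.Context, s SizesStruct) {
	if r == nil {
		return
	}
	r.sentWire.Add(ctx, s.WireBytes)
	r.sentLength.Add(ctx, s.LengthBytes)
}
```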
C
This is directly connected with the gRPC stats handler interface inside the OTLP exporter, as I've done it. So this is an option for all exporters and all receivers, but I've done it just for the OTLP exporter and receiver in this draft. Now, as I said, there's more than one way to do this: I could just keep these metrics for myself, I could put them in my packages.
C
So the proposal is to create an optional package in obsreport for supporting this stuff, using standard metric names that would be part of the sort of conventions that are already there, and then have the testbed recognize those. So when you use one of these optional, opt-in metrics APIs for your component and you use the testbed, the testbed will just see them, because it's able to scrape the target, which I've added. So the second PR here is a proof of concept.
C
More or less it's complete, but I haven't written tests for any of this. I want essentially some agreement before I go and test all this stuff. It adds some columns to the testbed output that show you how many megabytes per second of input, how many megabytes per second of output, as well as the uncompressed form of both of those numbers, and that's it. I think I would like your attention on those in particular. If you gave me the go-ahead on the obsreport stuff, I would get that tested.
C
The thing we want to do once we get an alpha release ready, which is happening this month, is get some tools that you can use, if you're an eager user, to adopt this and help us measure and perform refinements on the compression of this data. So if you're a user who wanted to record some data, anonymize it, obfuscate it, and send it to us, that would be awesome, and we're going to provide some tools for that.
C
So I linked to an open issue. I'd like to improve the file logging exporter and receiver, if there is such a thing; I think there is such a thing. I'm going to be working on that myself this month as well. So that was it; I wanted some attention on that. And lastly, just as enticement, I think we're going to see this year that Apache Arrow is going somewhere: there's an amazing project available, it's written in Rust, so it's not quite as accessible as we'd like, and it's called DataFusion.
C
A year ago there was a demo done of DataFusion on OTLP data. That was where we got started with this, and I'm predicting that it's going to continue to be an appealing destination for us. So if you hear DataFusion, you're going to think Apache Arrow and OTLP Arrow coming together. We can get OpenTelemetry and Arrow together, and DataFusion will be one of the reasons we want to do that. That's all I have for you. Thank you all.
B
So, Josh, we are working on this, and I know Pablo on my team is working on a file exporter. I think there was one for a while, I'm not sure what happened to it, but there's a new version coming up or something like that, so I'd be interested to find out more about what that looks like. I think it's just in the first stages. So if you want to help him, I think he'll take the help.
C
Sounds good. I'll look for links and maybe talk later about exactly which one you're referring to, but that does raise what to me is maybe one of the biggest open questions about the work we've done, and I know, Antoine, that you were involved in some work on the parquet question. The parquet question, with its different pronunciations around the world, is essentially saying: okay, Apache Arrow is an in-memory representation, it's a wire representation.
C
I believe this is one of the open questions. If we can't explain to you why we don't have a parquet exporter, something's wrong, and we're working on the explanation. It's partly that it's not a deliverable that's on the priority list, although I think it's extremely appealing to OpenTelemetry as an open-source community to have it. It's not mine as the vendor; there's no vendor need for that right now, so it's not on our priority list.
C
There is a good answer to this question, and I think if someone comes to that OTEP and reviews it from scratch, the question might be: well, why is it that, you know, with some of those earlier efforts to get a parquet exporter directly from the protocol definition, we haven't chosen that option? And the answer is that, to get the compression benefit that we're after, we're essentially generating a custom schema that's exactly the data you're sending. So every resource attribute has a column, every span attribute has a column.
C
Every metric attribute has a column, and there's no generic column that says key-value, because that defeats the schema and defeats the compression benefit. So in a particular stream of OTLP you have many schemas, lots of schemas, and this can all get dumped into a parquet file with the appropriate code support; it's just a matter of code. But you're going to get a parquet file with, you know, 10,000 schemas in it.
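To make the "one column per attribute, no generic key-value column" idea concrete, here is a small illustrative sketch; it is not the OTLP Arrow adapter's actual code, the column names and types are assumptions, and real data would drive the typing of each attribute column.

```go
// Package arrowschemasample builds an Arrow schema where every observed
// resource and span attribute becomes its own strongly typed column.
package arrowschemasample

import (
	"github.com/apache/arrow/go/v12/arrow"
)

// spanSchemaFor builds a schema for a batch of spans whose attribute keys are
// known up front; string columns are assumed here for simplicity.
func spanSchemaFor(resourceAttrKeys, spanAttrKeys []string) *arrow.Schema {
	fields := []arrow.Field{
		{Name: "trace_id", Type: arrow.BinaryTypes.Binary},
		{Name: "span_id", Type: arrow.BinaryTypes.Binary},
		{Name: "name", Type: arrow.BinaryTypes.String},
		{Name: "start_time_unix_nano", Type: arrow.FixedWidthTypes.Timestamp_ns},
		{Name: "end_time_unix_nano", Type: arrow.FixedWidthTypes.Timestamp_ns},
	}
	// One column per resource attribute, e.g. "resource.service.name".
	for _, k := range resourceAttrKeys {
		fields = append(fields, arrow.Field{Name: "resource." + k, Type: arrow.BinaryTypes.String, Nullable: true})
	}
	// One column per span attribute, e.g. "attr.http.method".
	for _, k := range spanAttrKeys {
		fields = append(fields, arrow.Field{Name: "attr." + k, Type: arrow.BinaryTypes.String, Nullable: true})
	}
	// No generic map<string, string> column: that would collapse everything
	// back into key/value pairs and give up the columnar compression win.
	return arrow.NewSchema(fields, nil)
}
```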
C
If you're not careful, I mean, you're going to get a parquet file with 10,000 schemas in it, and we need tooling to deal with that. This is where we enter the DataFusion question again; I think we're going to see this develop. But for now, what we've built is essentially a translation to and from Arrow that is optimized for compression, optimized for exactly the schema that you gave us, so that we can just send the data as efficiently as possible in a columnar format.
B
Yeah, the parquet effort was a huge amount of things coming together. First, I had to find a parquet generator that would work with our proto files; I had to modify it to make it happen. Eventually I managed to get something that was somewhat approachable and, like you said, it would negate most of the actual benefits of using parquet in the first place, because it was using key-value maps all over the place.
B
But you know, it was like the Wright brothers, right: the plane leaving the floor a little bit for twenty yards. And then the actual review took place, and the outcome from that, I think Tigran kind of put the last nail in the coffin, saying: I'm sorry, I don't understand this, I have no time for this. If someone else has time and can actually bring value to this and help shepherd it, fine; I need a parquet expert.
B
I can't do this myself. And I completely understood; it was very good of him to have this public confession of powerlessness, like: that's it, I'm done, I can't understand this, this is too much. And it kind of stayed open. There were actually community folks coming and saying, hey, I want to help, and it didn't come about, and then there's another OTEP that has been brewing around this anyway. So that's okay, we're going to stop.
B
We brought it as far as we could. There's certainly interest inside Splunk to do a good job with parquet in S3 and that type of storage, but it's very nascent. So yeah, we still have a parquet exporter in the OpenTelemetry collector, because back then we were much more lenient about having stuff that's, you know, not really making sense yet, but you can have it. So it's in development; it's not doing anything.
B
We could talk about removing it. We could talk about doing what you said: change it to take a custom parquet schema and have some sort of full-fledged translation from OTLP to parquet for each of the sources you care about. There might be an outcome there that we can talk about if you want, and that's what I have here.
C
Yeah, thank you, I agree. Thank you for the summary. For those of you not following, there's been a long history of PRs that Antoine opened in the protocol repository, in addition to the collector-related parts. I think I myself was in the same boat as Tigran back then when I was reading: I was really eager, as a user, to be able to write my data to a file and then process it with some off-the-shelf tools.
C
So you don't realize what you're asking for. Now that I understand a little bit more of what's happening, I still think we can get a parquet file, but my question is actually how to make that usable, say, in DataFusion or in one of the Apache Flight SQL databases, for example.
E
All right, I think it's been... oh, there you go, he's dropped off completely. All right, maybe we'll bring up the next topic while we wait for him to come back. Raphael, I think you're the last topic here.
D
Hey, hello everyone. So, as you may know, we have an HTTP confmap provider that was recently added to the collector. It allows you to fetch configurations from HTTP servers, and there has been a desire to have the equivalent for HTTPS.
D
But the thing is that with HTTPS, sometimes, if you don't use the default system CA certificates, you are required to pass some parameters to the client making the connection to the HTTPS server that has the configuration. So I created this PR, which adds support for an HTTPS config provider, and initially I was passing parameters to it using environment variables.
D
But after I created the PR, I realized that that's not scalable, because you can fetch configuration from multiple servers, and with environment variables you cannot, let's say, configure each server individually with its own configuration. Then I got this suggestion to use URL fragments to pass parameters per server, that is, to each client that will connect to each of the different servers. So I would just like some feedback on this approach before I move on with the implementation.
D
I think that's a good approach. The main issue is that we cannot pass command-line parameters to the confmap providers: the resolver, the component that communicates with the confmap provider, doesn't receive any command-line parameters. So if I were to add command-line parameters, I would have to change the whole structure of the collector, right? So this suggestion to use URL fragments is an alternative to that.
F
My take on this is that the provider is still useful without any of these environment variables, without adding the ability to configure it. So maybe we should do a first PR that is not configurable and try to solve this configuration problem generally for the collector in a separate issue, because I think it's probably something that we'll have to deal with on other providers, and we want to be consistent about it.
E
If you wanted to use this without the ability to configure it, you could still build the collector and then replace your default CA inside the container that you deploy the collector with, and that would give you an option without having to add any environment variable parsing or whatever inside the collector itself, right?
D
Yeah, but let's say we're adding more and more parameters; we could even add mutual TLS authentication to this if we wanted. In that case, each server you're connecting to would have a different configuration. We can have insecure versus not, and name validation versus no name validation, for each connection as well. So...
D
Yeah, I agree that if you only have a single server that you're connecting to, that's a good approach: let's say, create a container with the certificates that you want to use there. But if you are connecting to multiple servers, which is possible, then that wouldn't work. But the general case would.
I
Yeah, so you have the scheme for the provider, and then what follows that is the URI info. If you had some kind of delimiter to separate the URL from the provider-level config, like the cert paths and things like that, you'd be able to take it out. I can maybe provide a note in the PR itself. So...
E
Yeah, the suggestion is to use URI fragments and pass in the parameters appended to the URI that gets passed in.
D
Yeah, and the beauty of URL fragments is that they are not supposed to be passed to the server, so we are kind of compliant with the RFC that defines URIs. That's why I like the suggestion. So I just wanted to get more feedback before moving on with the implementation. It seems that this was done before, so in what context did you use a similar approach?
I
Not anything that generalized, but I have, like, overloaded the URI based on that scheme. I've not done anything that's very standardized, though. Okay.
E
Yeah, I guess a confusing part of this is then any other confmap provider: it kind of implies that the other ones will have a similar mechanism for configuring additional options, and I don't know if that's something we want to officially support, at least not yet.
I
Yeah, in the Splunk distribution, where I think some of the provider logic originated, we had this config source mapping in the actual config that would allow configuring your provider instances, and then there are the directives that map to them by their component name for the configured instances. But yeah, obviously that would be a new element in the config.
E
So could we move forward with supporting HTTPS without any additional parameters to begin with, and then create a separate issue to discuss how we want to support additional configuration parameters and have the community decide there? I feel like that would be a good way to move forward.
D
Yeah, we'd just use the system certificates for now, and then, okay. So, last question then on this topic. If I were to implement this, it's a one-liner; the difference between HTTP and HTTPS is a one-liner. Does it make sense to create a new component? Because there was a comment: initially, in my PR, I tried to reuse it and pass that as configuration to the initializer of the component, but it seems like that breaks the collector.
C
Oh, let's see if my audio works now. Yeah, I'm having a power, like a wind, problem here, so my network went out, but...
C
I was just trying to add there at the end that the parquet question is, like, the big open one for me. I want a file as well, at the end of the day, for some of my applications, and I think the OTEP is where we discussed that. I've given this feedback already to my collaborator at F5, and we're starting to research the answer to this question in part.
C
What we find is that the Go Arrow library is not quite there, and it's not fully documented; we're still pushing the boundaries of the Go Arrow support, and we'll get there. I think part of it is just needing to understand how to get the parquet interfaces to work together with the Arrow interfaces, so we'll work on it. Anyway, the discussion may happen in the OTEP, and that's going to be brought to the specification SIG for an OTEP review coming soon. Thank you. That's all I had.
E
I just had a question for you. On the original issue you'd opened, Bogdan had asked whether the monitoring should be done via otel-go rather than doing it within obsreport. I thought that you responded to him, but did you ever get back from him on your answer, or...?
C
No. For the audience, this is a question about whether we should be using an otel-go instrumentation library to do the work, whereas I had just written an obsreport library to do the work. I do feel like there's an interesting use case to have that instrumentation library; otel-go should be providing that instrumentation library, and when metrics is stable, I figure it will, and I imagine there will be configuration options: which attributes do you want?
C
What metric names do you want to use? And there are still going to be questions about how you set this up. If we want to solve this question now, for the testbed, I think we should just standardize the metrics that we use, and it's not a lot of code to generate the instrumentation. So my point is that it's hard to name this stuff, and if we want to solve this, we shouldn't block on that otel-go instrumentation.
E
Not urgent, but I guess even if we were to instrument using the otel-go instrumentation, it wouldn't necessarily allow us to see the data that's passed within the pipeline, so between components. If you wanted to, like, monitor that each component is receiving or sending the same amount of bytes, or, you know, maybe you have some component that you care about that's doing some compression, like a processor or something like that, then...
C
So the PR that I wrote only covers exporters and receivers; it doesn't monitor the pipeline itself, essentially, and that could be done. I know there's a method on the underlying protocol buffer object, like a byte-size method, which usually is cached and sometimes is expensive, and there's the question about gogo versus the Google protobuf, I imagine. I didn't go there because I don't need that, but it could be done. One of the answers for the question about the otel-go instrumentation might be that we expect that support to be available for gRPC, because gRPC provides that handler, but for any non-gRPC protocol you have to do it yourself. So otel-go can do that for gRPC connections, but if you're using HTTP, or, you know, I was looking at doing this for MQTT and other protocols that don't have an existing stats handler.
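For reference, here is a hedged sketch of the gRPC stats handler hook mentioned above, counting wire and uncompressed payload bytes; the counters are plain atomics rather than real metrics, and the type and field choices are illustrative.

```go
// Package grpcsizesample accumulates payload sizes for gRPC-based
// exporters/receivers via the google.golang.org/grpc/stats handler hook.
package grpcsizesample

import (
	"context"
	"sync/atomic"

	"google.golang.org/grpc/stats"
)

// sizeHandler implements stats.Handler and accumulates payload sizes.
type sizeHandler struct {
	wireBytes   atomic.Int64 // bytes as seen on the wire (after compression)
	lengthBytes atomic.Int64 // uncompressed payload bytes
}

var _ stats.Handler = (*sizeHandler)(nil)

func (h *sizeHandler) TagRPC(ctx context.Context, _ *stats.RPCTagInfo) context.Context {
	return ctx
}

func (h *sizeHandler) HandleRPC(_ context.Context, s stats.RPCStats) {
	// OutPayload/InPayload expose the wire size and the decoded size of each
	// message, so both directions can be counted here.
	switch p := s.(type) {
	case *stats.OutPayload:
		h.wireBytes.Add(int64(p.WireLength))
		h.lengthBytes.Add(int64(p.Length))
	case *stats.InPayload:
		h.wireBytes.Add(int64(p.WireLength))
		h.lengthBytes.Add(int64(p.Length))
	}
}

func (h *sizeHandler) TagConn(ctx context.Context, _ *stats.ConnTagInfo) context.Context {
	return ctx
}

func (h *sizeHandler) HandleConn(_ context.Context, _ stats.ConnStats) {}
```

In use, a handler like this would be registered on the client with grpc.WithStatsHandler or on the server with the grpc.StatsHandler server option; non-gRPC protocols, as noted above, would need their own counting at the transport layer.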