From YouTube: 2022-07-28 meeting
Instrumentation: Messaging
B
Yeah, I know all about it, and the awkward part is also coming back and having so much to do in other parts of the workspace. You still don't really do anything; you're just doing something else. Yeah, yeah. Where was the vacation at?
B
Yeah, was that a real vacation, or is the weather heavy, I believe? Sub-Saharan Africa, I'm guessing.
A
Well, no, this was a while ago, like two weeks ago now. It was okay. Actually, it was quite nice while we were there, and that one was really nice, yeah, because it was like 65 in Scotland.
A
I used to be; now I'm out of Cambridge, Massachusetts.
B
Let's see, what time is it? Oh okay, just 10 o'clock. I know Aaron's not gonna be able to make it, but I imagine Anthony should be here, so we'll wait for him, and then Josh had some stuff. The agenda looks like so.
B
Okay, yeah, I imagine Anthony will show up in a little bit, but we could probably just jump in here. If you haven't already, welcome, and please add yourself to Teddy's list, and if you have anything you want to talk about, please add it to the agenda, and we can get started here. So the first thing on the list is our check-in with the metrics SDK alpha release progress. Oh, it's 37, so making progress.
B
I feel like every week I'm surprised by that number; we're doing pretty good. I'm getting questions from PMs now about what our timeline is, and as anyone who's ever given a number to a PM knows, it's mostly made up, but I've been telling him I'm thinking on the order of a month and a half, or maybe two months, before I imagine this project being complete.
B
That being said, like I said, it's kind of made up, but I think, based on our progress, we're doing pretty good. Let's see, looking at the project board might be a better view. We've actually got some good movement in the done column. We're still working on the pipeline structure for creating aggregations and views and reader interchanges, and then from there we still have to stub out or implement the instruments themselves, which I don't think should be too hard.
B
Once we have this pipeline registration mechanism, it's essentially just matching up the specific instrument types on the other end and hooking up the aggregators. We have the last-value aggregator that merged this week, and the sum aggregator's PR is ready to review. Sorry about the flopping back and forth on that in the draft, but I think we've got it nailed down as at least the duties we want to include in the alpha release, and so it should be ready to review.
B
Aaron also has this filter aggregator, which is going to be implementing any sort of views aggregations on the attributes. There's an open issue, for the beta at least, to investigate whether this is the right spot to be doing that, but I think this is, you know, a valid approach. I didn't say that in the PR, I said it to him on Slack, but I think this is a good approach. There's some...
B
If we look at the backlog here, there's definitely a good amount of stuff still to do. This is probably the most nebulous one, but I was planning on probably picking up the delta histogram next myself. Package documentation should be pretty easy. The exporter code, again, would probably be the next thing I picked up, if it wasn't already picked up, after the other aggregators.
B
Once we get the pipeline structure all implemented and the instruments added, adding back this example code, or the bridge code, or the Prometheus example code, should be opened up, and then we really are just updating changelogs and merging issues at this point. So I think that 77 is actually pretty accurate, and timeline-wise I think a month and a half to two months is a reasonable guess, but moving at the speed of business, no one ever knows. Okay.
B
Cool then, moving on. Josh, you had the next item, with two donation proposals for auto-instrumentation.
C
Yeah, hi. I don't know if this has been discussed at all, ever, in this meeting.
C
The reason I'm here talking about it today is that, as a technical committee member, we have certain responsibilities: we've advertised to the public and the community that we will respond to certain proposals within a certain period of time, and we've actually fallen behind on both of these proposals. But it's also the speed of business happening. So I have actually reviewed them both, and this is not me bringing a technical sort of position to this group; it's more of a question about organization and community interest.
C
I think, and actually I plan to take something here and write up maybe two or three pages and hopefully get some position, or publish something, from the Go team on this topic, sort of what they think, because this is really touchy. You know, there are two proposals here; they both build on eBPF.
C
They both take radically different approaches. I think some of them are more or less compatible with the Go philosophy. I don't think I would use either of them in production myself, but I certainly would use them in debugging, and maybe I could be convinced otherwise. I don't think that either of these proposals needs to be considered at the OpenTelemetry level, because it's specific to one group.
C
However, both of these groups are kind of looking for an endorsement, an official kind of stamp of approval, and it's not really mine to give; it's kind of this community's to give. I think the best outcome here would be that we have some kind of combination, or group collaboration, on any approach to auto-instrumentation in Go based on eBPF.
C
I definitely would think it goes in a different repository, but I don't know that either of these is actually appropriate without asking, you know, all of us what we think here. So I'm soliciting feedback, informal or otherwise, either on these issues or on the kind of meta-question of what we want.
C
We are looking at a profiling project in OTel. Clearly, profiling is connected with tracing somehow, and the thing that connects them often is having implicit knowledge about where you are during all the gaps in your trace. eBPF does help with that, and you can fill in gaps in your trace using assistance from that sort of tool. So it's worth exploring, I think, and I think there are sort of two levels of feature, at least, being offered here.
C
One is basic support for latency monitoring of individual requests, and there's a question of whether you're willing to, and how you will, build up the sort of state needed to do that. You know, it's a pretty expensive calculation to pair up all the incoming and outgoing request IDs, I guess you'd say, to build up spans from those events inside an instrumentation probe written in eBPF. But it's not impossible, and people do that sort of thing. So anyway, yeah, please, please speak, somebody.
B
Thanks for bringing these up. We have talked a little bit about them, and we'll definitely talk about them. Both of the contributors here actually came to the SIG meeting a few months ago and presented; it was not awkward, just coincidental, that both approaches were presented on the same day. But I do want to also make sure that you understand that I don't think this one, unless something's drastically changed, is based on eBPF. This is a compiler plugin.
B
I think it actually modifies the source code before compilation time, which is a very different approach.
C
Yes, sorry, you're correct. Thank you.
B
Yeah, I think, as we kind of saw, and kind of like what you're saying, they're two very different approaches. One will essentially change your code. The other one doesn't change your code, but it changes the operation of the binary itself.
B
I have been following this one a little bit closer than the other one, and in this process I know that the POC tracks goroutines to essentially track process input and output. This was something that was originally asked when they first came to present here: how do you actually map across, you know, multithreaded or concurrent programming?
B
If
you
have
multiple
requests,
you
know
active
at
the
same
time,
there's
no
guarantee
by
the
go
compiler
that
like
or
I'm
sorry
to
go,
runtime
that
the
go
team
that
actually
started
a
program
or
a
process
is
going
to
end
it,
and
it
was
something
to
do.
They
wanted
to
build
a
map
structure
here
to
actually
support
that.
I
think
the
initial
poc
is
just
supporting
a
single
threaded
processes,
but
it
was
changed
also
to
say,
like
that's,
probably
not
a
really
good
idea.
B
In
fact,
steven
pointed
out,
it
was
actually
discouraged
in
some
of
the
go
notes
and
so
they'd
switch
to
modifying
a
context
so
functions
that
accept
the
context
will
actually
have
a
modified
context.
I
I
was
a
little
bit
confused
on
this
one,
because
I
I
thought
that
evpf
has
a
read-only
interface,
but
I
asked
that
question
and
then
I
was
told
to
go.
B
Look
at
the
context
propagation,
because
there's
an
example
there-
and
I
haven't
had
the
time
to
go-
do
that
yet
so
in
theory,
I
think
we
can.
We
can
do
this
based
on
the
assertion
that
this
is
kind
of
conveying
so
that'd
be
really
cool
but
kind
of
back
to
your
like
the
the
technical
feasibility.
I
think
both
of
these
are
technically
feasible.
B
I do need to kind of take a look at this, but this, I think, is the key thing: the maintainer track. Because I don't think that this SIG specifically could take on the full responsibility of this. I think that, with the developer capacity we have, it's at times hard to get things done in the SIG itself, let alone trying to add more capacity for all the instrumentation library support and the SDK and APIs.
B
That being said, I do think there are people in this SIG that could overlap, and I would probably recommend that we create a distinct auto-instrumentation-for-Go group, and then, you know, maybe some of those people can be a part of those groups, similar to how Java does it, where there's an auto-instrumentation SIG and there's a Java SIG itself, and there's overlap. But I definitely would like that to happen; I don't know if it has to happen off the bat.
B
You know, if I'm talking to the person with the technical committee hat on right now: maybe it's something that this SIG just picks up in a new repository at first, and then we split it off with a new group to actually do the maintenance and the approvals of that project. But I do think that, just based on the current developer capacity we have, otherwise that's not gonna be feasible.
C
That sounds good, thank you. I think that's about what I thought you'd say, and I'm glad to have heard it from you. So it sounds like we might be willing to create a repository with the current ownership equal to this group, you know, approvers and maintainers, with an intent either to shut it down eventually or hand it over to a sort of adjacent group with interest in auto-instrumentation. I'll try and phrase that to them, and thank you very much; I'll do what I can to convey that, I guess, with my hat on.
B
Yeah, exactly, because I definitely know that some people are wary of their source code being modified without them reviewing it, though I think you can still review it, because it would create a diff. And then some people are wary of a binary's runtime being messed with. So they would have a choice, I guess, is the answer there.
C
The one that has a JSON file of symbol offsets freaks me out a little bit. It kind of feels like DLLs again and such, but I have myself used tools, at least for debugging, to scan the source code and print out a manifest of, say, every log string with its syntax and its format, you know, so that I can reverse-engineer those strings when they come through the logging pipeline.
C
Sorry. The next agenda item was me wanting to ask a question quickly of this group. So I put up a PR, number 3022, and it's not a priority. It uses the same code that I have been running in production at Lightstep to test out the exponential histogram, and I've been validating this code for a couple weeks now. It's basically copied from our launcher's optional subdirectory, which has, you know, a framework for using this code right now.
C
What I did was take out all of the, I guess I would say, implementation specifics that are about how to bind the code to a specific SDK. So my Lightstep SDK that I've talked about in the past year has its own discipline about locking and copying, you know.
C
The aggregators are different from the one that you guys are working on, but I thought that I could factor out a portion of this code that is essentially a pure data structure. So there's no locking here; it's just a histogram, basically, and it builds on the mapping functions that we merged a few months ago, in a path that's already there in the Go repository. So currently we have sdk/metric/aggregator/exponential/mapping and a few subdirectories under that merged, and I have a draft change about that which is related to spec.
C
But
the
idea
in
this
pr
here
is
to
put
the
sibling
next
to
that
mapping
directory,
which
has
the
structure
that
anyone
could
use,
including
the
collector
or
like
I've,
done
here,
an
alternate
sk.
C
So that's an option for us. It would require either having support for 1.18 generics in the main branch, or copying the mapping functions out of the main branch into the new SDK branch, one or the other. Anyway, I wanted to share that and ask for opinions, since once this lands somewhere, I kind of want it to be in this repository.
B
Yeah, so I've got mixed feelings, Josh. I really like the idea of having an exponential histogram. I also like having a centralized location that this will all live in; I think it's really important. And honestly, I think that exponential histograms, and the way that OpenTelemetry is trying to support them, are going to allow the project to stand out in its ability to handle, you know, this exposition format.
B
I worry, though, because it's making a change to the metrics SDK on the main branch that's outside of the new SDK development, and it seems like a conflicting interest there. It's also, you know, 1800 lines of code. It's tough to ask, I think, the approvers on here to review that and also review the new SDK, given they'd need the context of both of those.
B
That being said, that's just feedback. If approvers on this call are more than happy to say they would like to approve this, then I will definitely merge it if it gets the appropriate reviews, but I definitely don't think I have the time to do a review myself specifically.
C
That's totally fair, thank you. I will state, just for the record, that for that sort of ethical reason I didn't want to block this group's resources with my side project, but I did want to advance the OTel spec on the exponential histogram. So I did get Lightstep engineers to review that code, and they've done a pretty honest job of it, I'm saying, in my opinion. That includes at least one figure that this group knows a little bit: Gustavo, who is sort of emeritus here. He did some review of that code, for what it's worth.
B
Yeah, and I think that that's... yeah, like you said, having a working example being run at Lightstep is a great proof of concept. So I definitely think this is the thing: I really want this code included. I just would like to look at it after I get the alpha SDK merged, but I'm also...
C
If you ask, Tyler, I would retarget my PR to the new SDK branch. That's actually easy, because you've already got 1.18 support. I know it won't make it available sort of widely, because it'll be on a branch at that point, but that's okay with me too. I think it might block the collector work I want to do, but that's okay too; I mean, at least I could draft that PR.
C
So I don't really have one. The reason I put this up when I did was that the moment I published the Lightstep alternate SDK for metrics, just to advance my own kind of corporate needs, it became suspicious and doubtful. So I wanted to offer this backport immediately. I have no timeline pressure, because for the most part I needed this for, like, two customers, including our internal use, and, you know, they can take it from where I've published it now, and we're happy.
C
So if we merged it to the main branch, it would require lifting whatever 1.18 logic was needed for the build pipeline, and I have no idea what that would look like. But I did, at one point a few months back, prototype what it looks like to do a complete backport of the new data structure to the old SDK, and I think nobody has a real desire to touch that code, and I'm included in that group.
C
But I did work through it, and the problem is not that you can't get it to work. The exponential histogram is just another one of the sort of aggregators you can use for a histogram. The problem is that you have to know how to set up the SDK, and that old SDK is impossible to set up. And so, even if I gave you the thing, it would be so hard for people to use that I don't think it would win anybody over. So I sort of halted at that point.
C
But if we had the 1.18 support in the main branch, then I could send you a PR that would just be this, the same PR I've got today, ready. That would be just the data structure, and that won't help the old SDK. I mean, it could; it's just that you have to do more than just have a data structure.
C
You need to have a new aggregator type, a new constructor for the aggregator type, and all the switch statements that handle that kind of stuff have to be updated. And then you need to go touch the OTLP exporter at the very least; you might have to go touch the built-in exporters like Prometheus and standard error.
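The plumbing being described, a new aggregator kind threaded through a constructor and the dispatch switches, looks roughly like this. All names here are illustrative stand-ins, not the old SDK's real types:

```go
package main

import "fmt"

// AggregatorKind enumerates aggregation strategies. Adding the exponential
// histogram means extending this enum, the constructor below, and every
// switch statement that dispatches on the kind.
type AggregatorKind int

const (
	SumKind AggregatorKind = iota
	ExponentialHistogramKind // the new kind being added
)

// Aggregator is the common interface the SDK plumbing works against.
type Aggregator interface{ Update(value float64) }

type sum struct{ total float64 }

func (s *sum) Update(v float64) { s.total += v }

type expoHistogram struct{ count uint64 }

func (h *expoHistogram) Update(v float64) { h.count++ }

// NewAggregator is one of the switch statements that has to learn the new kind.
func NewAggregator(kind AggregatorKind) (Aggregator, error) {
	switch kind {
	case SumKind:
		return &sum{}, nil
	case ExponentialHistogramKind:
		return &expoHistogram{}, nil
	default:
		return nil, fmt.Errorf("unsupported aggregator kind %d", kind)
	}
}

func main() {
	agg, err := NewAggregator(ExponentialHistogramKind)
	if err != nil {
		panic(err)
	}
	agg.Update(1.5)
	fmt.Println("exponential histogram aggregator wired in")
}
```

The exporters then need their own switch arms for the new kind, which is why the change fans out well beyond the data structure itself.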
C
If
that's
the
case,
you
know
like
you're,
going
to
have
a
big
impact
for
a
lot
of
code
that
we
intend
to
throw
away,
that's
kind
of
why
I
don't
want
to
do
it,
but
there
are
definitely
people
in
the
community
looking
for
a
statsd
server
support,
and
this
is
kind
of
like
what
they've
been
waiting
for
and
I
can.
I
can
probably
send
a
pr
to
the
collector
using
the
light
step
branch,
but
I
just
you
know
I
want
to
make
sure
it
doesn't
look
like
we're
sidestepping
this
group.
B
Yeah, I mean, I think you could ask the collector folks; I think they're their own independent group.
B
Okay, all right. Looks like I did jump over one item that was mine, so we'll come back to the agenda, but Jamie, you're up next with the OTel launcher design doc. Let me start sharing a screen again.
D
Yeah, so I wanted to start by saying thank you for some of the feedback that we've got on this. We've been working through it a little bit, and I have some notes that I wanted to add to the PR as well, but I just kind of got some final notes right before this, and I didn't want to throw them in here and make everyone read them as I wrote them.
D
One thing I wanted to mention in general is kind of reiterating the idea that we do not want to recreate the Go SDK in any way, right? We don't want to have these two competing projects in any way, shape, or form. The goal is to have it simplify the initialization of your Go instrumentation, and right now the main idea of it so far is that it handles our OTLP, or our OTel, environment variables.
D
So the general note is that, yes, right now some of them are just specifically implemented, but the goal would be that any OTLP environment variable would be supported. So we'd want to, you know, probably use, I think it's like a resource detector or something like that, to be able to pull in OTEL_EXPORTER_OTLP_ENDPOINT, the traces endpoint, things like that. We'd want to have all of those.
D
One of the specific questions I wanted to ask about was related to exporters, and I think, Tyler, you may have mentioned it; I know Anthony did. I see, unfortunately, he's not here today, but my thought on it was, so there was a question of, hey, this is specific to OTLP exporters.
D
What about vendors who do not today support OTLP? So I think my understanding right now is, at least, that it's potentially a two-pronged approach.
D
The initial intention is specifically for OTLP, so the thought would be that we would, at the very least, start with OTLP gRPC and OTLP HTTP, and from there it may be that we extend on it, expand on it, with future issues or future additions with other exporters. Or, alternatively, someone might create a package, similar to how we noted that we would have a Honeycomb-specific package, that would go with that, or, alternatively, use a collector for doing that.
B
I think I'd probably have to see it in the PR, or just get a little bit more of an explanation, and think through it. My initial thought is that I like having OTLP be a default, but I do think that the extensibility can be critical for the point that Anthony mentioned. So I kind of have to see that proposal. Yeah, I feel like I'd be speaking out of ignorance, I think, at this point, without thinking about it for a little bit.
D
Okay, sure. And along those lines, and I'm realizing now again, as I'm talking through it a little bit, that it's probably easier to discuss once you see the notes written down, I wanted to link to a couple of things about how these would work with, like, the Honeycomb-specific vendor package and the Lightstep-specific vendor package and things like that.
D
But
there
was
the
question
generally
there's
a
couple
of
questions
about
separating
either
separating
out
the
launcher
from
distribution
options
and
to
like
having
having
some.
D
Basically
if
we
wanted
to
do
http
and
grpc
as
options
instead
of
having
them
as
part
of
the
config.
It's
not
there
at
all
and
a
user
still
has
to
import
it
separately.
D
Yeah, I think we would have it import both by default; we would have HTTP and gRPC. And I think the question was about not having them at all and having the end user pull them in on their own, separately from the launcher, which adds slightly more code that the end user has to write. I think we were sort of not loving that idea, because again we're trying to minimize that code. And the general thought that we have, and again I'll note this down in more detail, but to see if there's initial thought here: we want it to be as easy as possible.
D
So that would still be a primary concern: these are a lot of defaults and things that you can use when you're getting started. And if you get to a point where you say, I need something else, this launcher is making it a little complicated for me, you'd still be able to do that without either bloating the launcher or taking away too much of the defaults from the launcher and having the end user do what they do today with the SDK.
D
So that being a general theme of how we would want the launcher to look: again, if that extra package is a little bit too much bloat, then the launcher may not be for you, and that's okay, and we won't make it difficult for you to, say, drop the launcher and just go with the SDK.
B
Yeah, I think that's probably not the person we're targeting, someone who wants a slim binary or something like that, right? Yeah, I don't know, I just have that opinion: kind of what the autoprop package did is really where I would sit on this one, where they import everything that's in the standard OpenTelemetry repository plus the contrib repository, and then they allow for extension. So if you wanted, like, your own exporter that's out in the wild or something like that, then you could also add it.

D
Okay, makes sense. Okay.
B
I
could
just
see
from
like
the
user
story
here
right,
like
you're,
not
talking
about
the
the
developer,
who's
been
working
with
hotel
for
six
months
and
is
tasked
with
rolling
this
out
to
you
know
thousands
upon
thousands
rather
they're
doing
a
proof
of
concept
they're
exploring
well.
B
They
want
to
get
up
and
started
within
five
minutes
instead
of
an
hour
right,
and
so
you
know
maybe
they're
rolling
this
out
to
10
servers
so
that
the
memory
overhead
of
like
the
footprint
print,
that
they're
actually
going
to
be
using,
is
not
as
critical
to
them.
So
having
a
binary,
that's
ten
times
larger
is
I
don't
think
it's
gonna,
be
that
big,
but,
like
just
say,
it
is
like
ten
times
larger,
isn't
as
critical
to
them.
B
So
just
making
sure
that,
like
that
time
to
value,
I
think
is,
is
really
important
here,
like
the
hotel
launcher,
you
import
that
you're
up
and
running
in
five
minutes
versus
trying
to
you
know
the
sdk
like
I
who
who
was
it.
I
think
I
think
it
was
somebody
last
week
jamie,
I
don't
thought.
D
Yeah
right
exactly
cool
so.
B
One
of
the
other
things
also
jamie
that
I've
been
thinking
about,
so
I
was
tasked
with
a
few
months
ago
an
embarrassing
long
time
ago.
B
It's
a
standardized
configuration
with
a
file
in
open
slump
tree,
and
this
week
I've
actually
started
to
put
in
earnest
a
draft
for
an
otep
that
I
had
started
a
while
ago
and
I'm
seeing
a
lot
of
overlap
here,
and
so
it
might
also
work
that
there's
gonna
be
something
I
think
that
could
work
in
concert
here,
and
I
think
that,
like
this
is
going
to,
I
think,
playing
really
nicely
to
what
you're
what
you're
building,
but
essentially
it's
like
you
know,
all
of
the
things
that
you're
still
describing
are
going
to
be
configured
in
a
code
or
environment
variables,
but
we
might
also
want
to
consider
like
doing
a
proof
of
concept
here
in
it.
B
You
know
in
the
projects
as
as
this
is
the
thing
like
accepting
a
file
format
in
that
file
format,
I'm
trying
to
standardize
across
open
telemetry
in
general,
but
go
could
be
a
really
good
use
case
of
like
a
prototype
or
something
like
that.
But
this
this
launcher,
where
we
accept
a
file
format
and
it
we'll
be
able
to
configure
the
launcher
based
on
pipelines
or
something
like
that.
Yeah.
B
Maybe
I.
C
B
C
B
I
think
you're
right
josh,
but
I
think
you're
describing
the
next
iteration.
B
I
think
I
think
you're
describing
an
agent
an
actual
agent,
and
I
think
what
we're
talking
about
are
the
component
parts
of
that
agent,
and
so
that's
a
good
point.
I
think
that
maybe
we
should
keep
that
in
mind
as
we're
designing
this
launcher
as
to
like,
what's
the
next
iteration
and
how
it's
going
to
be
used
in
in,
like
a
a
full
featured,
you
know
cohesive
yeah,
like
somebody
comes
to
the
project,
and
it's
like
that.
B
Five
minute
goal
that
I'm
kind
of
talking
about
is
not
only
do
they
set
up
the
sdk
in
five
minutes,
but
their
source
code
is
instrumented
in
the
process.
Right,
like
that's,
really
shown
value
there
but
yeah.
I
think
that
that's
that's
a
that's
a
good
long-term
direction.
I
don't
think
the
launcher
is
going
to.
I
wouldn't
want
to
try
to
fight
that
off.
At
this
point,.
D
Yeah
yeah,
I
think
it's,
I
think
it's
good
to
ideally,
probably
like
start
small
and
kind
of
build,
as
we
see
what's
really
valuable
what
people
really
need
or
what
maybe
isn't,
isn't
as
important
or
ends
up
adding
too
much
but
yeah
like
the
idea
of
being
able
to
set
a
few
environment
variables
and
go
without
needing
to
add
the
extra
blocks
of
code
in
there
is
is
kind
of
a
big
part
of
the
goal,
and
that
is
something
nice.
D
I
don't
know
if
that's
josh
what
you're
referring
to,
but
the
java
agent,
where
I
can
just
say:
okay,
run
my
like
even
setting
aside
just
the
auto
instrumentation
but
being
able
to
run
the
java
agent
alongside
my
app
and
just
set
some
environment
variables
and
like
boom.
I'm
done.
I
don't
have
to
add
in
use
this
use
that
it's
all
in
the
environment
variables.
So
being
able
to
do
that
with
go,
would
be
really
nice.
C
So I'm glad I don't have to do that, but this conversation also reminds me that one of the first OTEPs in the entire OTEP lifetime, like number 42 or something like that, which Ben Sigelman, the Lightstep founder, wrote, was to say: OTel stands behind auto-instrumentation as a principle; we will do this. And I think in Go it's a little bit counter to the Go philosophy, as I was saying earlier, to do kind of magic stuff behind a context or a thread ID or goroutine ID, but at the same time, OTel...
C
I used that, for example, in justifying our original global API. The idea of a global is contentious, but if you're going to do something with auto-instrumentation, there's going to be some source that provides you that instrumentation, and global was as close as I could get without doing something more. I think there's an implementation-detail question for Go: there's no plugin support, really, there's no dependency injection, really, and magic is discouraged. So what are we left with? It's something that maybe the launcher can help us with, but even if you don't have auto-instrumentation, something's going to set up a global, and it's going to be done in the launcher, I guess I'm saying.
B
Yeah, and Jamie, to place this in the grand scope of auto-instrumentation or agents: the thing that I'm describing for the configuration would be as if you were to pass a file to the Java agent, like if it accepted a file format, a YAML file format, to define how it's going to set up its, you know, tracing pipelines, as well as samplers and all these other things we currently do with environment variables, and more.
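Purely as an illustration of that idea, such a file might look something like this. Every key below is invented for the sketch; no standard schema existed at this point:

```yaml
# Hypothetical launcher/agent configuration file (illustrative keys only).
tracing:
  sampler:
    type: parentbased_traceidratio
    ratio: 0.25
  pipelines:
    - exporter: otlp/grpc
      endpoint: collector:4317
    - exporter: stdout   # a second pipeline, which env vars alone can't express
propagators: [tracecontext, baggage]
```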
B
Yeah, especially if you want multiple pipelines; currently you can't do that. So that's kind of the goal, and I'm mostly behind on getting something out, but I'm actively working on it this week. So I think it may be something that we can include in this project. It also...
B
It might be too much out of scope, and maybe something we'd take on in the next iteration, but I also see it as: if we're gonna build a prototype for that, you would need to have these imports of everything, and it would have to be fully contextually aware of all the propagators, all the exporters, and it sounds very similar to what we're trying to do with this launcher. So that's why I bring it up, I guess.
D
Yeah,
that
makes
sense
all
right,
so
I
guess
right
now,
as
far
as
like
next
steps,
I'm
gonna
go
through
and
put
in
some
of
those
some
of
the
responses
to
the
feedback.
A
little
bit
more
information
see
that
I
think
I
think.
Generally
it's
relatively
straightforward.
I
think
there's
a
few
things
that
we
sort
of
agreed
with
a
couple
things
we
weren't
as
sure
about
and
maybe
some
things
that
we
just
need
to
flesh
out
a
little
bit
more.
D
But
I
should
be
able
to
add
some
comments
in
here
this
afternoon
and
then
I
need
to
take
a
little
bit
more
of
a
look
at
the
the
auto
prop
that
you
talked
about.
I
saw
it
got
merged
in
so
I
want
to
make
sure
I
understand
that
a
little
bit
more
so
I'm
speaking
intelligently
about
how
it
may
compare
or
how
we
might
use
a
similar
format
to
it,
but
yeah
so
so
I'll
go
through
and
update.
D
Some
of
those-
and
you
know,
keep
the
shared
config
in
mind.
Even
I
guess
thinking
about
that.
A
similar
idea
would
just
be
basically
those
environment
variables
right.
So
if
one
of
my
environment
variables
was
an
endpoint,
the
idea
was
that
my
config
vial
could
have
this
listed
in
there,
as
opposed
to
being
in
my
environment,
okay,
yeah.
B
And
just
a
heads
up
I'll
try
to
have
something
out
next
week,
just
kind
of
what
I'm
targeting
I've
got
like
three
quarters
of
an
hotep
together.
So
hopefully
this
won't
be
as
abstract
of
a
conversation.
Next
week.
D
Okay,
that
sounds
good
and
I'd
like
to
also
have
we'd
already
been
working
with
alex
at
lightstep,
and
you
know
kind
of
separated
out
examples
of
what
our
vendor
packages
would
look
like.
I
want
to
try
and
get
that
into
a
place.
That's
also
easier
to
point
to
so
as
you're
reviewing
the
dock
and
like
looking
at
that,
it
helps
paint
the
full
picture
of
you
know.
What's
what's
sort
of
right
now
the
idea
of
what
this
would
look
like.
D
So hopefully we have that this week as well, or at least, again, a proof-of-concept, work-in-progress sort of thing.
B
Awesome, that sounds good. Thanks so much for taking on this task. I know it's not easy, so I appreciate it.
C
There's an issue filed somewhere that's related to this question about whether you have to import everything. I wrote it, but I can't remember the number. It's saying, basically, that we're probably going to want to have a registry where you can underscore-import, like, my fancy special aggregator that I'd like to use from this launcher. For example, one of the minor features that I put into this release of the OTel launcher SDK recently was a min-max-sum-count aggregator. There's a history in OTel of talking about this, and it's not part of our standard. But if you look at the spec for the histogram, it supports the notion of optional buckets, so zero buckets equals min, max, sum, count in the data structure. That means all I need to do is load a special aggregator to get min, max, sum, count.
C
It's a little bit more optimized than loading a zero-bucket histogram. I don't know if that matters to anybody, but it's an example of what I might like to load: currently I have this behavior that I can't get from the default SDK.
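The equivalence being described is that a histogram aggregated with zero bucket boundaries carries exactly the min, max, sum, and count of its recordings. The small aggregator below is a hypothetical sketch of that data structure, not the launcher's actual implementation:

```go
package main

import (
	"fmt"
	"math"
)

// minMaxSumCount is a hypothetical aggregator illustrating what a
// zero-bucket histogram retains: no bucket counts, just summary fields.
type minMaxSumCount struct {
	min, max, sum float64
	count         int
}

func newMinMaxSumCount() *minMaxSumCount {
	// Start min at +Inf and max at -Inf so the first Record sets both.
	return &minMaxSumCount{min: math.Inf(1), max: math.Inf(-1)}
}

// Record folds one measurement into the running summary.
func (a *minMaxSumCount) Record(v float64) {
	a.min = math.Min(a.min, v)
	a.max = math.Max(a.max, v)
	a.sum += v
	a.count++
}

func main() {
	agg := newMinMaxSumCount()
	for _, v := range []float64{3, 1, 4, 1, 5} {
		agg.Record(v)
	}
	fmt.Printf("min=%v max=%v sum=%v count=%v\n", agg.min, agg.max, agg.sum, agg.count)
	// min=1 max=5 sum=14 count=5
}
```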
D
Right, and I guess that maybe ties into, and may end up creating more questions of mine as well: in our package, as I noted on one of the comments, we have a baggage span processor and a dynamic attributes processor, which add a little bit of nicety that may or may not want to be in the Go SDK. So I guess that would be a question as well: does that live in a vendor package? Because right now, right?
D
You know, the up-down counter, and I'm sorry, I don't remember the exact name of it, but if it lived in that package you would have access to it there. But I can see that being weird: if you wanted Lightstep's up-down counter and you wanted Honeycomb's baggage span processor, you're adding in extra vendors, and unnecessarily so.
C
I was thinking of a case where you've got a static package-level map from a string name to something, and you can underscore-register yourself, and that maps the string "minmaxsumcount". So you underscore-import that package and then it would be available in your configuration. But gosh, it sounds like this configuration is going to include practically a mirror of the complexity, or the feature richness, of the collector configuration, which includes pipeline setup and that sort of thing. I wonder how much this ends up looking like a collector pipeline. But that's cool; I look forward to it.
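The registry idea described here, a static package-level map keyed by string name and populated by side-effect registration so that a blank underscore import makes a component available to configuration, can be sketched in Go. All names below (`Register`, `Lookup`, the "minmaxsumcount" entry) are hypothetical, for illustration only:

```go
package main

import "fmt"

// Aggregator is a stand-in for whatever interface the SDK would accept.
type Aggregator interface{ Name() string }

// registry is the static package-level map keyed by string name.
var registry = map[string]func() Aggregator{}

// Register is called from an init() in the implementing package, so a
// blank import (`_ "example.com/minmaxsumcount"`) is enough to add it.
func Register(name string, factory func() Aggregator) {
	registry[name] = factory
}

// Lookup resolves a configuration string like "minmaxsumcount".
func Lookup(name string) (func() Aggregator, bool) {
	f, ok := registry[name]
	return f, ok
}

// --- what the underscore-imported package's init() would do ---

type mmsc struct{}

func (mmsc) Name() string { return "minmaxsumcount" }

func init() {
	Register("minmaxsumcount", func() Aggregator { return mmsc{} })
}

func main() {
	if f, ok := Lookup("minmaxsumcount"); ok {
		fmt.Println("resolved:", f().Name())
	}
}
```

In a real layout the `init` block would live in its own package, and importing it for side effects is the whole configuration step.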
B
Yeah, I do kind of wonder how close it will be to the collector. I'm also thinking that with the config, but I don't necessarily know if that's a bad thing. Okay, I think that's it for the agenda, outside of an open question I had about a release. I was thinking this week, but I don't know if it's going to get out this week. There are, I guess, since Anthony's not here, or Aaron, release milestones. This is all that's done in the main repo on the main branch.
B
The sum kind of stuff is, I think, the big one that's getting released in this, and then there are some small fixes, as well as the split metric transform, so it's an iteration on the schema that would be released for contrib. Let's see: I think it's dependency updates, using this semconv version, adding a new function. Actually, this one is kind of important: the TextMapPropagator function that was added to autoprop, so that, instead of using environment variables, you could just pass in strings, which is needed by the collector.
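The pattern being described is resolving propagators from plain strings passed in directly, rather than read from the `OTEL_PROPAGATORS` environment variable. The sketch below is a self-contained illustration of that idea, not the actual autoprop API; the `Propagator` interface, `known` map, and `compose` helper are hypothetical stand-ins:

```go
package main

import (
	"fmt"
	"strings"
)

// Propagator is a stand-in for propagation.TextMapPropagator.
type Propagator interface{ Fields() []string }

type named struct{ fields []string }

func (n named) Fields() []string { return n.fields }

// known maps configuration strings to propagators, the way a
// name-based lookup resolves strings handed in by a caller such as
// the collector. The entries here are illustrative.
var known = map[string]Propagator{
	"tracecontext": named{[]string{"traceparent", "tracestate"}},
	"baggage":      named{[]string{"baggage"}},
}

// compose resolves each name in order, failing on unknown ones.
func compose(names ...string) ([]Propagator, error) {
	var out []Propagator
	for _, n := range names {
		p, ok := known[strings.ToLower(n)]
		if !ok {
			return nil, fmt.Errorf("unknown propagator %q", n)
		}
		out = append(out, p)
	}
	return out, nil
}

func main() {
	ps, err := compose("tracecontext", "baggage")
	fmt.Println(len(ps), err)
}
```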
B
So this is kind of an important one. And then there was this last one that I haven't actually taken much of a look at, but I do see David's on the call, and I think you reviewed this. What are your thoughts on whether this is ready to merge or not?
A
I'm really hoping to get feedback from Anthony. To give background: the contributor is a Googler who's actually been very helpful in providing a lot of feedback on this project in particular.
A
In my review, not knowing that, I asked him to move stuff to a different package structure, and so we've gone back and forth a little bit on package structure. I'd rather not ask them to change again without Anthony's opinion, or someone else's opinion.
B
Okay, so if I'm hearing you correctly, we need Anthony's opinion, or somebody else to take a look at this and provide some context.
B
I think that's something I can do, especially if I'm going to be trying to get this release out. Do you think it's worthwhile trying to get this included in the release?
A
It can definitely wait; it's an additive feature. It certainly will be very helpful, because it lets you exclude health checks from gRPC, which are a really common source of noise. So as far as usability goes, it's a big win, but there's nothing urgent. You know, it'll catch the next train. So, okay.
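A minimal sketch of the kind of filtering being discussed: dropping gRPC health-check calls before they produce spans. The `isHealthCheck` and `shouldTrace` helpers are hypothetical stand-ins for whatever filter option the PR actually adds; the method prefix is the standard gRPC health-checking service, `grpc.health.v1.Health`:

```go
package main

import (
	"fmt"
	"strings"
)

// isHealthCheck reports whether a full gRPC method name belongs to the
// standard health-checking service, a common source of span noise.
func isHealthCheck(fullMethod string) bool {
	return strings.HasPrefix(fullMethod, "/grpc.health.v1.Health/")
}

// shouldTrace is what an interceptor filter would evaluate per RPC.
func shouldTrace(fullMethod string) bool {
	return !isHealthCheck(fullMethod)
}

func main() {
	fmt.Println(shouldTrace("/grpc.health.v1.Health/Check")) // false
	fmt.Println(shouldTrace("/my.service.v1.Orders/Get"))    // true
}
```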
C
It strikes me that this support is a lot like what metrics Views do, giving you a fine-grained way to control which spans are written out. It's interesting. I'm not sure what to say, more than: we have this complicated specification for metrics and we're taking a long time to get it done, but when it comes to span filtering you're on your own, and "there might be a contrib package for you" maybe isn't the greatest position to be in.
B
Yeah, that's something I'm hearing more and more across people's concerns. I know Tristan, in the Erlang SIG, is also concerned about this. He was wondering: if you essentially want to turn off instrumentation for a particular import of a library, how would you do that? There's not a really great answer for that either right now, yeah.
B
Yeah, I think it's a basis to start having that conversation, but still, like you're saying, a sampling question, or even an allow list or a deny list, would be something we eventually need to build into this. But okay, specifically for this PR: I will take it out of this release, and I will try to find some time to review it.
B
Josh has already reviewed it, and then we can try to progress and go forward on this one. Okay, I think that's it. Yes, that's it for the slated agenda. Anyone else have anything they want to talk about that's not written down?
B
Okay, anybody have some cool uses of OpenTelemetry? Not even specifically Go.
C
We had a hack week at Lightstep last week that had some exciting uses of OpenTelemetry. Although it wasn't Go, I'll share it briefly. So I've pitched this idea in the past: Prometheus does a great job doing service discovery, and it gives you the ability to get your target info, which is something that David's working on.
C
But the way you get it today is by using a Prometheus receiver, and that means you're going to try scraping a Prometheus target. I think there's a lot of value just in exposing the raw data from the Prometheus service discovery. I've been saying this for a long time, and we finally did it for the hack day, so I have a little demo. I'm trying to get my boss to let me open source it. Basically, it's three pieces.
C
It's three collector components. One is a receiver, which is like a butchered version of the Prometheus receiver that just outputs target info. The other is what I call a resource definition processor, and you configure it with,
C
basically, what key sets you're interested in joining on. In Prometheus that would generally be an instance and a job variable; in OpenTelemetry semantic conventions, maybe a service name and a service instance ID. And then the third component was this resource join processor, which took some regular expressions of, well, first, for the demo, the metrics that you want to modify.
C
So the idea is that this second processor is going to read the definitions that are written by the definition processor, as well as the definitions written by the service discovery receiver, and then join them together, producing roughly what you would expect from a Prometheus installation, but pushing data all the way, and only using that service discovery component, which is a huge piece of code and needs to be reused.
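The join being described, attaching service-discovery attributes to metrics that share the same identifying keys, can be sketched as a map lookup. This is a self-contained illustration of the idea with hypothetical names and sample data; the real processors would operate on collector pdata, not plain maps:

```go
package main

import "fmt"

// key identifies a scrape target the way Prometheus does.
type key struct{ job, instance string }

// discovered is what the service-discovery receiver would emit:
// target_info-style resource attributes per (job, instance).
// The entries here are made up for the example.
var discovered = map[key]map[string]string{
	{job: "checkout", instance: "10.0.0.5:9090"}: {
		"k8s.pod.name":       "checkout-7d4f",
		"k8s.namespace.name": "prod",
	},
}

// joinResource enriches a metric's attributes with the discovered
// resource attributes for its target, as the join processor would.
func joinResource(attrs map[string]string) map[string]string {
	k := key{job: attrs["job"], instance: attrs["instance"]}
	out := map[string]string{}
	for name, v := range attrs {
		out[name] = v
	}
	for name, v := range discovered[k] {
		out[name] = v
	}
	return out
}

func main() {
	m := map[string]string{"job": "checkout", "instance": "10.0.0.5:9090"}
	fmt.Println(joinResource(m)["k8s.pod.name"]) // checkout-7d4f
}
```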
C
It allows you to do relabeling, but only of the target info. So that was exciting to me, and I'm hoping to open source it so I can point at it more in the future.
B
To be added to the collector contrib, right?
C
Yeah, that's the idea. I'm just checking that we aren't going to embarrass ourselves if I open source it.
C
Actually, I'm curious about that, and I know I have unanswered research to do, because I was investigating both the Prometheus receiver and the Prometheus remote write exporter, trying to understand that issue we were working on earlier this week, David. I saw that there was a Kubernetes service discovery component being done for the resource, which might actually, I'd say, help me automate this join definition processor stuff. So, like, the definition...
A
Yeah, yeah. At least for the Kubernetes attributes, it should basically do what you're asking, in that if you use, like, the pod role for Kubernetes SD in the Prometheus receiver, you'll automatically get pod name and pod namespace as resource attributes coming out of it. And then, if you were to use a Prometheus remote write exporter, those would end up as dimensions on your target info metric, and you could...
C
Yeah, so I want that for my other plugin that just dumps that data into Lightstep as sort of zero-or-one-valued metrics. It's really cool, though, and that'll make it easy to automate.
C
So the idea is that the SDKs themselves would have some sort of initialization that bootstraps them the same way. If you can figure out your own Kubernetes attributes, then you just put a few identifying attributes on your own telemetry and push it to a collector. The collector's got the service definitions catalog, knows the definitions that you want to join on, and then it can do the rest: fully rehydrate your metrics the way a Prometheus server would, or not, if the backend can handle joining them itself.
B
Exactly, yeah. This is all really cool. Actually, I had thought about that for a while, but it's cool to hear that it's actually getting worked on, so I appreciate you sharing.
C
I just realized I have another one, speaking of Go, if anyone wants to hear it. Apache Arrow is a project that is mostly Java and other languages like Scala; Rust is popular there. There's not much Go representation, but there is a Go library. It came up because there's a contributor at F5 working on an OTLP form of Arrow. Basically, it would be column-compressed; it would fit in with Parquet files.
C
I thought it was cool, and it's a Go library, so you could potentially use that for a demonstration very easily, where you have a Go SDK outputting multivariate data streams. So, instead of, well, you could have raw data, basically, rather than aggregated data, coming out for spans, logs, and metrics alike.
B
That's cool, because... I think a lightweight SDK, with minimal implementation and minimal overhead, would be really cool, but I also think that there are a few other tasks to do first.
C
Yeah, I know that's not what I'm asking for, but I was thinking of, like, PHP, for example. I don't think you're going to see a complete implementation of Views in PHP, and you might want to do some manipulation of your metrics data. You might get a metrics SDK, but you might not get those Views options, I'm sure.
B
That is a really cool partial idea. So I think that's it, then. It looks like we have two minutes left, so I want to be respectful and make sure I don't nerd-snipe on that and dive into it too much. But thanks again, Josh, for sharing it, and thanks again, everyone, for joining. We will end it here, and then next week we'll see you all again, or virtually. Yeah, bye.