From YouTube: 2021-10-27 meeting
A
It was a bit too late. Yesterday I had a short day, so unfortunately I didn't have time to read it.
A
I think... tell us the conclusions, because we had three questions there. The questions were: what is the problem that we're solving, and should we consider disabling or not? I don't remember the third one, but I remember these two.
D
Yeah, so for folks that are not in the loop: this is adding a default user agent, with collector information in it, to the OTLP exporters. The problem that we're trying to solve: there was an original request from someone else to add more insight into this, and that's already been done for the X-Ray exporter, or some sort of Amazon exporter, for us as service owners.
D
We just really need that information to help customers debug stuff with the collector, because we've already had a number of instances where version upgrades broke stuff, or our service was expecting one proto version and the collector they were using was another proto version. So we were asking, "what are you using?", and they can't always tell us exactly what they're running, or they'll tell us the wrong thing. So that's kind of the motivation. As for the disabling part:
D
My opinion is that it shouldn't be disabled, because we're not really changing the behavior, we're just changing the value. In my understanding, the canonical way for users to control the outgoing user agent is through the headers configuration in the config, which currently works for the HTTP exporter and will continue to work. It currently does not work for the gRPC exporter and will continue to not work, but I think that's a separate concern to fix, if that makes sense. And then finally, multiple collectors chaining together really has no bearing on this, because it's only the outgoing client that sets the user agent; they've never been chained before.
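A minimal sketch of the headers override D describes, for the OTLP HTTP exporter; the exact endpoint value and header value are made up for illustration, and the field layout is an assumption based on the description above rather than a verified config schema:

```yaml
exporters:
  otlphttp:
    endpoint: "https://collector.example.com:4318"
    headers:
      # Replaces the default collector-identifying value on outgoing requests
      User-Agent: "my-company-pipeline/1.0"
```

As D notes, this override path applies to the HTTP exporter; the gRPC exporter does not honor it at this point.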
F
Go ahead. So, I was going to say: I think, Bogdan, last time you said, sort of semi-jokingly, that this sounds a little bit like tracing, and I wonder if we ought to lean on that further, saying: user agent has a well-defined purpose, let's use it for that purpose. If we want to do tracing-like things, let's use tracing.
A
Yeah, I know, I was joking about this. I'm just asking because I need to set the right expectation: so you will not follow up on doing any crazy joining or anything; we will just stick with the fact that we send the user agent. The only answer that did not correlate with my thinking is the ability to disable this behavior.
A
If
the
user
wants,
I
think
it's
important
to
allow
users
to
disable
if
they
don't
want
to
send
user
agent,
for
whatever
reasons,
if
they
are,
I
don't
know
what
institution
or
whatever,
and
they
don't
want
you
to
see
that
they
are
using
the
hotel
collector
and
they
just
send
you
without
this.
I'm
I'm
just
saying
that
I
don't
think
we
should
force
opt
out
sure
we
can.
We
can
have
this
enabled
by
default,
and
normal
users
will
never
disable
this
unless
they
are
paranoid.
But
I
think
we
should
give
the
users
that
option.
D
Yeah, I mean, I'm happy to do that, although I...
A
We can do it in a follow-up PR; we can discuss that. But I do believe it's important to give users the power of controlling this behavior. I know that right now, for HTTP at least, you can do it by overwriting with an empty value, but you said...
A
That
doesn't
work,
so
they
have
no
option
for
them
and
we
can
discuss
about
this.
But
but
I
like
the
motivation-
and
I
like-
I
will
look
at
the
pr
today
now
that
we
have
the
the
comments.
H
I think it currently applies only to the OTLP exporters and receivers... no, it's in the exporter...
A
...helper, I think. At least that's how I remember it.
F
It doesn't, sorry, it doesn't put the collector version in, as far as I know, but it does let you override the user agent.
I
And the Amazon Prometheus remote write exporter, and I think also the X-Ray exporter, does some construction of its own user agent information. But I think the way this was implemented was by adding a capability to set a user agent on the confighttp helper, which any exporter that uses confighttp could then utilize. What if we wanted to make that a default capability of the confighttp exporter, so that it always sets this unless a user or exporter author overrides it?
A
I will look into this and understand it better, but I think the agreement so far is: if we can do it for everyone, it's better, unless they disable it or whatever. So let's try to do that. I will look into the format of this. Anthony, how is your format different from this?
E
Okay, I spoke up merely because I think the W3C defines "user agent" as something which is not the thing that you're doing here. (Sorry, there's a bird on my head.) The agent is the one delivering telemetry, not the one presenting information to a user, which is why I call it a telemetry agent. And I agree with Anthony: the purpose here is to convey information about the sender of the telemetry.
E
For example, in Lightstep we already have a telemetry agent property that we use, because I would like to prioritize data coming from an agent over data coming from an arbitrary end user. So Prometheus sidecars are sending us data, and I know it's a Prometheus sidecar because it tells me so through some telemetry agent property. That's the type of use we have.
A
Eric has a PR, yes. And this is Juraci's fault, because I asked what order I need to merge these PRs in, and Juraci told me what I need to do.
H
Can you get me to the point again? I didn't comment about the user agent... oh, the comment, oh yeah, okay! So right, I guess the problem is that between the first PR and the second PR we discussed moving the builder into the core repository again, and the second PR here had a problem with not installing the builder in a way that it comes with the versioning.
H
So
when
you,
when
you
install
the
builder
using
go
install
it
just
compiles
the
builder
and
doesn't
contain
you
know
the
version
information
that
is
added
at
build
time
when
we
release
the
builder,
so
we
recommend
downloading
the
binary
and
when
you
run
the
downloaded
binary
and
you
run
like
oc,
ocp
or
open
temperature,
collector
builder
version,
it
spits
out
the
version
that
of
the
builder.
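The gap H describes, where a plain go install loses the release version, comes from link-time variable injection. A minimal, self-contained sketch; the variable name and flag below are illustrative, not the builder's actual ones:

```go
package main

import "fmt"

// version is meant to be overridden at release time, e.g.:
//   go build -ldflags "-X main.version=v0.38.0"
// A plain `go install` skips those flags, so the installed binary
// reports the placeholder below instead of a release version.
var version = "dev"

func main() {
	fmt.Println("builder version:", version)
}
```

Release pipelines pass the -ldflags at build time; go install does not, which is why downloading the released binary is the recommended path.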
H
Now, if we are moving the builder to the core repository, then this PR here would not have an effect; it would have to be reverted, right? So it's work that we'd have to undo to accommodate the builder move.
H
So
I
guess
we
kind
of
talked
about
this
already
eric.
But
if
you
have
any
specific
concerns
that
we
should
be
discussing.
A
I think you should propose a more concrete plan, Juraci, because you understand the problem better. Tell everybody exactly what the order of things is that needs to happen, and I think we can do them.
H
Yeah, so I don't know off the top of my head what all the steps are that need to be done. I think it's easier for me if I just draft a PR with the move itself, because there is one more complexity, and I think we can use the time here to talk about that, which is the releases repository as well.
H
So
originally
we
had
the
releases
because
of
a
timing,
difference
between
the
core
and
the
builder,
and
we
had
the
releases
now
that
release
now
that
we
are
going
to
have
the
two
of
them
together,
then
I
don't
see
a
good
advantage
of
having
the
releases
as
a
separate
repository.
A
True, and I think it should be there, but we can discuss that later. Let's fix this builder right now, because as a project we give you the framework, and we give you a builder to build on top of this framework, in one place and in one shot, with consistent versions and everything. So let's start with this, and then we can do the other things. So, for Eric and for Aaron: we need to wait for your merge PR, which will come probably today or tomorrow.
H
Sorry
come
again,
so
I'm
not
I'm
not
expecting
the
pr
4199
to
get
merged
in
anytime.
Soon.
H
Well,
that
one
that
one
can
be
can
be
merged,
it
would
just
remove
the
current
command
and
I
think
it
is
going
to
be
reused
for
for
the
next
iteration
of
of
how
to
build
the
collector
yeah.
So
the
only
concern
the
only
concern
is,
I'm
not
ready
to
make
a
pr
like
tomorrow
for
for
moving
the
builder
into
the
core.
H
I only have this and the authentication work in my queue, so I can start it as a top priority; I can start tomorrow morning.
A
Yeah,
it's
not
on
that
because
I
believe
the
move.
The
move
of
the
code
is
not
that
a
big
deal,
maybe
I'm
wrong,
but
I
think
I'll
give
you
famous.
A
Yes,
you
probably
want
me
to
add
credentials
for
for
for
this
repo
in
the
github.
I
will
do
that
the
docker
hub
credentials.
So
then
we
can
release
the
the
image.
H
So
I
yeah
what
I
have
on
the
builder
that
I
would
need
here
is
some
credentials
sure,
but
I
would
also
I
would
also
create
a
new
go
module
for
the
builder
itself,
so
yeah.
A
Think
that
can
be
incremental
the
the
the
smart
building
to
not
build
when
we
change
one
part
or
the
other.
I
think
that
can
be
incremental.
I
don't
necessarily
see
as
a
blocker
for
the
first
pr,
even
though
yes,
it
would
be
annoying
that
you
build
extra
things
that
you
don't
need
to
build,
but
it's
we
can
do
it
incremental
and
we
can
ask
somebody
to
help
us
with
that.
That's
true.
I
We have someone at Amazon who is starting to look at the contrib repo for doing precisely that: trying to identify dependencies of things that have changed in a PR, so that we can do partial builds. So if we're successful there, we can try to apply that to core as well, once the builder's moved in, yeah.
A
I'm
gonna
make
the
same
joke.
I
think
we're
gonna
be
right
based
on,
but
at
one
point
we
will
look
into
that
project.
Thanks
anthony
just
make
sure.
As
I
said,
we're
not
gonna,
we
shouldn't
do
too
much
work.
There
are
tools
that
already
does
do
this.
Maybe
maybe
the
solution
is
to
move
to
use
these
tools.
Instead
of
writing
our
own
tools.
F
If
we're
talking
about,
if
this
is
a
a
discussion
about
bazel
without
talking
about
bazel,
I
do
think
we
should.
We
should
look
at
the
the
experience
report
coming
out
of
kubernetes,
where
they
have
recently
decided-
or
I
guess
earlier
this
year,
they
decided
to
stop
using
bazel
because
of
the
friction
involved
in
maintaining
parallel
build
systems
with
a
community
where
other
people
were
not
bought
in.
So
I
think
we
should
at
least
we
should
take
that
experience
seriously
when
we
evaluate.
A
Okay, so I think we have a very interesting topic now. Then, if you are... oh no, I think we...
A
Okay: Premesh, Clement.
J
Yeah, I hope that this topic is interesting as well, and I think it's a quick one. So we have this functionality in the OpenTelemetry Collector where we associate a client IP address and store it in the context, and this information is later used by, for example, the Kubernetes attributes processor.
J
So
we
have
source
ips,
so
we
can
associate
pod
and
then
extract
both
information
and
include
it
in
resource
attributes
and
it's
a
quite
important
processor
for
kubernetes
workloads
and
now
the
problem
is
that
this
association
was
not
working
with
otlp
http
receiver.
So
I
made
a
like
really
quick
fix
and
it's
like
just
a
couple
lines
of
code.
But
what
I'm
wondering
about
is
if
we
could
add
some
more
capabilities,
maybe
into
consumer
test,
traces,
sync
or
somewhere
else,
to
to
make
it
easier
test.
J
...these things in the future. Or maybe we should think about some other solution. There's a note from Juraci that he was discussing this with you today, so I'm wondering what the takeaway is, because I recall we had some discussions about storing authentication context, maybe using some dedicated field or so. Perhaps this is along those lines.
H
So I guess my takeaway from the conversation that we just had before this meeting is that I'm gonna play with using authentication data within the client info. And the second step would be: if you've been following the authentication context PR, we are able to inject authentication data into the pipeline, into the pdata, right? Which means that we have a central place where we can add authentication data that is outside of the receivers and exporters and, you know, the components themselves.
H
So
what
we
can
try
to
do
is
inject
the
client
info
at
that
point,
at
the
same
place
where
we
inject
authentication
data
and
make
it
available
to
all
the
components
after
the
you
know.
After
the
the
the
data
is
received
by
the
first
receiver,
so
the
receiver
itself
would
see
the
client
information
and
all
the
processors
and
and
exporters
coming
after
that.
H
So
so
yeah,
so
that's
the
the
proposal,
that's
the
current
idea
and
if
it's
gonna
fix
any
existing
problems,
I
hope,
but
it
might
conflict
with
components
that
currently
use
the
client
information
or,
more
precisely,
with
components
that
inject
the
client
information
into
the
context.
A
So
I
think
that
pr
fixes
right
now
issue,
I
think.
Personally,
I
don't
know
if
that's
the
right
thing
jurassic,
we
discuss
about
more
generic
solution
to
have
this
enabled
for
everyone,
and
do
it
right
a
bit
more
correct
than
than
that
pr.
But
are
you
okay
to
because
I'm
asking
you
you'll
become
an
owner
of
these
problems?
So
are
you,
okay
with
me
merging
this
pr
and
then
fixing
this
with
the
things
that
we
discuss?
Is
that
okay
for
you,
okay,
yeah
then
I
will
merge,
is
not
my
problem
anymore.
A
Done. Now, this is the nice problem, and I like that Josh is here for it as well. We need to start adding semantic conventions for metrics, more and more, not just for process metrics, and it would be good to have a way to do this consistently, maybe even thinking of using the tool...
A
The
the
semantic
generator
tool
convention
generator
tool
that
we
have
for
for
tracing
and
stuff
make
it
work
for
metrics
and
have
those
defined
in
yaml
files,
which
would
be
great
because
then
we
can
reuse
the
same
files
in
our
metadata
yaml
that
we
we
have
then,
instead
of
having
duplicate
the
the
information.
C
Yeah, I think there was a good conversation in the specification meeting about rolling that out, and there's a plan to do so; I forget who's actually doing it. But I just wanted to raise this particular proposal to the collector group, to put it on people's radar, because it has an impact on the collector.
C
Basically, the data model is slightly different and the names are slightly different, so this could potentially impact people if we decide to establish one convention, which I think we probably will, and then I think it's likely the internal telemetry metrics will be renamed. Anyway, I'm just putting this on everyone's radar in case it impacts you. Please take a look, and if you want, click through to the underlying issue: you can see some analysis of what the different metrics are, how they're collected, and how they compare to each other in terms of their actual meaning.
C
Yeah, so my objective with this proposal was basically to establish the least common denominator between the two, or rather the union of the two, in a compatible way. So I just took what was there: the host metrics receiver is only collecting those three states, and the internal...
A
So it's very system specific, and that's why I'm asking why you chose these three. I think we document it as three for whatever reason, but I know for sure it collects more labels. It also collects the CPU core information for the system, so we actually have multiple different CPU metrics: one is a system CPU metric, where you get all the information about the entire system from the kernel, and one is specific to a particular process.
C
Yeah, that's correct. So I think that, aspirationally, they would be the same, but of course we're going to take whatever nuances are involved with either one. In this case, I think the states are a little different, but the naming, the units, the descriptions, and so on can align pretty closely.
C
There's
also
this
interesting
nuance
with
this
that
particular
metric
where
in
the
internal
telemetry
it's
it's
only
asking
for
a
total
cpu
time
and
in
the
other,
in
the
host
metrics
receiver,
it's
breaking
it
down
by
state
and
so
there's
actually
like
a
metrics
data
model
in
question
here,
which
we've
called
out
on
this
pr.
That's
basically,
what
is
the
appropriate
way
to
handle
that
the
the
aggregation
of
total
with
subtotals
makes
this
value
not
meaningful,
so
we
want
to
avoid
that.
I
think
so.
What
do
we
suggest
to
people?
C
That's not my objective with this PR. I think I agree with the objective, and I'm happy to be involved in that effort, but my immediate objective here is just to establish what these conventions should be and capture them officially.
A
No
makes
total
sense
josh
do
you
have
any
recommendation
of
what
we
should
do
here.
E
You're
asking
me
how
I
feel
about
this
labeling
question
in
the
cpu
convention
yeah.
E
I
just
read:
I'm
catching
up
on
it
right
now,
but
I,
the
premise
here,
is
that
we
may
or
may
not
want
extra
metrics
when
we
have
different
dimensions-
and
I
I
mean
I
believe
from
the
perspective
of
the
data
model-
we've
designed
it
all
all
along,
so
that
you
could
have
optional
dimensionality,
and
so
I
would
expect
that
single
label
case
to
be
omitted
and
be
implied.
And
then,
when
you
have
optional
labels,
you
add
them.
I
think
I'm
supporting
the
pr
the
way
it
stands.
A
Yeah,
so
the
only
rule
that
we
need
to
apply
then
is
the
following:
whenever
we
have
a
metric
with
multiple
data
data
points,
we
should
not
include
data
points
with
different
labels
in
that
metric,
because
otherwise
we
don't
know,
we
know
no
longer
gonna
know
how
to
aggregate
them.
So,
for
example,
we
should
not
include,
let's
say
three
points
with
state
equals
user
state
equals
weight
and
state
equal
system
and
another
point
with
no
state,
which
is
the
total,
because
at
this
moment
any
normal
aggregation
that
we
will
apply.
A
For
example,
if
we
say
reduce
this
label,
we
will
count
double
the
total
that
we
will
calculate
will
be
double
because
of
that
of
presence
of
that
total.
So
you
either
choose
at
the
metric
level.
You
either
you
make
a
decision
whenever
you
send
a
metric
proto
to
have
one
label
zero
labels
or
two
labels.
I
don't
care,
but
you
are
consistent
across
all
the
points
that
are
part
of
the
same
metric.
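The double counting A describes can be shown with a toy aggregation; the state names match the example above, and the values are made up for illustration:

```go
package main

import "fmt"

// point models a single metric data point with an optional "state" label.
type point struct {
	state string // "" means the label is absent (i.e. a pre-aggregated total)
	value float64
}

// dropStateLabel mimics a naive "reduce this label" aggregation: it sums
// every point, which is only correct when the points partition the
// quantity (no pre-aggregated total mixed in with the subtotals).
func dropStateLabel(points []point) float64 {
	var sum float64
	for _, p := range points {
		sum += p.value
	}
	return sum
}

func main() {
	subtotals := []point{
		{"user", 10}, {"wait", 5}, {"system", 15},
	}
	// Consistent labeling: the reduced value equals the true total.
	fmt.Println(dropStateLabel(subtotals)) // prints 30

	// Mixing in an unlabeled total point doubles the result, which is
	// why all points in one metric must share the same label set.
	mixed := append(subtotals, point{"", 30})
	fmt.Println(dropStateLabel(mixed)) // prints 60
}
```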
E
Yeah, I think you could actually add something to the data model to say that; essentially, it's not explicit.
E
And just to summarize: if the data point is a sum (cumulative or delta, asynchronous or synchronous, it doesn't matter), the implication of removing a label should be that you add up those points. That explains everything you just said earlier, as well as a few other behaviors, and it's implied by the label erasure rules, the single-writer property, and so on.
A
Yeah, okay, so I'm fine with this; I will read the PR after this meeting. Thanks for doing this, and I think maybe during the next spec meeting we should discuss the plan to make the YAML configuration for metrics happen in the future.
B
Hello everyone, I just have a quick question about the telemetry schema. I'm trying to understand: if I have multiple sources emitting traces in different schema versions, how do I unify them into the one schema version that my backend expects? I see there is a note saying that there is a planned or proposed schema translator processor, which translates the schema from one version to another, but I can't find it. So how are others dealing with schema unification?
A
So I think the schema concept is very new; we just added it two months ago or something. To be honest, we don't even set the right schema all the time, and I think that should be at least the beginning of the story: whenever we are a producer of the data, of pdata, we should set a schema for the data. That's the first thing that we need to track, and maybe this is good...
B
Okay, so does this mean that currently we could see something like this: let's say we have a tag with the name k8s.cluster.name, and then in another version it comes up as kubernetes.cluster.name. So we have two tag names with the same value, and the querying might become a bit noisy. We are thinking about how to solve this; I'm just trying to understand.
A
I don't think there is anything for that right now, but Eric, you are raising your hand.
G
I'm trying to be a thoughtful member of the chat, because I'm really bad: I talk over people. Yeah, I looked into this; I probably bugged Tigran, and maybe Anthony, about it like two weeks ago and then didn't follow up. But as far as I know, the precursor work for the schema translation processor is going on in opentelemetry-go: a parser, some code to parse the schema, and I think that's actually merged.
G
So
some
of
the
the
precursor
work
is
there.
Having
implemented
this
in
ruby,
I
think
there's
a
lot
to
be
desired
in
in
practice.
It's
somewhat
useless
and
for
context.
We're
handling
this
stuff
by
just
like
handling
it
via
custom
exporters,
where
we
just
manage
all
these
translations
ourselves
with
ad
hoc
code.
So
you
know,
I
think,
yeah
I
think,
there's
I
wouldn't
count
on
this
being
like
a
in
collector
component
in
the
immediate
term.
B
Okay, and a follow-up question on this: why is this conversion wrapper part of the OpenTelemetry Go client library? Because ideally it should be part of the collector, right? So if we start... yeah.
G
Anthony might have more context here; I don't know why that's where the work got done.
I
Yeah, so I think the expectation was that it would be a module shared between the Go client library and the collector. The Go client library is going to need it as well, to deal with resource detectors that have different schema versions.
I
Because currently, if you try to merge two resources that have different schema versions, it returns an error and says, "there's nothing I can do with this; here, have an empty resource instead," and we want to avoid that. We want to ensure that if we have detectors, or user-provided resources, that are using different schemas, we can get them all to a common schema version. So that's going to have to be implemented in the Go SDK, and that's why we started there.
G
Makes sense. I was actually confused, not to ramble, that the merge is an error, that the resource merge itself is an error, and not just that a warning gets logged about the schema URLs being merged while the attributes still get merged into a new resource with an empty schema URL. Was there a strong...
I
The spec specifies that a resource merge with two different schemas is an error and the resulting resource is undefined. In Go, we've chosen to implement that as a new, empty resource: we simply don't know what to do, and so we do the safest thing, which is nothing.
G
The
sdk
does
some
like
default
resource
creations
that
like
grab,
I
don't
know
like
service
name
or
stuff
like
that
which
is
like
technically
a
semantic
convention,
and
so
if
a
user
were
to
use
any
sdk
sort
of
like
default
resources
and
then
attempt
to
add
their
own
schema
url
they
would,
and
it
would
you
know
any
resource
they
attempt
to
create
would
be
a
an
error.
G
So instead, what we have to do is create these default SDK resources without any schema URL, even though technically they should point to one. You know, even service.name could change; it's unlikely, but I don't know. So that's the kind of thing I've found, yeah.
I
I think that's the same choice we made in Go as well: we simply don't put a schema URL on those stock resources, and the same goes for resources coming from the environment, because we can't know what schema they're using. So it kind of avoids the problem for now, but I think that's why it's all the more important that we build the capability to translate between schema versions and have an ability to get to a common schema version.
G
I think I'm in the next one, actually. Am I? Sorry, I'm making a sandwich. Oh yeah, I'm just... does anyone know offhand of a processor in the community which operates on both the resource attributes and the span attributes? I know there's a few, but if there are any that are well maintained and Apache 2 licensed, I'd love to know.
G
I
have
to
do
some
custom
stuff
and
it
seems
like
a
common
use
case,
but
I
couldn't
find
anything
that
really
supports
it.
What
do
you
need
to
do?
Transformation
on
both
or
the
context
is
like,
given
some
resource
attributes
on
you
know
a
set
of
resource
bands
set
or
modify
span
attributes.
G
So
I
you
know,
like
let's
say,
there's
certain
resource
attributes
that
would
determine
whether
you
need
to
redact
or
drop
certain
span
attributes,
because
those
resource
attributes
could
indicate
that
the
service
is
a
particularly
sensitive
service
or
that
it's
you
know
some
you
know
contains
information
that
can't
be
stored
in
a
back
end
or
whatever.
I
couldn't
find
anything
like
that.
Yeah.
A
So there is this effort that Anurag and Punya and everyone are leading... well, not everyone.
A
Those two, yes. So we want to have a unification of the transformation processors, and one part of it is starting with the filtering configuration, or selection configuration: how do you select the spans that you want to change or modify? And that will include the capability of saying "has some resource attributes," or something like that.
A
Has
some
instrumentation
give
the
name
and
has
some
attribute
itself
or
has
a
name
itself
or
something
like
that,
so
you
will
be
able
to
select
the
span
that
you
want
to
change
based
on
all
these
informations
and
then
and
then
do
some
transformation
at
the
span
level.
I
think
that
will
be
soon
available.
G
No worries, I'll keep an eye on the issues. Okay, thank you, Bogdan, and everyone as well, as always.
I
So there was a question that came up in the Prometheus working group in the prior hour, which I asked them to bring to this SIG, regarding security warnings that they're getting from Dependabot in a fork of the collector and collector-builder repos. It looks to them like we don't have Dependabot vulnerability warnings enabled. I know we keep on top of merging Dependabot-suggested changes, but there was some concern that there may be vulnerabilities in some of our transitive dependencies that are not being caught.
I
It was in their fork of the repository. I believe it related to a MongoDB client library, which is probably a transitive dependency that the builder has through the collector or one of its components. So it's hard for me to say what exactly it was without them here to show it, but...
A
Right now we have only one Dependabot alert, which is marked moderate severity, and, just for fun, it's in the AWS Prometheus remote write exporter, so... oops.
A
So we do have them enabled, that's what I'm trying to say. And, as I mentioned, we do have one alert, but it's not related to MongoDB; it's related to JSON.
A
We deal with a lot of these. We have different back channels where, even in the TC, people like Armin or even Yuri point us to these security things, and we treat them very seriously. One of the things we did: even the move to Go 1.17 was pointed out as a vulnerability fix, and we did it in a couple of hours. So I don't know what problem exactly they see.
I
Okay, yeah, that aligns with my expectations; I just wanted to make sure that we close the loop on it. I wish they had been able to come and report their experience, but I think this addresses my concerns.
A
Even if you did not enable it, it doesn't matter, because you moved the code to the collector, where we have this enabled, so yeah. But...
A
Okay, how did you enable it? Or how did you see that it is not enabled?
A
Okay, thanks, Anthony, for bringing this up. Yeah, I think we should treat all security issues as very high priority, and we should always upgrade all the dependencies and fix all these problems if we see any.