From YouTube: 2022-08-31 meeting
Description
Open cncf-opentelemetry-meeting-3@cncf.io's Personal Meeting Room
B
All right, I think we can get started; I mean, it's five past. And we have, I guess, a few items on the agenda. The first one is from Tyler, I suppose: decide about the context scope for metrics in TQL.
D
Before jumping into this: this is going to be a bit of a long conversation. Should we go over the other one and then come back to this?
C
Yeah, so the headers setter extension: pretty exciting stuff, multi-tenant support, really cool. I was wondering if there are plans, or possibilities, to add other ways to update the headers.
C
Or is there a way to, like, grab from an attribute? I don't really like the attribute idea, because secrets on attributes aren't good, but are there other options to extend that capability?
D
Yeah, so one option, Juraci, you can correct me, but one option is: you can have a processor that mutates the context, and then in the extension you can read from the context.
C
With the context: does that mess with the actual incoming request context, or would we be able to set something extra, not something specific, but something extra that the extension can then look at in the context and change? Or do we have to go mess with the incoming request headers?
D
You have to mess with the incoming request headers. The reason is the way context works in Go: you create a private key, which usually only you, or very few, have access to. So the extension will not have access to that key, but I think the extension already has access to the incoming headers.
D
Another option is, if we want, we can use baggage from OpenTelemetry, the Baggage API. We can set a baggage entry and retrieve it. I think that's actually a better use for this, maybe.
C
I think I like the first idea better: messing with the context, getting the headers updated, and then pulling from the headers. We don't want that piece of information to end up in the telemetry package that gets sent across; I don't think we want to put it in baggage or an attribute or something like that.
D
The good thing with our key is that it is not propagated like baggage or anything; it's just part of the... yeah. And then, for example, we can even take care in places like batching and such to merge them together and do something smarter based on this. If we invent our own thing, that's an alternative which I think will work better, because you are using a random key, a random thing.
D
By the way, in order to approve this, I think we should first document the couple of use cases that we just discussed, and then please provide a rough API of these things that we are building, so we can agree on that; then we should be good to go.
C
Sounds perfect, thanks.
C
All right, sweet. I don't know when I'll start working on that, but it's something that someone at New Relic, or me, will work on.
B
All right, so make sure to copy me on whatever comments you make, and copy Ruslan as well. So, Ruslan...
B
Oh yeah, absolutely, yeah. So we can continue the discussion on the issue itself. All right, the next one that we have is... so, would you like to? Sure.
A
Yep, I think you guys have seen me in a couple of the SIG meetings. I've been going to the triager meeting to help triage some issues in the contrib repo. I've got an issue open, so take a look at it; I'm just looking for input there. I know that there's also a call for triagers in the core repo, and I'd be glad to help out there too. I haven't been involved there, but I would be glad to help out. Either way, please take a look at the issue.
A
Yeah, so I've made a few contributions to contrib for the Aerospike receiver, as well as reviewing PRs that came from other devs. After that I realized I never actually applied for membership. I think I'm all set, except I do need a sponsor from outside my company, so I still need to find one.
D
Okay, I couldn't hear everything, but I saw you in a couple of meetings, so I'm happy to support it.
D
Then do we have an issue for that?
A
We don't yet. I want to make sure I have one so I can ping them. I'll make the issue and do that.
D
Okay, I think this is the last topic, so we come back to the discussion between me and Tyler.
D
So, what's your question? Yeah, let me give you guys some context. This is about the TQL, the transform query language, the framework that we built. Within it we have a notion of a context, which means the context is the unit, usually, that triggers the conditions that we have in the query language.
D
So essentially you will say: apply this operation where this happens, and that unit is the unit that triggers this. Now, the unit for traces is the span, so the context for traces is a span, in which we can say: if a span has this name, do this action. Now, the problem with this is that inside the...
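The span-scoped condition described above might look roughly like this in a transform processor configuration. This is a sketch, not the exact syntax of the time, and the attribute and span names are illustrative:

```yaml
processors:
  transform:
    traces:
      queries:
        # The span is the context: the condition and the operation
        # are both evaluated against one span at a time.
        - set(attributes["flagged"], true) where name == "login-attempt"
```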
C
Correct. So right now the metric is exposed as a virtual field, similar to the resource and instrumentation scope.
C
So you can do things like: perform this function on a data point if the metric name is A, or whatever. But you're right, interacting with all the metric's data points at one time is tricky. I don't think we could support an aggregation function right now, because essentially we would say: for each resource metric, for each scope metric, for each metric, for each data point, aggregate all the data points, which doesn't make any sense.
C
Would that replace the metrics transform processor? Absolutely, yeah. The flip side: so that's what we can't do today, right? If we go at data points, it's hard to aggregate, because we would be doing it per data point, and that doesn't make any sense. On the flip side, if we were at a metric, with the current implementation of the library it gets really hard to change only individual data points based on the values of those individual data points.
C
So if you had a scenario where you wanted to mess with, or add an attribute to, a subset of the data points of a metric based on a condition: well, if that condition is at the metric level, it's going to be hard to say "only apply this function to this data point when this condition is met", because maybe that condition depends on the actual individual data point, yeah.
D
Yeah, I agree with you, and I think it's also important to identify a couple of use cases and see how we can do them in both cases. I mean, definitely the aggregation is impossible to do at the metric level, agreed; but the mutation of individual points, for example: if I want to drop one specific data point, for whatever reason I want to drop a data point, or things like that, let's see how we can... what.
C
Yes, that's what I'm thinking right now as well. My gut is saying that... right now we have the traces context, the metrics context and the logs context. My gut is saying that we're going to have to expose a data point context that knows how to work on data points, and a metrics context that does things at the metrics level.
C
I'll definitely do some experimenting to see if we could just stay with one context, and if we can't, I don't think multiple contexts is a bad solution. The whole concept of the telemetry query language, or one of its key properties, is that you can pass in any context you want. So providing an extra context is a valid solution.
D
Yeah, the other thing is, in the future we may even have a context at the scope level or the resource level. Yeah, we could do that. Another thing: now, that being said, what I'm trying to suggest now is, between the data point and metrics, to be honest, my decision would be to go with metrics first. The reason I'm saying that is: I know it's ugly to mutate data points in this model, but it's possible, versus with the current one.
C
There may be some impossibilities as well if we're going from metric down to data point. I need to test it out; trying to think through it without actually writing any code for it yet, just mentally in my head: I think there are certain situations where users today have the option to set conditions based on specific data points, which we couldn't do if we had it at the metric level. But I need to go try it, like you were saying; I just need to go try it.
D
Yeah, the other thing is, right now in the processor we have three sections: traces, metrics, logs, I think.
C
We would list out the contexts, which, I mean, technically right now it is the context, because the transform processor, the way that it passes the context into the telemetry query language, is totally separate from what context the query language package exposes. So if we wanted to add a fourth one in there, I guess we could do spans, logs, metrics, data points; that's totally independent of the package. We can do that, yeah.
C
That's probably... I can't speak for sure whether we allow the other thing to happen; I don't actually know how it's being used today, yeah. But yeah, I do know of some that are in the transform processor specifically that are kind of funky, but meet some customer needs of transforming some incoming Prometheus data that isn't formatted correctly.
C
All four of those functions add new metrics to the list, and I don't see why they wouldn't function anymore if they were at the metric level. So adding functions, aggregation functions, I think that's all appropriate to do at the metric level versus the data point level. Although I do think one of the functions creates a new sum metric based off of a summary data point's count or sum value, but that could still be done at the metric level.
C
I think it would be easier to do at the metric level. It's also possible that the transform processor just shouldn't deal with that level of manipulation, and dropping should be handled by the filter processor. That's a possible outcome as well, because I didn't really like what I wrote up for the transform processor.
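For comparison, dropping whole metrics is already the filter processor's territory; its configuration looked roughly like the sketch below at the time (metric name illustrative; exact options may differ by version):

```yaml
processors:
  filter:
    metrics:
      exclude:
        match_type: strict
        metric_names:
          # Metrics matching this list are dropped from the pipeline.
          - some.metric.to.drop
```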
C
Yeah, and there's actually an issue open right now about using the telemetry query language package in the routing processor, which I thought was really cool to see. But right now the solution for that is that we have to have some no-op function, because the package has no concept of how to parse a query without a function; we need to expose just the condition. So there's an issue open for that as well, and once Evan's logs refactor is added...
D
Yeah. Even for drop: whether we do drop as a function inside the transform processor, or implement it as a standalone thing in the filter processor, which I think we can do, I would still like to use the conditions, the where clause, to decide which items to drop. Everywhere, we would like to use the conditions from... yeah.
C
The conditions are super, super powerful for any processor, so working towards that refactor is at the top of my list right now, right.
D
And now, thinking about that, I think people would like to have conditions at the resource scope, the scope scope, the log records. I think we will end up... actually, probably the right thing to do is to refactor our stuff and have context support, which we... yeah. Now we have three, but we should add more, even.
C
Yeah, yeah. You can technically get away with doing stuff at the resource level; you could say, only perform this function if a resource name is whatever. But what's going to happen, at least the way the transform processor implements it, is that for that resource, for all the scopes, for all the whatever, it's going to do that every time and check the condition every time, unnecessarily, yeah.
D
We have two options: we either stick with the interface, but then I would not expose a GetItem; so your interface is one that everyone converts to whatever struct they want. Because right now, the reason is, if we keep a GetItem, you can still have a struct there, because the item, the interface, any item that we have right now, can embed anything.
C
I do think we'll have to have some sort of accessor functions for whatever context we make, because otherwise the generic functions, which all take a generic transform context, wouldn't be able to grab any data without a cast; and any explicit cast to a context kind of moves that function into only that context's capability. So we do have to have some sort of accessor functions; it's just a question of how many and what they are. That's what we can figure out; we'll figure it out, yep.
B
Excuse me; so if you record the decisions in a doc or somewhere, I'm sure that Ruslan can, and he wants to, work on that stuff. Okay.
C
Perfect. Some of these decisions are also far out; there are stepping stones. Like where we started: switching from data point to metric is the more reasonable thing to do now. For some of those other things that we just talked about, there are a lot of in-between steps that have to happen first, yeah.
D
Yeah, so I think, let's start with the following. Let's start with the idea of having four contexts, which is: one context that works on metrics, one context that works on data points, and then start from there, see how that goes, see where it moves us, and we will understand what we need to agree on.
D
So the condition will work on metric; the function, the function may work on the scope. Okay, the condition has to be on metric, because you will want to say: where metric name, or meta-something, and maybe doesn't have points, or has points, or whatever things we offer.
D
Wow, I don't think we should do that, then. I think that's way too advanced, and if somebody wants to do that, they should write their own custom processor. Usually we have this tendency of generalizing a lot, which is good, but sometimes it comes with such a big cost for the common case that I think, if anyone wants this, we should write a specific processor, or they should write their own custom processor for that. Does that make sense?
E
The problem here is that we are mixing the contexts, just data point and metric; that's the... yeah, okay.
C
Well, I think that might end up being possible; it just depends on how we implement things for the drop case.
C
In order for the function to do the dropping, it has to be at the right level, like you were saying, Dmitrii; so the function might have to take effect at the scope level. It'll be an interesting one, but I don't love what I wrote for the current implementation; I wouldn't want to check that in, it's pretty trash. So people may have to stick with the filter processor, which can drop. Can the filter processor drop metrics right now?
C
We're working towards that, okay. Speaking of working towards condition-based drop: if you want to take a look at Evan's logging PR again, that would be much appreciated, because that's the first step towards exposing conditions without functions.