From YouTube: 2023-03-22 meeting
Description
Open cncf-opentelemetry-meeting-3@cncf.io's Personal Meeting Room
A
So there was a proposal to add this. Okay, let's start again: there is a SQL query receiver that allows you to run queries against a database and interpret the results as metrics, and it seems that it makes sense to also allow running queries and interpreting the results as logs. Do you think it would be a good idea to add that to this receiver?
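(For context, here is a minimal sketch of what the two modes could look like in a Collector configuration. The `metrics` shape follows the existing sqlquery receiver; the `logs` section is purely hypothetical, illustrating the proposal being discussed, and is not a confirmed API.)

```yaml
receivers:
  sqlquery:
    driver: postgres
    # Datasource string is illustrative only.
    datasource: "host=localhost port=5432 user=otel password=secret database=app"
    queries:
      # Existing behavior: interpret query results as metrics.
      - sql: "SELECT COUNT(*) AS cnt FROM orders"
        metrics:
          - metric_name: orders.count
            value_column: cnt
      # Proposed (hypothetical shape): interpret query results as log records.
      - sql: "SELECT body, severity FROM app_logs"
        logs:
          - body_column: body
```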
D
I think it's a good idea; I'm not sure what the payload would be, or how we would translate the logs.
D
But please check the events specification, and we should be aligned with that.
F
All right, my issue is next: I just wanted to propose using milestones for each release, to make it a little bit easier for a user to find what went into which release. I was talking to a customer who was wondering when certain PRs or certain fixes went into a particular version of the Collector. They can use the changelog, but that means people have to go backwards through it, whereas if they know what PR or what fix they're looking for...
C
That's for in-progress PRs, because merged PRs will always have an annotation with the first tag that they got tagged with. So if you go to a merged PR, you will find the first tag that it was marked with.
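(As an aside, the earliest tag containing a merged PR's commit can also be found locally with git. A minimal sketch; `MERGE_SHA` would normally be the PR's merge commit, and is set to the repository's first commit here only as a placeholder so the commands are runnable:)

```shell
# Earliest release tag that contains a given commit.
# MERGE_SHA is a placeholder; substitute the merge commit of the PR in question.
MERGE_SHA=$(git rev-list --max-parents=0 HEAD)
# List all tags whose history includes the commit, then take the lowest version.
git tag --contains "$MERGE_SHA" | sort -V | head -n1
```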
C
Yeah, in a comment, or on the PR, whatever, but it tells you which tag is the earliest tag available for this. So is that what they are looking for, or is it something else?
F
How will someone know what release it's going to be a part of? I guess without some kind of milestone it's hard to know that. "Oh okay, this PR got merged just two days ago and there hasn't been a new release; therefore it's going to go into the next release," right? Whereas if it just said "this will be part of 0.75.0," then at least I know: when that release comes out, I'll know to look for my fix, or whatever.
C
I don't want to create a label per release. Sorry, another possibility is to have a "pending release" label and then remove it: we add "pending release" to every PR that we merge that is not yet released, and then when we release, we remove that label.
F
Yeah, I like milestones because they don't really have another purpose in our project: we would use them just for this, rather than for the usual idea of accomplishing a particular size of project or whatever. We already use labels pretty heavily in a lot of different ways right now.
G
Bogdan gave me some feedback, and then I'm waiting for answers; I have questions. I think this is important and I'm willing to do whatever is asked of me. I'm just waiting for feedback on that.
C
So I want to hear others' opinions. I'm struggling a bit to convince myself that people will not hit that metadata high-cardinality problem.
G
Especially in my use case: Lightstep users use a single token for their authorization of data, so I'm expecting it will be low cardinality, and it would be pretty rare for there to be a cardinality explosion of this particular attribute. And, as I've documented, I think it's something the user could actively protect themselves against if there was a threat of that; a different mechanism, like an auth extension, would help. But the use case that I have pending is the OTel Arrow work: it will be a bridge that you can send your OTel data to, which will then propagate compressed data across the wide-area network cheaply. I need to propagate the metadata, and I also need batching for that compression to perform well. So without this support I really can't get that to work. Cardinality is really not a concern of mine, not at all.
B
I'm coming in with zero context, but I'll just play the useful idiot; well, I'm not playing it, I am one. Okay: is the concern not necessarily that your use case will have cardinality, but that exposing this interface to users would create, you know, a potential foot gun for users? And is that something we could solve? Is that the pushback, or is there another question?
G
That was one of the questions; there were two. One was: do we need to send out the metadata after batching? You could imagine a scenario where you batch by metadata but don't propagate it; I'm not convinced, and I can't think of a use for that myself. The second piece of feedback was: should there be a limit in case of cardinality explosion? I'm sort of less concerned about that; the collector will OOM eventually. All I did was take the current batch processor design and split it up by metadata.
G
So if you have a thousand keys, you'll get a thousand batchers, and eventually you'll run out of memory. I would be willing to put in something simple like a hard limit: if you get to 100 batchers, I'm going to start refusing data and returning errors, essentially. That would be easy to do, but that's a different sort of foot gun. I don't know.
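(For reference, a hypothetical sketch of how the batching-by-metadata proposal discussed here might look in a Collector configuration. The option names `metadata_keys` and `metadata_cardinality_limit` reflect the direction of the proposal but are not confirmed by this discussion:)

```yaml
processors:
  batch:
    send_batch_size: 8192
    timeout: 200ms
    # Split batches by client metadata, e.g. a per-tenant auth token header.
    metadata_keys:
      - lightstep-access-token
    # Hypothetical hard limit on distinct metadata combinations
    # (i.e. concurrent batchers) before new combinations are refused.
    metadata_cardinality_limit: 100
```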
B
Okay; you know, I'm not an approver or maintainer. It feels like something that could be solved with documentation or, you know, some hard limit, but I have no other feedback.
F
I'll throw in one quick note. Yesterday I was running through some dependency checks and I noticed that the new version of Jaeger now requires Go 1.20, and Go 1.20 is probably, what, like five months away for us? So I guess we're going to have to be pinned to the old version. I think, Pablo, you suggested maybe we want to bump only particular components if we had to, but I don't know what the right thing to do here is, I guess, depending...
C
The other thing, by the way, Alex: there is an ongoing discussion with Yuri to split out the Jaeger part that we need, because we depend on the whole of Jaeger just because we need the data model. So maybe that can be prioritized, and hence we can keep that at Go 1.19 for us.
H
Sorry, can you come back to this one here? What is pending on the Jaeger side?
C
So, remember we discussed that we depend on the whole Jaeger module because there is no split of just the model part that we need.
H
So I think the protos are part of a different repository; they are part of jaeger-idl. But Benedikt is here, and he also works on the Jaeger side. I haven't been following Jaeger for a long time.
C
So we need the model directory from the Jaeger repository. If somebody gives us the model directory as a standalone module, that would help us tremendously, because that's where they generate the files from the protos, and they have some handwritten files for the IDs. They have some transformation between Thrift and protos that we are using. So it's not only the protos that we are using; we are also using some of the helpers that are in that model directory.
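(To illustrate the dependency problem being discussed: today, depending on the Jaeger model means requiring the entire jaeger module, which drags in its Go 1.20 requirement. If the model directory became a standalone module, the require line could narrow. The module path and versions below are hypothetical:)

```
// Hypothetical go.mod fragment, assuming Jaeger's model directory
// were split out as its own Go module (no such module is confirmed here).
module github.com/example/jaeger-translator // illustrative module path

go 1.19

require (
	// Today: the whole Jaeger module is required just for the data model.
	// github.com/jaegertracing/jaeger v1.43.0
	// With a split, only the model module would be required:
	github.com/jaegertracing/jaeger/model v0.0.0 // hypothetical
)
```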
H
Yeah, so it doesn't have to be moved to a different repository; it just has to be a new module, like a new Go module. So just run go mod init there and start a new module. But, you know, again, it's been more than a year since I last looked at the Jaeger code base, so I have no idea how it is right now.
H
They've changed a couple of things to be more in line with the Collector itself, so perhaps it is a problem that has already been solved, but I don't know; it might be a really tall order.
E
I wonder whether the Jaeger maintainers are going to be okay with staying on Go 1.19 for that, because it doesn't seem like they would want that for the whole project. So, I don't know, should we ask first? Should we make sure that this is okay with them? I mean, well, maybe we want to do it regardless of the Go issue, the conversion issue.
C
But I want to do that regardless, because we bring in a lot of dependencies from them, and we also have a circular dependency between us and them because of that. Okay, but still, in the short term we can ask for the Go version to be downgraded if that's a quick fix; but I would like to get the model split, so that we depend only on that.
F
While we're talking about dependencies: I think we're also stuck on upgrading the Prometheus dependency. This is part of a bigger debate about what's happening with Prometheus and OpenMetrics, but I don't know what the plan moving forward is for us. I don't know; Anthony, do you have any thoughts on this?
J
Yes, Fabian from Prometheus reached out to me yesterday to ask about how we can get back to being aligned on this. I suggested that he start with the OTel Prometheus working group meetings, which are before this meeting but every other week, so next week would be the next one. I think he's unavailable at that time, but we'll try to start some conversation in the Prometheus working group Slack channel as well. As far as I could tell, it seems like OpenMetrics might be dead and Prometheus is back to being the de facto standard that we may need to integrate with, but we'll work that out as we start having conversations with them.
J
I mean, I think we can disable the tests that are now failing as part of the Prometheus upgrade, if we want to continue to move forward and expect that exemplars on all the metric types are the way we're going to go. It's basically a sanity check that the Prometheus parser has stayed compliant with OpenMetrics and with our use of it; but if we expect that to be a divergence, then those tests are not adding any value.
H
So I understand there's an issue, and there is a conversation scheduled to happen for that, perhaps as part of the Prometheus working group, or messages being exchanged between some people here, people at Grafana who are working on OpenMetrics, and perhaps you, Anthony.
J
Yeah, Fabian reached out to me yesterday and I directed him towards the Prometheus working group, but I understand he has a conflict with the meeting next week. But I've asked him to start the discussion in Slack; there's a Prometheus working group Slack channel there, yeah.
H
So if it is not that critical... I mean, I understand it's failing, so we should probably just disable the tests for now, but not remove them completely until that conversation has happened. Because, when I saw your comment on the leads channel, I raised this internally and we had some discussions about it.
H
So there's no consensus... as far as I understand, there is no consensus that OpenMetrics has been violated there, and I'm trying to get people from multiple parts of the company, and from different parts of different projects, to, you know, get a consistent answer, or find a single answer.
H
But I feel like this also has to happen with this group, like the Prometheus working group, so I wouldn't make a decision, basically, without that consensus being reached as part of that.
J
I would say, to the extent that those compliance tests are, you know, an expression of what the spec expects, it's now failing: it's been violated. So if the expectation was that this change did not violate the OpenMetrics spec, then those compliance tests should have been updated first, as part of this change, to make that clear.
H
Are those tests ours, or who wrote those tests? Or were they part of the OpenMetrics project?
J
Yeah, we have imported a set of exposition examples and expectations from an OpenMetrics conformance test suite.