From YouTube: 2022-11-15 meeting
Description: cncf-opentelemetry@cncf.io's Personal Meeting Room
A: Hey everybody, so it's three minutes after the initial hour. Okay, so I guess we can start. Let's hope that more people jump in. Thank you for joining. So let's go over the agenda. The first item — Tigran is still not here; hopefully he shows up soon — is the request for comments regarding the protobuf generation and its exposure or non-exposure to, you know, final customers or users.
B: I was reading the comments and the story in the PR — give me one minute.
B: So I think the discussion derives from two different points in the PR. I don't think we are discussing using the protobuf as the data that we pass between the SDK and exporter, because we already made the decision that that's not going to be protobuf. And my argument is: because we already define our own custom format that is not protobuf, why do we have to expose the protobuf implementation, which comes with a lot of restrictions? And I don't buy arguments recommending that we shouldn't add new service methods in the gRPC service, because that's the very reasonable way you extend gRPC endpoints — by adding a new method, not by adding a new endpoint. So I want to hear others, but...
C: I think the downside here relating to the API seems incorrect to me. I'm not sure where a use of protobuf structs would appear in the API at all. I don't think this would impact the API or require any APIs to use a protobuf library.
C: In any event, I think this is something we don't need to specify. We can say we don't recommend it, but I don't think we need to prohibit it.
E: Yeah, I feel there's a separation of responsibility here. One is the spec; the other is the implementation. If we look at OpenTelemetry Go, I think the spec can focus on saying what types of changes are considered non-breaking changes. But if the Go maintainers decided they want to take an extra stance, with the consequence that even a compatible spec change might translate to a breaking change in the Go API or the SDK package, and they're willing to do a major version bump every minor spec release — it's their choice.
B: You know that's not going to happen, because even if you specify anything here, unless you force people to not expose this, when that moment comes we will have so much pressure that we will not be allowed to make that change — the one that would break somebody like Java, or whoever made that decision.
E: ...such pressure — and I feel that we should have a clear boundary. Even if that pressure is like semantic conventions, where people are saying that a broken semantic convention has been checked in and existing for three years: if we decided it's bad, we should still go and break it, not say that because someone just implemented it, it's too big to fail. I think once you have that pressure, the spec wouldn't have its neutral position anymore. It will be kidnapped by the implementation.
B: So the Go protobuf generates interfaces for every service, and implements that interface for a client and for a server — a server stub and a client stub. But that is an interface, so it will add a new function to an interface, and hence you'll break your semver conventions, correct?
C: The Go community has already said that we reserve the right to add methods to interfaces and not call that a breaking change, because the spec has insisted that it needs to be able to add methods to interfaces at any point and not have that be a breaking change. So I think we're already in that boat.
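For context, the forward-compatibility trick this exchange circles around — the one grpc-go itself uses with its `Unimplemented*Server` types — works by embedding a default struct, so the interface can later gain a method without breaking existing implementations. A minimal sketch; the names here (`Exporter`, `UnimplementedExporter`) are hypothetical, not from any OpenTelemetry API:

```go
package main

import "fmt"

// Exporter is an interface the spec may grow over time. Adding a method to it
// would normally break every type that implements it.
type Exporter interface {
	ExportSpans(spans []string) error
}

// UnimplementedExporter is the forward-compatibility shim: implementations
// embed it, so if the interface later gains a method with a default here,
// existing implementations keep compiling.
type UnimplementedExporter struct{}

func (UnimplementedExporter) ExportSpans([]string) error {
	return fmt.Errorf("not implemented")
}

// myExporter embeds the shim and overrides only what it cares about.
type myExporter struct {
	UnimplementedExporter
}

func (myExporter) ExportSpans(spans []string) error {
	fmt.Printf("exported %d spans\n", len(spans))
	return nil
}

func main() {
	var e Exporter = myExporter{}
	_ = e.ExportSpans([]string{"a", "b"})
}
```

If the interface later gains, say, `Shutdown() error` with a default on `UnimplementedExporter`, `myExporter` still satisfies `Exporter` unchanged — which is why the Go community can call such additions non-breaking.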
B: And again, I'm not arguing with you, Anthony, that you made any mistake or anything. I think Go made their choices, Java made their choices, and it's right to do it that way. I'm just saying that I want to have the flexibility in the future, as Riley pointed out, to be able to fix a mistake if we find one — and I think having restrictive rules will prohibit us from doing that.
F: I do think there's an important callout here, and this is one thing I hate about Google protocol buffer libraries — am I allowed to rant about them? So effectively, they're designed for a world where you recompile your code; they're not designed for a compatibility world, and that's actually one of the problems I have. If you look at what Java did with them, it's interesting, because they do the same thing that pdata did.
F: They actually wrap their own generation with their own data classes, where they have a clear binary-compatible interface that they expose for data, and you have a completely different way of constructing them. The only complaint I have when we do that — like, for example, with pdata — is that oftentimes we'll make that binary-compatible interface for reading the data, but we won't make it easy to actually create the data in a compatible way. And so then users who need to write tests are left screwed over.
F: But I think this is actually an endemic problem with protocol buffers, because the generated libraries do not produce code that you can just keep producing in a compatible way with how they work out of the box. That's just a fundamental limitation of them. So I think that's a technology-choice problem. Unfortunately, I don't know if it's on this community to fix it for Google protocol buffers, but I do think we have to deal with it. Yes.
B: And Josh — by the way, we already recommend, between the SDK and the exporters, to not use protobuf. Because if we were trusting protobufs, we would have said: why not produce protobuf as the result of the SDK and pass that to the exporter? But we don't do that. We do what you said — something like pdata. It's not exactly like pdata; it's actually way less complex than pdata. But indeed.
B: We do have that in the specification, where we prohibit the use of protobuf there, and we say everyone should define their own pdata-like data structure, or whatever we call them — I think we call it "data structure"; okay, anyway, bad words — but yeah. So I think these protocol buffers are only important in the implementation of OTLP exporters.
B: So they shouldn't be used anywhere else. Since we have those pdata-like POJO objects, why do we need to expose this protobuf-generated code? The only place where we need it is in the OTLP exporters, when we convert from this internal format to the protobuf to generate the protobuf wire format — and that's it.
E: I understand your point. I feel the SIG owners have that right already.
B: So if you say that no words means no contract, I'm fine. But I believe, based on my discussions with Josh Suereth and others who maintain languages like Scala, and other communities, that not having a contract actually means users can define their own contract.
F: So I want to caveat some of the things I've said. One — we don't have to get into the details of Scala, but there was a large feature that was added called macros. It was added as experimental, and we told people not to rely on it, that it could break at any moment.
F: The problem was, this feature met such an important need in the community — there was such a desperate need to solve the type of problem that this feature solved — that everyone adopted it, and then it was impossible to make changes to it because it was so well adopted. To some extent it's a good problem to have: it means you've got a feature that nailed a use case for your users.
F: It does suck as a developer, because you have to get really, really clever with fixes and you want to avoid breakages. I'm actually a little more concerned hearing that minor changes are breaking Go compatibility. I think that's something worth discussing outside of this, and I do think we need to be careful — but I also think there's this notion of being too careful. If we never make that feature that solves that giant need, what happens?
F: Users go somewhere else to solve the problem, and that somewhere else could be even scarier and worse for us in the long run. So there's a balance here that we have to strike: we have to be careful, we have to have these guaranteed things, but also — if there's a need that's not being met... and I would argue the need is not on us.
F: It's actually on the protocol buffer libraries not creating compatible code here. It'd be so much nicer if they were version-by-version compatible in the generated code, but anyway, that's a different story. Because that need is not being met, we have to do something about it, right? And that's what I think this is going after. I feel like we should time-box this discussion a little bit, maybe take some of it offline — but there's a bigger theme around us breaking Go compatibility that I want to raise later.
G: Thanks. I'd love to help on that front as well, Anthony. I do think we want to, above all, make sure that we aren't breaking existing instrumentation, because I think that's going to create trouble down the line. So let's not be doing it by accident because the spec was saying something dumb.
H: I think I'm familiar with the issue. It's an unnecessary API that the Go SDK is supporting — it's an option — and I think Anthony's probably right that it's not necessary to specify anything about that specific one. But I also think it should be removed. It's not helping anybody that much, and we probably don't want to be using the protocol buffer in the long term as the encoder for our SDK. So Go should remove the API, and we should stop breaking the protobuf.
B: By the way, Josh MacDonald, we already agreed that we will stop making any breaking wire-compatibility changes. So that's already agreed. It's all about things that are not wire-required: do we care about them or not?
H: You know, if users really need that, they should go to the Collector, where we have pdata and we're maintaining exporters. That's what people are getting: the ability to export data that's almost in the protocol format you need, and you can do that with a Collector. Then the Go code can remove that, and we can maintain the pdata interface as our one expensive wrapper.
C: Can we stop here for a second? I did not make any accusations; I made a statement. And I think we can quite clearly go to the draft PR that Tigran has had up for a long time: you're the only one who has raised any concerns about it. Everybody else who has commented on it has said, "I support this and we should commit it now."
B: Sure, I'm moving forward, because I don't want to turn this into a debate anyway. The decision here probably is to not say anything, because that's what I'm hearing from everyone: we don't want to over-specify, and we don't want to under-specify. So let's just do nothing and deal with this problem later, when we probably won't have too many cases. But that's it.
A: Well, I also suggest we continue offline. We also need Tigran's input on this one. So — any volunteers to follow up? Let's do that offline, if that makes sense.
A: Okay, thank you so much for that. Okay, moving forward, also for the sake of time: the semantic convention stability working group — a few topics there. Josh, do you want to take over?
F: Yeah, so here are some crossover topics that I think we wanted to bring to this group. The first thing is: we want to start collecting what folks consider blockers for declaring the HTTP semantic conventions stable. So, if there are things that you're worried about — right now I have nothing in there.
F: If there are things you're worried about, things that you want us to pay attention to and look at before we declare these stable — we're basically asking the spec committee, and we're going to go ask the HTTP folks as well. I'd like to fill this out with the things that we consider blockers.
F: So we get a notion of what we need to finish before we consider this done — before we consider the HTTP semantic conventions stable. Again, there's nothing in there today, because I'm just asking people what they'd consider a blocker before they would say, "Yes, I consider this stable." So I wanted a blank slate when I came in. Cool, that's part one. That just leaves another question.
F: Oh sorry, yeah — in terms of how to give feedback: there's the semantic convention stability working group, or you can just ping me directly on Slack and say, "Hey, I consider this issue blocking," and I'll add it to this list. If you have access to the board — I think everyone does — you might be able to just move it there yourself; there's a little button you click to add things. If not, just ping me and I'll add it to the list.
F: But basically, let me know what you consider a blocker. Ping me in the issue and say, "Hey Josh Suereth, this is a blocker for HTTP semantic convention stability," and I'll get the list filled out. So you can ping me in Slack or ping me on GitHub — either one.
F: All right, so that's just a call for bugs — maybe that doesn't happen often. All right, next is where we have a write-up about allowing histogram bucket boundaries to change from release to release, and why we consider this a non-breaking change. There's a little bit of a write-up here; again, we're looking for feedback on this. We still want to reach out to the Prometheus community about this — from what we understand, this is considered a non-breaking change in Prometheus right now — and we describe why we consider it non-breaking.
F: The lesson here is: this would allow the specification to change the default histogram bucket boundaries at any release, for free — that's fine. However, for any given individual SDK that's running, those buckets should remain stable. So, in process, you wouldn't be changing your bucket boundaries — for the explicit-bucket histogram, that is; in-process boundary changes are an exponential histogram thing. Okay, so that's the proposal on the table.
F: We do want to walk this through the Prometheus community to make sure they're on board as well, but effectively, semantic conventions would not include histogram bucket boundary enforcement. You could possibly put recommendations in there, but we won't enforce any bucket boundaries; we'll allow users to change them as they need.
B: Have you identified how — so, it is very easy to understand why, for a time series, once you restart it, you can change the buckets. But the problem is: how do we deal with the situation where we have, say, a Java app and a Python app, they have different buckets, and we want to see the results together, or merge the results together?
F: That's kind of the same problem as merging between two different instances. The difference is whether your query spans the time boundary where the buckets change, or whether you're joining those two series together — I think it's still the same fundamental problem.
F: I guess I still see that as an issue today, no matter what, across things. So we're basically deferring to backends to handle this problem.
F: Basically, the idea is that changing bucket boundaries changes where the error is when calculating your rates, but it shouldn't break your alerts, effectively — unless your bucket boundaries are horrible. The reason we want to allow changing — if you scroll down in this document; I don't know who's presenting — is basically that side note right up above there.
F: Why do users want to change bucket boundaries? It's if we got it wrong. So let's say our default assumes latency is around one second. If users are monitoring something with a histogram where the latency is around, say, a minute for whatever process they're monitoring, now our bucket boundaries are bad for getting accurate representations of what the latencies are. So they're going to want to shift them, and we want to make sure that can happen going forward.
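To make the "bad boundaries" point concrete, here is a rough sketch of quantile estimation from an explicit-bucket histogram, using linear interpolation inside the matching bucket (in the spirit of Prometheus's `histogram_quantile`). The function and bucket values are illustrative, not from any SDK. When every observation lands in the overflow bucket, the estimate can only clamp to the last boundary:

```go
package main

import "fmt"

// approxQuantile estimates quantile q (0..1) from an explicit-bucket
// histogram. bounds holds the ascending upper boundaries; counts has one
// extra trailing entry for the overflow (+Inf) bucket. It linearly
// interpolates inside the bucket containing the target rank.
func approxQuantile(q float64, bounds []float64, counts []uint64) float64 {
	var total uint64
	for _, c := range counts {
		total += c
	}
	rank := q * float64(total)
	var cum uint64
	for i, c := range counts {
		prev := float64(cum)
		cum += c
		if c > 0 && float64(cum) >= rank {
			lower := 0.0
			if i > 0 {
				lower = bounds[i-1]
			}
			upper := lower // overflow bucket: can only clamp to the last boundary
			if i < len(bounds) {
				upper = bounds[i]
			}
			return lower + (upper-lower)*(rank-prev)/float64(c)
		}
	}
	return 0
}

func main() {
	bounds := []float64{0.5, 1, 5} // seconds
	// Well-matched data: all observations between 0.5s and 1s.
	fmt.Println(approxQuantile(0.5, bounds, []uint64{0, 100, 0, 0})) // 0.75
	// Mismatched data (~60s latencies): every quantile clamps to 5.
	fmt.Println(approxQuantile(0.99, bounds, []uint64{0, 0, 0, 100})) // 5
}
```

With boundaries topping out at 5s and one-minute latencies, every percentile comes back as 5 — exactly the failure mode that motivates letting default boundaries change between releases.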
F: So that's why we explicitly don't want to enforce this as part of semantic conventions, or as any kind of stability requirement around histograms. We think this is in line with what Prometheus already does, but we want to verify with the Prometheus community first. There's a bug open around defining what metric stability means; if you'd like, I can open a sub-bug specific to histograms, where we can have more offline discussion. But I just wanted to raise this to everyone and collect feedback. Please also comment in the doc.
E: I'm supportive of this effort. I have one question about the smarter kind of histogram — for example, the exponential-bucket histogram. Currently we have a special bucket that catches things that are too small, near zero, and at a certain point we might consider adding another boundary just to capture things that are crazy large — for example, saying anything bigger than 10 to the power of 300 should be considered infinite. I think we should mostly consider those non-breaking changes and leave that space open; otherwise I can see a possible range issue down the line.
F: Yeah — because exponential histogram buckets aren't considered stable yet, we haven't added them to our discussion of stable telemetry just yet. But I would make the argument that in exponential histograms, every bucket boundary can change at every report interval. Yeah, that's why.
F: That actually would be a non-breaking change in that sense. And I've tried to confirm several times with Goutham on the Prometheus side that this is actually how it'll be implemented at query time and that this will be supported — because I don't think Prometheus was planning to change bucket boundaries on their exponential histogram, but they are okay if ours does. At least according to that discussion. Cool.
F: Cool, all right. So the last thing to raise is basically — and this is a call for "do we think there's interest": when we were triaging bugs around semantic conventions, there's one about `up`, and how we have an `up` metric. We've talked about this so many times. Here's what I'd like to propose.
F: If you scroll down a bit, I put together my own thoughts. If you scroll down even further, at the very bottom, I think there are some diagrams of what this looks like. The top one is the use case we should probably support: people who are using both Prometheus and OpenTelemetry together in a coordinated system.
F: So, if you scroll down, what I'm proposing is that we define a derived uptime metric based on pull- and push-based things. But I'd like to get a group together — like an expert group — that says, "Here is our thinking on how to solve this `up` problem for people who are using both pull- and push-based metrics." What I'm afraid of is: do we have the time and attention to do this right now from the people who need to be involved in that discussion?
F: I'd like that discussion to happen. It should take this bug, run with it, and come back with a proposal. I'm asking for folks who are experts in metrics and uptime, if they have the time and availability to focus on this issue.
E: I suggest we table this until we can ship the initial stable version of any metrics, like HTTP. If we cannot get one single thing done, let's not try to handle the other one. And for the `up` metric — I don't even know if this Thomas guy who opened the issue is talking about the same Prometheus `up` metric or something else; you can imagine different definitions. There are people for whom "up" means the metric is still flowing: if you see the data, you know the data keeps coming.
H: In part, it's about what the Prometheus definition of `up` means in an OpenTelemetry world, which is a really valid question. But I kind of agree that we should be able to finish something before we begin specifying a bunch of semantic conventions on metrics that we haven't made stable yet — I'd like us to see a 1.0. I've also been waiting to work on synchronous gauge support; I've got a draft and I just don't want to publish it yet, and I think it's at the same level of importance as handling the `up` question.
H: So, yeah, I'd like to wait, but I do have time. Every time I've tried to discuss this in the past, it felt like we weren't ready. I'm glad to hear other people asking for it, though.
F: Okay, so in terms of what we should do: I can comment on the bug that this is something we know needs to be addressed, but we're going to defer it because we don't have enough bandwidth to handle this problem right now. We'll defer until certain triggers, when we'll pick this back up and work on it — and that would be, say, the HTTP semantic conventions being marked stable.
B: Do we want to enter the world of monitoring liveness of services with this? I think I would start this discussion by specifying exactly the goals of this `up` metric. Because in the Prometheus world, since it's pull-based, the `up` metric can be used as a liveness signal, and I don't know if we want to do the same thing in a push-based system. So unless we have a clear understanding of the goals of this `up` metric, I would refrain from proposing anything.
F: Yeah, my goal with this is: all I want is for people in Prometheus who come to OpenTelemetry — if they use Prometheus pull and OpenTelemetry, everything's gravy. But if they start using our push-based protocol to get into Prometheus... By the way, Prometheus decided they're going to add OTLP ingestion to Prometheus — I think maybe you saw this from them.
F: That's one of the things on their docket, and I think it's also going to exacerbate this problem. So that's why I think just having a notion of what `up` means in Prometheus for push-based OpenTelemetry metrics is the thing we'd like to resolve — because users expect it, they look for it, they want to know what the hell it means. And we could say, "You know what, it's not a good thing; here's an alternative" — I don't care.
B: Okay, I don't know what Prometheus is going to do. Is OTLP going to be used only as the replacement for PRW (Prometheus remote write)?
A: Okay, perfect, thank you so much for that. Okay — that's all from your side, Josh? In that case, let's move to the next item. Thank you so much. Next: semantic convention versus implementation — the chicken-and-egg problem.
I: So there was this chat on Slack the other day — it's referenced here in the link. I raised the question — sorry, the pull requests: one in the specification and one in the opentelemetry-contrib repository, for the host metrics receiver, to add some metrics. And there was discussion about whether we should actually have the semantic conventions established first and then continue the implementation, or the other way around — if I understood correctly, Josh commented on this, yes.
I: So yeah, I wanted to put this up for discussion. My interest is for those metrics to be added to the host metrics receiver, which I think is declared beta. If it's possible without adding them to semantic conventions, I'm fine with that; but if we need them in semantic conventions first, that's probably harder, right?
J: We have several main instrumentations that we maintain, obviously, and people ask us all the time to add telemetry — to add attributes to them — which are not specified, and in general I'm always hesitant to do it. But when there's no specification, it's really unclear what to do, and I wish there was some area where I could say: if you prefix it with, like, `x.` or `experimental.` or something along those lines, then we could add it.
J: I think the comment from Josh here illustrates the problem well, but this is something I've been seeing come up a lot.
L: In the Java instrumentation group, our approach has been: we will take things that aren't in the spec, but we hide them behind a feature flag that's called experimental-dot-something.
J: That seems like a reasonable solution. Doesn't that mean, though, that when you look at the outputted telemetry, there's no way to distinguish which attributes are the experimental ones and which are not? — Correct, but if you've done your homework and read the documentation, that shouldn't be a problem. But I suspect it probably would be: somebody making a dashboard on the backend who sees some data there is probably not going to go check whether the data they're depending on is stable.
F: I love your idea of adding `x.` — mostly because I was a child of the 90s as well — but I think the question would be: if you add that `x.` and then something becomes stable, suddenly that breaks everyone who was using the `x.`, right?
J: It depends how you specify it, because you could specify that receivers should look for attributes under either the attribute name or `x.` plus the attribute name — and if they find it under `x.` plus the attribute name, they should treat it as potentially invalid, because the format may have changed between when it was experimental and when it was stabilized.
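A minimal sketch of the lookup J describes — a consumer prefers the stable key but accepts the `x.`-prefixed experimental key during a transition window, flagging it as potentially invalid. The function and attribute names here are made up for illustration:

```go
package main

import "fmt"

// lookupAttr returns an attribute value, preferring the stable name but
// falling back to the "x."-prefixed experimental name. The `experimental`
// result tells the consumer to treat the value as potentially invalid,
// since its format may have changed between experiment and stabilization.
func lookupAttr(attrs map[string]string, name string) (val string, experimental bool, ok bool) {
	if v, found := attrs[name]; found {
		return v, false, true
	}
	if v, found := attrs["x."+name]; found {
		return v, true, true
	}
	return "", false, false
}

func main() {
	attrs := map[string]string{"x.http.flavor": "1.1"}
	if v, exp, ok := lookupAttr(attrs, "http.flavor"); ok {
		fmt.Printf("value=%s experimental=%v\n", v, exp)
	}
}
```

A stabilization policy could then keep emitting both keys for a deprecation window, so dashboards built against the `x.` key keep working while users migrate.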
F: Yeah — so I do like the `x.` thing in the sense that it gives users a notion that this could break or change when they use it: when they write queries, when they write alerts, when they interact with it. The question I would have is: can we attach that to a policy where, when something stabilizes, we leave the `x.` attribute in place, in addition to the new attribute, for a while?
E: Hey Josh, that reminds me of OpenGL and CSS — both allow vendors to add extensions. I remember, back in the old days, Google Chrome did something about transparency in CSS under the Google vendor prefix, and later, when it became standard, you'd see three different prefixes — Microsoft IE, Google Chrome, and so on — and many browsers supported all of them. It's actually in the CSS test suite, but there's a transition period.
E: So, if I remember correctly, it says: in this major version it is allowed, but in the next major version only the one without the vendor prefix is allowed — so people should use that period of time to move away. OpenGL, I believe, has something similar for the shader language. I know NVIDIA has been adding extensions and proving that they work well, then AMD adopts the same thing, and then they move it upstream.
J: Yeah, I agree with Josh that the value of the experimental attribute is lost if you drop it immediately; you have to have some transition period.
F: Yeah, I also have a second point — maybe this is out of scope for us — but if I'm outside of OpenTelemetry and I write instrumentation, I can just make up attributes and do whatever the hell I want, right? And do we want to make that friendly for people to adopt and use? So one of the goals, I thought, was that eventually some open-
F: -source instrumentation goes into the underlying systems. So, for example, I think Apache Airflow is looking at adopting OTel; they're going to define their own set of instrumentation, their own set of attributes, their own set of stuff that they own and keep stable. Do we need a semantic convention for that kind of thing, or is that something they maintain on their own?
F: There's this idea we had of instrumentation stabilizing externally, and then we take the best practices and put them in semconv — which we absolutely cannot do if we cannot experiment. So I think we've had enough discussion here. Maybe — Dan, I don't know if you want to take this offline — maybe we come back with a proposal around:
F: what experimentation with semconv looks like, how you make experimental features with semconv, how you get that out in OTel — so this becomes the way that OTel experiments with defining semantic conventions and gets a notion of whether they're actually ready for stabilization before they're turned into official ones.
F: But maybe we can work together on a proposal for what that process looks like and come back to this group with that proposal. Yeah.
J: I'm happy to work on that. I think it's probably an OTEP — I think it's going to be impactful enough. Yeah.
K: I just had a look — as part of the RUM working group, we've got people like Splunk who are already creating attributes with prefixes like `splunk.` on the front. So this is already occurring today, just with vendor-specific prefixes.
F: Yeah — the key here is to unlock our community of instrumentation to make progress, and that's one reason why it's a big deal with semantic conventions. Thank you again for raising this. Okay, so I think Daniel and I will put together an OTEP and get that out the door, to try to address the fundamental question of:
F: how we experiment with things. I still think there's an issue with the pull request that you have right now that's probably worth discussing, either on the issue or in general — and that's around the fact that the pull request adds another way of doing the same metric. So that's another fundamental question that needs to get resolved that we haven't addressed. I kind of want to walk into that if we have time; if we don't, we can defer it until next week.
F: Yeah, so — one of the reasons I think this specific semantic convention is blocked is that Sumo Logic is trying to take Telegraf metrics, pull them into OpenTelemetry, and keep things similar — and apparently Telegraf used percent utilization, while our OTel semantic conventions right now use raw usage, like raw megabytes. So you get a total and you get the current usage, and you're supposed to divide those to get a percentage from the two metrics.
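The "divide the two metrics" step is trivial but worth pinning down, since it is the bridge between the raw-usage conventions and Telegraf-style percent metrics. Metric names aside, the derivation is just the following (the helper name is made up):

```go
package main

import "fmt"

// utilization derives the ratio a percent-style metric reports from the two
// raw values the conventions expose: the current usage and a total/limit.
// It is undefined without a positive limit, which is one source of the
// friction discussed above (not every resource has a meaningful limit).
func utilization(usage, limit float64) (float64, bool) {
	if limit <= 0 {
		return 0, false
	}
	return usage / limit, true
}

func main() {
	// e.g. 512 MiB used of 2048 MiB total.
	if u, ok := utilization(512, 2048); ok {
		fmt.Printf("%.0f%%\n", u*100) // 25%
	}
}
```

Defining both a raw-usage metric and a pre-divided utilization metric means two ways of reporting the same fact, which is the duplication the conventions recommend against.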
F: If you read some of the semantic convention recommendations, we kind of recommend against having two different ways of solving the same problem, and I think that's why these utilization metrics are having some friction. What's weird, though — and you can correct me if I'm wrong, Andre — is that I think we already have some utilization metrics.
E: And they have big issues. We tried to implement them in .NET and had no idea how to do it precisely; we had a lot of debate and then realized it's a bad idea — maybe they should be removed and changed to something very precise. So you can already see the issue, and the reason I think we blocked the PR is that if you try to mimic this behavior and introduce the same thing, it'll be even messier. I feel maybe we should focus on getting the first HTTP metrics stable.
I: Okay, I think that ends this discussion. There's another point from me. This is again about a specific metric, and the comment from Riley about it: I proposed a process signals-pending metric, which is about the number of pending signals for a specific process, and we had a discussion about whether this should maybe be namespaced under something POSIX. Not sure if it makes sense to discuss this here now.
A
E
I think we can. So the issue is: this signals-pending metric currently is defined as a POSIX-specific thing; it won't apply to any non-POSIX system. My question is: if this is very clearly POSIX-only, then should POSIX be in the prefix? Otherwise, other systems would find it very hard to figure out how to implement it. So I feel either we put POSIX as a namespace, or we try to make it very generic so it covers other systems, but...
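For context on why this metric resists a generic definition: on Linux (one POSIX implementation) the pending-signal count can only be scraped from /proc, which has no portable equivalent. A hedged sketch, Linux-only by construction:

```python
import os

def pending_signal_count(pid=None):
    """Count process-wide pending signals by parsing the ShdPnd bitmask
    in /proc/<pid>/status. Linux-specific: there is no POSIX-portable
    API for this, which is the portability concern raised above."""
    pid = os.getpid() if pid is None else pid
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("ShdPnd:"):
                mask = int(line.split()[1], 16)  # hex bitmask of signals
                return bin(mask).count("1")     # one bit per pending signal
    raise RuntimeError("ShdPnd not found; /proc status format unexpected")
```

A Windows implementation would have to emulate this, if it is possible at all, which is the crux of the namespacing question.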
B
I
Yes, so my argument was that JVM is something that's visible: it's a component, whereas POSIX is a feature of a system. It's not necessarily obvious to users who are just acquainted with Linux that this is actually POSIX-specific, right, with Linux being like the default for servers. But, just for some context: if we look at the recently added metrics in the host metrics receiver, which was sort of the process scraper, I think this would mean open file... I think there are some Linux-specific metrics here, am I right? Open file descriptors. So which one was that? It doesn't say here. I think this was actually added by me. Well...
F
Here's a question: if page faults and the number of... say, Windows kind of has signals, sort of, but they're kind of emulated. If that's a metric that you can get and have access to, and we think it's useful to users, we can add it, and then it's no longer POSIX-specific. But I think we should ask the question: is this going to include Windows in the initial implementation? That's not important for the semantic conventions. The question is: will it ever include Windows? And so on.
F
I was the one who asked: is this POSIX-specific, or are you planning to include Windows signals, because they're kind of emulated? If you ever plan to include those, then I'd agree with Riley: you can leave it as just, you know, signals pending or whatever, and that's fine. Page faults, similarly: do we think that Windows page faults make sense as a metric, and could we ever get Windows page faults?
F
That's the question I would have here, on whether or not you have to put something like linux.process.paging.faults. It's... well, anyway. It also could be that it's so impractical to grab that data, and so useless, that we'll never do it, but I think that's also an important question here: how general the metric is.
E
Yes, I already see the problem with paging faults: Windows has a similar concept, and I'm worried about the description saying something about the implementation. I think the metric description is considered an important part of the metric stream, and it should describe what the semantics of the metric are, instead of whether it's supported on a certain system. Imagine later you want to support, maybe, Unix.
I
E
E
I
You may be right. Ultimately this should be generated from the semantic conventions, I guess, so that wouldn't fit here anyway. Yeah, but okay, I see your point, yeah. This probably shouldn't all be part of the description; that part belongs in additional documentation, yeah. Currently the problem is that this whole file is being auto-generated, but I think that's the difference.
B
I
Okay. There's also another effort to generate more documentation from the semantic conventions, or, yeah, from the semantic conventions specification for metrics. I'm not sure if that fits here; I'm not sure if it would cover the implementation. I think it only covers the specification documentation, right? Okay, okay.
D
B
I
B
B
I think in the semantic conventions we should have some of these metrics say that they are optional in some of the languages, or that they are not applicable for some of the languages, or things like that, because I think having Microsoft folks come and try to implement this will give them a hard time.
I
F
The way I've heard it described, and I can't take credit for this, is a T-shaped API, where the semantic conventions describe the bare minimum that everyone can expect to exist, but there might be more that gives you more value. That's kind of how a few of us have been thinking about this, and that's why I think you're also running into friction trying to add things that are kind of optional, like the either-or notion. Think of the semantic conventions as if you only had the semantic conventions.
A
Sorry, we're out of time, sadly. Let's continue...
B
Thank you so much. Let's go to the next topic, because it's important for next week. Do we have a...
B
B
A
B
D
G
M
Yeah, sure. So, yeah, welcome, everybody. In case you haven't looked at the agenda: someone added a Google Doc.
D
M
Yeah, so I can start while others are adding stuff to the agenda. The main thing that I wanted to talk about is to verify that there are no major things blocking us from merging this big pull request of the VPN device instrumentation. I think I answered all the comments that people left, and I know...
M
This is probably a huge and complicated pull request, and we could probably dive into it better, but I think a better way to do it is maybe, after we merge it to the repo, we can work on top of that instead of continuing to discuss things on this single pull request. I really wanted to do a deep dive on how things work, so you guys understand the code, especially the SQL part, because I think the Go code is pretty straightforward.
M
Yeah, so I just wanted to hear from you whether you see anything that blocks us from merging this pull request and working on top of it.
P
Yeah, I think I agree. We can merge this PR and come up with a roadmap of what we want to do, so that, as a community, everyone understands and increases their knowledge of the code base: small issues to fix, or testing, and whatever we can come up with, so that everyone can increase their knowledge. I think that way it's better, rather than trying to fix everything in this PR. I totally agree with that.
P
K
Just... yeah, because we can split up the work, too, once it's merged, instead of having this one bottleneck.
P
Yep, yep, and that way we can actually start testing it out thoroughly later.
O
We took this approach in the instrumentation too: we had a probably less production-ready thing, which we merged initially, and then we just kept working on top of it, and it was pretty efficient. It still took a long time to have an alpha release, but at least we had something which you could iterate on.
M
Yeah, and as long as the README file says this is some beta, something which is under construction, I think we're good.
O
Yeah, so...
P
O
For example, in .NET we had a notice at the beginning of the README file: this is in a very early development phase, please do not use it in production. Thank you. Really, we had such a notice.
M
M
M
N
Yeah, so I wanted to ask what's going on regarding this ownership issue that we have, because, as you know, the plan is to merge this service-based instrumentation into the contrib repo, but first we have to define the ownership of components, and I wanted to ask Tyler about the status of that, but he's not in this meeting today.
N
So of course there are still bugs, but I would like to follow a similar process as the other solutions: maybe merge it as it is now, and then incrementally work on fixing bugs and improving this code.
O
Yeah, my question is: is Tyler the only one working on this? I think it's, I don't know how to call it, like a single point of failure: you know, Tyler is very occupied, and maybe it would be good to ask someone else, some of the other maintainers.
N
O
O
Pretty late. It's in the Google Calendar, if you go to the GitHub OpenTelemetry community, but I think in our time zone it's like 7 PM or 8 PM; it's pretty late.
M
O
Yeah, I just wanted to give you a heads-up, because I was complaining about the legal stuff on your PR, and I wanted to let you know what is going on. Basically, I have some experience working with the legal team, and I just wanted to make sure that these things in OpenTelemetry are such that we won't have trouble, basically. So I'm in contact with our legal team and I'm making baby steps, but as far as I know, right now we are okay from the legal perspective in whatever country.
O
But if we release something like distributed artifacts, the best way to be safe is to have a NOTICE file which contains a list of all the dependencies, because otherwise, if someone downloads a binary, you may not know anything about the source code, etc. So usually the NOTICE file sits side by side with the place where you download the artifact. That's the safest way from the legal perspective. Should I rephrase, or is it clear enough for now?
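As a sketch of what O is suggesting: the NOTICE file is just a flat list of the bundled dependencies, written out next to the artifact. The module names below are made up for illustration; a real build would collect them from the module graph rather than a hard-coded list.

```python
# Hypothetical dependency list; a real build would gather this from the
# package manager (e.g. the Go module graph or the NuGet lock file).
deps = [
    ("github.com/example/liba", "v1.2.3", "Apache-2.0"),
    ("github.com/example/libb", "v0.4.0", "MIT"),
]

# Render a NOTICE-style listing to ship side by side with the binary.
notice_lines = ["This artifact bundles the following third-party modules:", ""]
notice_lines += [f"  {name} {ver} ({lic})" for name, ver, lic in deps]
notice = "\n".join(notice_lines)
print(notice)
```

The key point from the discussion is placement: the listing travels with the downloadable artifact, so a user who only has the binary still sees what it contains.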
O
O
M
P
K
I'd also like to see a little more documentation.
D
There is some in it right now, but just a deeper...
M
K
P
M
Actually, there is currently an example in the documentation; yeah, not the OpenTelemetry demo, but a different application, my emoji work, I think, which is also a couple of applications talking with each other over gRPC.
P
O
I remembered that the instrumentation supports many versions of the libraries, but is there any automation for testing it, not just generating the code? I don't know the script source, but, for example, to execute tests on multiple versions?
P
O
M
Yeah, yeah, I think that, yeah, the version testing should be like...
O
M
O
O
So, for example, if you do it in Go, during the build process you can simply change the go.mod file, for example, to change the versions, things like that. And, yeah, I think that we could start, for example, just testing... So our testing in .NET is like that: in general, we test in continuous integration with the oldest version, and we use Dependabot as an indicator of whether it's working for the newer versions, and we want to... we want to...
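The go.mod trick O mentions can be sketched as plain text manipulation: before the CI job runs the test suite, rewrite the required version of one dependency. The module path and versions below are illustrative, not real packages.

```python
import re

def pin_version(gomod_text: str, module: str, version: str) -> str:
    """Rewrite the required version of `module` in go.mod-style content,
    so CI can run the test suite against, e.g., the oldest supported
    release of an instrumented library."""
    pattern = re.compile(rf"({re.escape(module)})\s+v[0-9A-Za-z.\-+]+")
    return pattern.sub(rf"\g<1> {version}", gomod_text)

gomod = """module example.com/demo

require github.com/example/somelib v1.4.2
"""
# Pin the dependency back to its oldest supported version for this CI leg.
print(pin_version(gomod, "github.com/example/somelib", "v1.0.0"))
```

In a real pipeline the equivalent step would be the Go toolchain's own `go mod edit -require=module@version`; the sketch just shows the shape of the rewrite.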
M
O
So for .NET we are doing it like this: we have basically a sample application, and the test basically builds it and runs it, setting up the instrumentation basically in the same way, almost like the real-case scenario. The only difference from the real thing is that, instead of using a real collector, we make something like an HTTP test server, and we capture the spans and check whether things are coming in as expected.
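The test setup O describes can be sketched in a few lines: a throwaway in-process HTTP server plays the collector, the "exporter" posts to it, and the test asserts on the captured payload. The endpoint path and span shape below are illustrative, not real OTLP.

```python
import http.server
import json
import threading
import urllib.request

captured = []  # spans received by the fake collector

class FakeCollector(http.server.BaseHTTPRequestHandler):
    def do_POST(self):
        # Record whatever the "SDK" sent, then acknowledge it.
        body = self.rfile.read(int(self.headers["Content-Length"]))
        captured.append(json.loads(body))
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):  # keep test output quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), FakeCollector)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Stand-in for "run the sample app with the exporter pointed at the test
# server": post one illustrative span to the fake collector.
span = {"name": "GET /emoji", "trace_id": "abc123"}
req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}/v1/traces",
    data=json.dumps(span).encode(),
    headers={"Content-Type": "application/json"},
)
urllib.request.urlopen(req)
server.shutdown()

# The assertion step: did the span arrive as expected?
assert captured[0]["name"] == "GET /emoji"
```

The design point is that only the collector endpoint is swapped out, so the exporter path under test is otherwise identical to the real-case scenario.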
M
M
So it's also something I can write down here. I'm not really sure; I haven't made a lot of progress with it, but it's something that was already...
M
M
O
O
SQL packages, I mean. But I think that these are the most commonly used: like HTTP, gRPC and, basically, the SQL ones. And from my experience these are the most used packages.
M
O
O
I just wanted to say that, basically, if we want to prioritize the libraries, we could... basically, you know, we have, for example, here... I don't know, is Mike from Datadog, if I remember correctly, or am I wrong?
D
O
P
M
M
P
I think this is just a start, right? I think, once everyone starts making contributions, we can keep refactoring slowly through it.