From YouTube: 2022-01-24 meeting
A
All right, it's four minutes after; let's get rolling. So welcome back, everyone, to the weekly maintainers call. There was a holiday in the U.S. last week, so while we did hold the call, attendance was quite low and it was a little short. So this is our first heavily attended meeting in the last two weeks. We will start with the SIG check-in, starting with the specification. Carlos, do you want to lead us through that?
A
All right, we'll come back to the spec as well as metrics.
B
Yeah, basically we're trying to make a specification release, you know, this month or early next month. That's all for now. We need more reviews on a pair of metric issues; please, please review those for us.
C
I can. Perfect. There are a couple of PRs open that need reviews. They are on really esoteric, kind of minor corner cases: duplicate instruments, and duplicate observations from asynchronous instruments. My impression is that we have stumbled this month because we lost Josh Suereth to a family emergency, so we're just not moving very fast right now. Unfortunately, I would have to pull up a couple of PRs to give a better, more thorough summary.
A
Yeah, I mean, having a family emergency sounds bad. I didn't know, so that's really rough. Okay, thanks for the update. All right, for logs: Tigran gave an update last week, but I can give the same update this week, because we had lower attendance due to the holiday last week. The log SIG is working towards declaring the data model, as well as the OTLP logging format, stable in the first quarter of this year.
A
There is a PR, or rather a markdown file, that is linked to in the meeting notes for that. So please review it. Tigran and a number of the people who are working on the logging SIG are just desperately looking for anyone's attention or reviews on this. So please, please, please: whether you think it's great, whether you think it could use a few tweaks, or whether you think it's the worst thing ever created in human history.
A
Please go comment on it and raise issues so that they can get your feedback, because they want to keep rolling with this, and of course, once this becomes stable or locked, it becomes very problematic to change. So please, regardless: even if you think it's great, just go say you think it's great.
A
If you think it's terrible, go say you think it's terrible and say why. They just want feedback, and the link is there. The logging SIG last month shifted back to a weekly cadence instead of bi-weekly, so please join the logging SIG if you have any interest in logs. Attendance has definitely picked up; there are a lot of people there every week. Okay, and there's also the logging library SDK spec.
A
Is there a particular place? Since these aren't PRs, I think just as issues, or by joining the logging SIG and discussing it there. Okay, yeah, either one is fine. But it's best to do those because, although I mentioned Tigran's name, Tigran is not the only person working on this, right? There are all the people from observIQ. If you just come to me, I am one person, and it probably won't get shared around enough.
E
There was one issue that Tigran requested we specifically bring up in this meeting. I'm going to post a link to it in the doc.
A
All right, perfect. So yeah, if people want to review the tracer and meter PR, that's also a great place to provide feedback to all the people on the SIG. Thanks, Jack.
A
All right, going down the list: PHP 0.0.5 was released, which was part of the monthly release cadence. Anyone from PHP want to speak to that? All right.
A
Cool. Then Java: 1.10.1 was released with a few bug fixes. Java instrumentation, similarly, will release 1.10.1 this week with a few bug fixes. And I just realized I'm not actually presenting, so I'll start doing that.
A
Okay, JavaScript: OTLP trace exporters are now 1.0, all sync and async instruments are implemented, and work on exemplars is in progress. The next step is to migrate existing exporters to the new SDK. There will also be a joint blog post with Ruby that is complete but not yet published; that is in my email inbox and I need to review it, and I think I saw a few others review it earlier this morning, so it should go out fairly soon for publishing.
G
Yeah, just a clarification: those are the metrics exporters and the metrics SDK, I meant to say. I don't want to imply that we're rewriting the whole trace SDK or anything.
G
I saw somebody just put "and views" there; I assume that's one of the other JavaScript maintainers on the call.
A
Okay, cool. Python: we've got six more metrics PRs merged in the SDK and OTLP exporter.
A
Work is ongoing, and we'll be releasing 1.9.0 this week with bug fixes. Any updates for .NET?
C
May I take that? I have produced a couple of PRs to explore what an SDK would look like with views, using Go generics. We're basically re-evaluating everything there, but in the discussion we had last week, this was looking like a promising direction. We're going to keep moving on a views implementation that will match Go's future release at this point. But right now we're ready to start finalizing the API first, and that means we can put in a new API and just bridge it to the old SDK. It might be a little bit hacky to do so, but it means we can finish the API and be done, and then let the SDK work continue. We are still holding up the metrics specification on views; we're a little closer, having gotten through this work in Go. The last thing I wanted to add: I did put two links up above in the metrics section for active PRs I've written that, if merged, would really help us finish the metrics specification. The first is essentially blocking the GA. The second is helping the Go team do what it wants, and they're related. So if you're interested in duplicate instruments, or in how to craft the callback-to-instrument relationships at runtime, that is what these are about, and I think loosening the spec to make what we want to do possible will really help. Anyway, that's my input for Go. Please take a look.
C
So generics come out in Go 1.18, in February's release; I've been using a beta release to prototype with them. I am pretty pleased with the generics feature. It will, however, remind you of Go in the first place: Go makes things that are hard simpler, but not automatic. So you have to rewrite things a few times in Go just to keep them simple. It's still that way with generics, because it's just one narrowly focused generics feature. Anyway, I like it.
C
We have an issue posted on why the API should not use generics yet, if ever, because their limitations go up against the OpenTelemetry guideline to separate your API and your SDK. It's hard to do that with generics.
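As a rough illustration of that separation problem (a sketch with hypothetical names, not the actual OpenTelemetry-Go API): Go interfaces cannot declare type-parameterized methods, so a pluggable SDK boundary has to stay non-generic, and a generic API facade must convert down to a concrete type at that boundary.

```go
package main

import "fmt"

// The SDK boundary must stay non-generic: Go interfaces cannot
// declare type-parameterized methods, so a swappable SDK sees
// only a concrete type (here, float64).
type counterImpl interface {
	Add(value float64)
}

// A generic API facade (hypothetical name) can still offer typed
// ergonomics by converting down to the boundary's concrete type.
type Counter[N int64 | float64] struct {
	impl counterImpl
}

func (c Counter[N]) Add(v N) { c.impl.Add(float64(v)) }

// sumImpl stands in for an SDK implementation behind the boundary.
type sumImpl struct{ total float64 }

func (s *sumImpl) Add(v float64) { s.total += v }

func main() {
	s := &sumImpl{}
	c := Counter[int64]{impl: s}
	c.Add(3)
	c.Add(4)
	fmt.Println(s.total) // prints 7: both typed calls crossed the non-generic boundary
}
```

The design tension is visible even in this toy: the generics live only in the API-side wrapper, and everything behind the interface loses the type information.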
A
Very cool, thanks, Josh. C++: metrics SDK work is in progress, review of the aggregation pull request is ongoing, and they're looking for more contributors. A 1.2.1 release is planned for this week, mostly focused on the logger SDK changes. Cool. No updates for Ruby or Swift. The collector's 0.43.0 release was delayed last week and has been pushed to this week, and Rust 0.17 was released.
A
Okay, moving on to the first actual discussion topic: please note the instrumentation library change, where "instrumentation library" now points at "scope." I will open this up. Did anyone add this? Who wants to speak to it?
F
Over to you, Dan, if you want it. Oh, please go ahead and explain. I just want to point out that this was the issue that was linked earlier in the discussion here.
E
Yeah, that's correct. So the basic idea is that in the log space we don't have an API, we only have an SDK, and so the APIs are effectively provided by whatever log framework a user is using.
E
So in the Java world, Log4j is an API, and we route the logs collected via that API to the log SDK. And so the question is: where do we capture the logger name? The natural place to do it would be in the instrumentation library name, but that kind of violates the definition of what the instrumentation library name does today, and so this PR modifies that to say that the instrumentation library name refers to the instrumentation scope.
I
It has got a lot of approvals, so maybe I'm just misunderstanding this, but it seems like instrumentation scope is very different from instrumentation library. Instrumentation library talks about the library that is providing the instrumentation, but instrumentation scope talks about what is associated with the telemetry, which is not necessarily what's providing it.
E
The intent from Tigran is to get a bunch of eyes on this, because this is a change that potentially affects a lot of areas, and even to keep it open for a while. So yeah, this is just a call to get extra eyes on it so that it can be properly discussed and considered. Cool.
A
All right, one from Dan: the current process to add maintainers refers to the SIG charter, but, as you can tell, no SIG has a charter document. Yep, we discussed this on the GC last week. If you think yours has one, please let me know, because I am working on rewriting the process for promoting approvers to maintainer.
G
Yep, nothing really to add here. I looked at a handful of SIGs, certainly the most popular ones, and I don't think a single SIG has a charter. So our governance documents refer to the charters, and the charters don't exist.
G
Thus nobody is a maintainer; but since this call is all maintainers, we obviously have some. So if you are part of a SIG that has a charter, please send it to me, because if there is an existing process, I want it to at least be taken into consideration.
A
Next, from John: should we add additional SIG check-ins to this meeting, e.g. instrumentation, client telemetry, et cetera? Yeah, that makes sense. Actually, I don't know why we haven't been doing that; that's my bad.
J
I'll also just say that I didn't mention in the Java update that Jack Berg, who has obviously been commenting quite a bit and is on this call, is also now a Java maintainer. Congratulations.
G
Yeah, that's what JavaScript did too, and it seems like that's basically what everyone is doing. Congratulations, Jack.
I
We recently added a third maintainer in Go, and again, it was very similar. The two of us got together, we talked, we met with the new maintainer, talked about what our vision for the SIG was, and made sure we were all on the same page; then we made a PR to change the notes and added them to the groups.
H
I will say that the important thing we didn't document, but did consider, is that the added level of commitment needs to be expressed appropriately, and that we had some sort of understanding that we're not necessarily always going to be in agreement. At the same time, we welcomed outside contributions and outside opinions, but we also wanted to have the same long-term vision; I think that was key.
A
Fantastic. All right, Josh, I see one from you about who cares about probability sampling.
C
I'm asking in this group because I've sort of said the same thing in the spec SIG for weeks in a row. This PR is pretty big and it's hard to get anyone to review it. We're at the point of having technically passed the barrier of having two approvers, one of whom is not from my company, but it's a huge spec.
C
Should we go with it or not? My recommendation is to go with it.
B
I would say that we go with it. The reason is that Otmar, who is an expert in the field from Dynatrace, has already spent many cycles giving a lot of feedback. So it means that, at least at this moment, there is one company besides Lightstep interested in this, and since this could be experimental, I couldn't see any problem at all.
C
Yeah, I don't know either. I'm being cautious, probably overly cautious. The thing is that you will hear people say in various groups, including this one and the spec SIG, "my vendor doesn't care to do that," whatever that is. So there's a difference between probability sampling as just a behavior, like "make your sampler do 50% sampling," and the functionality we're trying to spec out here, which is maybe the harder part: if you do 50% sampling, you should count every span as two.
C
If you do 25% sampling, you should count every span as four. If you do something that is 25% sampling mixed with something that's non-probabilistic, this specifies what to do, so that a vendor can just count the spans that have been sampled. I just feel like, for such a big piece of work, we should have more approvals from the core group, meaning people whose approvals count. Otmar has only been involved in sampling, and he doesn't know OpenTelemetry very well.
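The counting rule being discussed can be sketched with a tiny adjusted-count helper; `adjustedCount` is a hypothetical name for illustration, not part of the proposed specification, which encodes this information rather than computing it ad hoc.

```go
package main

import "fmt"

// adjustedCount returns how many spans a single sampled span
// represents under probability sampling with inclusion
// probability p (0 < p <= 1): each kept span stands in for
// 1/p original spans when a backend counts them.
func adjustedCount(p float64) float64 {
	return 1 / p
}

func main() {
	fmt.Println(adjustedCount(0.5))  // 50% sampling: each span counts as 2
	fmt.Println(adjustedCount(0.25)) // 25% sampling: each span counts as 4
}
```

The hard part the spec addresses is not this arithmetic but the mixed case: making the effective probability recoverable per span even when probabilistic and non-probabilistic samplers are combined.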
C
So yes, his opinion really matters, but he doesn't have a background in tracing the way we do. So I'm asking. You've got to click to expand the large diff below; it's a pretty big file. And since we're talking about it: this is an optional behavior, meant to be put into a contrib repository to get us started on something better. There were lots of reasons why we couldn't get the end state we wanted right away; it touches on the W3C trace context, and it's sort of not actually what users are looking for in the first place. Users want a configurable sampler. What Lightstep wants is for the base case to be a probability sampler that we can count, and both of those are eventually going to happen. If we merge this, I will personally begin working on configurable sampling much sooner. So that's the direction we're heading; we just need to start merging stuff.
A
But beyond that, I think you're correct. A lot of the vendors, like Splunk, advertise "we do 100% sampling, we pull in every span, and that's great for our customers." There are many, many people who use OpenTelemetry and sample probabilistically, so I think fleshing out that functionality...
C
There are big customers that can't handle that, or can't have that, or can't pay for that, and we want to be able to show metrics, essentially. So that's where we're heading, and this should help. But if you're a user and you read through this, you're going to say, "what is all this stuff? It doesn't seem to be giving me what I want," and the answer is: this is a building block. We have two or three more steps.
D
But as Carlos says, because this is moving into experimental mode, I think moving it there and advertising it as an available feature may start to get you the kind of feedback you're looking for, because I suspect, like you say, a lot of this feedback will come from end users, people who are just trying to actually turn this on and use it. They will be able to give you better feedback by actually trying an experimental version of it.
C
Thank you, yeah, I agree. If you do find yourself, say, Morgan, reading through this, and you get to the bottom and say, "gosh, this test specification feels like you're just overdoing it here; this is too much": the position I was in was that people seemed uncertain, so let's keep doing more to prove that this is going to work. So the very end of that document is a technical specification you could use to validate an implementation. I wanted to get that far.
C
One piece of feedback you could give me is that this is too simple to write such a big test for. I mean, we do know what probability sampling basically does, and statistical tests probably aren't necessary. So you could go in there and say, "let's make this test optional," or "the test is good to show that we know what we're doing, but no one should implement it." That's the kind of feedback I might take.
C
For example, the OTel Go contrib PR I have standing, with this test implemented, times out because it takes a little bit too long to run. So I don't know that we need a sampling spec test, but I wanted to show that we know what we're doing. So that's that specification.