From YouTube: 2022-07-25 meeting
Description
cncf-opentelemetry meeting-2's Personal Meeting Room
B: All right, welcome back, everyone. We'll get started with the overall SIG check-in. Just FYI, the house next to me and the house next to that are both under construction. There isn't a lot of background noise where I am at this moment, but there may be times when there is a lot of background noise; I apologize in advance.

Our first updates are for PHP: we have two issues remaining for our beta release of tracing. For Java, we have 1.16; that's gone out. For JavaScript, the metrics GA release: we have seven open items, two are in review, and 14 are closed; all but one of the open issues are small items of documentation. That's fantastic! That sounds like we're really, really close. Slow progress due to vacations; of course, it's summer. That's okay. Python: we have a working metrics GA, with seven open issues, six done issues, and three in progress; two maintainers are just back from vacation and covid.
C: I mean, "no" might be just as good as the answer I give you, but I'm guessing a month or two, just based on current progress. But also just keep in mind that the alpha release is not the GA.
B: Okay, for C++: the metrics SDK is, through release candidates, 46% complete, with seven open issues, four of which are in review; six are closed. Release 1.5 is planned for this week, which includes build improvements and async export, which is hidden behind a feature flag.
B: Next, we have the community demo. We're on track for a KubeCon, end-of-September release; a cool new front-end client and Kubernetes work are in progress. Our main focus going forward is OTel signal maturity and documentation improvements. Do we have Carter, or anyone from the community demo SIG, who wants to give maybe a broader update? I know there have been a bunch of discussions recently and some decisions have been made, and I apologize, I haven't been on those calls, since I have recurring conflicts. But for the broader group here, are you able to give an update on the direction we're headed?
D: Yeah, sure, I can give a quick update. The demo effort has been kind of ongoing for about three months now. Our main focus was taking this Google demo and giving it complete language coverage (outside of Swift), and also OpenTelemetry signal coverage across metrics, logs, and traces for all the GA SDKs.
D: We have a PHP admin service coming, and PHP itself, and then we have also added C++, Rust, Ruby, and potentially one more as well. So we're trying to give kind of the complete portfolio of OpenTelemetry features across these various languages, and give customers a semi-realistic way of running this on their own machines. They can run it in Docker, or optionally run it in Kubernetes, at a wider scale as well.
D: So that's, I guess, the base overview. We're targeting the September release, so that's kind of our v1, and we have some requirements we put together in the actual demo repo, if you all want to take a look; those delineate our exact v1 requirements. But essentially it's complete language coverage, and also complete GA OTel signal coverage, in a kind of working microservices demo, and then we're actually adding an actual front-end client as well.
B: Interesting, I assumed we had him. So then I will speak to this. On to this issue: "Consider pushing spec changes to the SIGs as issues in the repositories."
B: This came directly from feedback in the end-user working group last week. For those of you who didn't attend that (which is probably most people): on a monthly cadence, we're getting feedback from large OpenTelemetry end users. Last month was GitHub.
B: This month was Shopify, and one of the pieces of feedback that Shopify gave is that, you know, they're an end user and a maintainer of OpenTelemetry (they primarily maintain the Ruby SDK), and they don't always come to this maintainers call, and they don't typically come to the specification call, because that's a large time investment. So in certain instances, with certain spec changes, Tigran or others in the spec SIG who regularly attend the spec call had been opening new issues on each of the language repos, saying: here's a spec change, here are the details, you must go do this, this is now part of the specification. They found this incredibly useful, because it meant that if they missed a few spec meetings, or weren't regularly attending, they didn't completely miss parts of the specification being changed or added.
B: And so what Tigran is doing here is that he wants to formalize this. So there's a discussion here about whether spec changes should automatically (or perhaps manually, but always) be manifested not just in the specification, but also as new issues on the language repos, or other SIG repos, that those spec changes impact and require implementation of.
B: Looks like we don't have any comments here other than Tigran's; he just opened this on Friday, I think. So if you have thoughts on this, please reply back on the issue. Of course, we can have a short discussion here as well.
C: Take a positive action and go respond, I think, via, yeah, some sort of emoji reaction...
B: Yeah, looks like we have eleven thumbs-up here; thank you for pointing that out. So we have a lot of people there. Any other comments, or parts of this, that people want to discuss right now on the call?
E: Yeah, I'd like to clarify: would this be all spec changes, like any changelog entries? And would it happen when a PR is merged, or on release? And how quickly are language maintainers then expected to, you know; what's the turnaround time expected on various...
B: I don't know. I'm speaking on Tigran's behalf here, so I might be misspeaking, but I don't believe he, or anyone, intends this to change the turnaround time. This is really just a nice notification for SIG maintainers, so that they don't have to religiously attend the spec calls (or rather, I mean, they're encouraged to); rather, so that if they're missing the odd spec call, something doesn't just fall between the cracks.
E: Yeah, I guess it's kind of just details, but what I'm getting at is, like, you know: if you do it on merge to main, and then something changes before release, then you have two issues that are kind of on the same thing; you may have a lot of sort of duplicated-ish issues. If you do it on a release, you probably avoid most of that. I guess I would suggest we do this on release, and then only create issues for stable changes.
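[The release-time, stable-only flow suggested here could be sketched roughly as follows. This is a hypothetical illustration, not an agreed design; the changelog entry fields, repo names, and issue title format are all assumptions.]

```python
def build_spec_change_issues(release_tag, changelog_entries, sig_repos):
    """Return one GitHub issue payload per (stable spec change, SIG repo) pair.

    Intended to run once per spec release, so that duplicated-ish issues from
    pre-release churn on main are avoided.
    """
    issues = []
    for entry in changelog_entries:
        # Skip experimental changes, so SIG issue trackers aren't clogged with
        # work that implementations are not yet expected to pick up.
        if entry["stability"] != "stable":
            continue
        for repo in sig_repos:
            issues.append({
                "repo": repo,
                "title": f"[spec {release_tag}] {entry['title']}",
                "body": (
                    f"The specification changed in release {release_tag}:\n\n"
                    f"{entry['description']}\n\n"
                    "This SIG is expected to implement the change; this issue "
                    "is a notification and does not imply a fixed turnaround "
                    "time."
                ),
            })
    return issues
```

The payloads could then be posted to each repo's issue tracker (for example, via GitHub's "create an issue" REST endpoint) by a release-triggered workflow.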
B: I think that makes sense as well. Again, we don't have Tigran here to discuss it, but I kind of suspect that's what was intended: that it's only for, effectively, GA spec changes.
B: Yeah, because it's going to clog up your issues list, and perhaps even build expectations in the people monitoring it that you're supposed to implement this immediately, when in some cases you're not, because it's still experimental.
B: Yeah, do you want to reply back, Daniel, on the issue, just to clarify that with Tigran?

E: Sure.

B: Okay, perfect. All right, any other questions or comments about that?
A: Is that going to be tied to the...

B: Oh, I see what you're asking. I don't know, so I would say this: I think the spec compliance matrix is tied to the state of the spec itself. I don't think these issues getting generated and pushed out would really change the timing of when we add things to the spec compliance matrix. Probably, as a side effect of what Daniel described, I imagine we will set this up to only occur when there's a sort of GA, committed change to the spec; thus, yes, it would change the spec compliance matrix, but that has nothing to do with this process that we're discussing here. That's just because a GA change will go into the spec and it'll be considered.
B: All right, so the final item here (I had added this) is just an update about our presence at KubeCon North America. I'm sure many people submitted sessions to the normal track; I have no info on those, because I'm not part of the organizing committee or anything. Hopefully, speakers hear back soon.
B: But what I do know about is the sessions that we applied for as a project. We've applied for a maintainer talk for the project, as usual. If you're a maintainer and you're going to be at KubeCon, please reach out; we want to include as many maintainers in this as possible, and we typically like to include a maintainer Q&A in these sessions, where everyone comes to the front. We put down some governance committee member names for the maintainer session.
B: For now, those are effectively placeholders, because there are no plans yet. But if you have any interest in speaking at the maintainer session, please, please let me know, and we will get that set up. Just one FYI for the maintainer session: it's only about half an hour long; I think it's usually 35 minutes plus a short Q&A. So we can't do a huge amount of speaking, but it's a good opportunity for us to speak to people who are both using, and also much more sort of interested in, the project.
B: The other item that we have now is a four-hour meeting room. We'll use this as a sort of in-person community session, to go over our roadmap and discuss, you know, anything we can improve with the contributor experience and the user experience with OpenTelemetry. We held a similar session at the previous KubeCon; I think it was quite productive.
B: This one, we have four hours. So if anyone who's a maintainer, or really contributes at all to the community, wants to do any kind of talk or discussion with primarily contributors to the project, about, you know, sort of the maintainer experience, or tips, or anything like that, where your primary audience is contributors: if you want to participate, just reach out, or talk to the GC directly if you prefer; it works either way. But those are the two sort of project-wide things that we have planned thus far. We will also be getting a booth or kiosk on the project show floor, and we'll organize that a little closer to KubeCon. This is different than Valencia: we'll have that all day, rather than as a half-day session, so we'll have it every single day of the conference.
B: We'll have a booth that's available for OpenTelemetry, and community members can staff it. It's still early days (I think KubeCon is in November, so we've got lots of time to plan this), but for now, if you think you're going, can you just write your name down here? We can start to use that to coordinate for some of these sessions, and for the kiosk, and everything else.