From YouTube: 2021-09-29 meeting
A
I don't see Will here yet, but Amir, since Valentin's not gonna be here for the entire call, are you okay if we talk about the API topic first? Yeah, yeah, sure. Okay, let me move this up. So for those that haven't already seen: there's been some discussion about this in the past, but I created a draft PR on the API for it. Essentially the idea is to introduce a context API which allows you to change the current context, instead of using `with`, which requires a callback. It would change the context for your current async execution.
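The proposal contrasts with today's callback-scoped API. A minimal sketch of the difference (the `attach`/`detach` names and the toy stack-based manager here are illustrative stand-ins, not the real `@opentelemetry/api` surface):

```typescript
// Minimal stand-in context type; the real @opentelemetry/api shapes differ.
type Context = ReadonlyMap<symbol, unknown>;
const ROOT_CONTEXT: Context = new Map();

// A toy synchronous context manager backed by a stack.
const stack: Context[] = [ROOT_CONTEXT];
const active = (): Context => stack[stack.length - 1];

// Stable style: context.with(ctx, fn) scopes ctx to the callback only.
function withContext<T>(ctx: Context, fn: () => T): T {
  stack.push(ctx);
  try {
    return fn();
  } finally {
    stack.pop();
  }
}

// Proposed style: attach makes ctx current for the rest of the async
// execution and returns the previous context so it can be restored later.
function attach(ctx: Context): Context {
  const prev = active();
  stack.push(ctx);
  return prev;
}

function detach(prev: Context): void {
  // A real implementation would verify that prev matches the stack state.
  stack.pop();
}
```

With an `attach`-style API, instrumentation that cannot wrap user code in a callback (patching an event emitter, for instance) can still switch the current context and restore it afterwards.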
A
I don't know if anybody has any other ideas beyond what I've listed here, but essentially what I came up with is: either have a pre-release version with a pre-release identifier, or have some experimental namespace within the API, so you'd call something like `api.unstable.context.attach`. Or we could just document the methods themselves as unstable, so we add each one as a regular method, but we say "this is unstable and potentially changing; use it at your own risk." Or we could create an experimental API package which would wrap and extend the API, and users that want to use the experimental functionality would use that instead. Those are the ideas that I came up with.
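The `unstable` namespace option might look roughly like this (a sketch only; the namespace and method names are hypothetical, not the real API surface):

```typescript
// Sketch of an "unstable" namespace grouping experimental methods apart
// from the stable surface. All names here are illustrative.
type Ctx = string; // stand-in for a real Context type

const api = {
  context: {
    // Stable, callback-scoped entry point.
    with<T>(ctx: Ctx, fn: (ctx: Ctx) => T): T {
      return fn(ctx);
    },
  },
  unstable: {
    context: {
      // Experimental: may change or be removed in any minor release.
      attach(ctx: Ctx): Ctx {
        return ctx;
      },
    },
  },
};
```

Callers opt in explicitly at every call site (`api.unstable.context.attach(...)`), so the instability stays visible in the code that depends on it.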
D
I had a question about this, because I was wondering: we have the official SDK, which is 1.0, and we have this API change on the way. But what exactly happens if, for example, another SDK exists tomorrow from some vendor and they don't want to implement those methods?
D
I mean, since they are optional on the API side, should they have, like, a no-op implementation? What did we discuss before, about being able to just no-op? Do we need everyone to implement all the API methods, or do we accept that people just don't implement the whole API?
D
Yeah, but I was wondering about the future case where, for example, we start supporting more platforms. There was some discussion about other platforms for the ESM hooks: for example, VM, or Deno, or Cloudflare Workers. There are a lot of platforms that we could implement. What if there is, for example, no way to actually implement those methods?
A
So I would probably leave it up to the SDK for that environment, right? If you have, like, a Deno SDK and you implement the API, and there's a method that you can't implement, that's not feasible, I would say that they should implement a stub for the method and decide how to handle it, because they will know more about it than us. It would be, yeah, a case-by-case basis, and maybe different on different platforms.
A
You know, in some cases they may want to just log an error, and in some cases they may want to fall back to some other functionality. There's no way for us to really know how to handle the case where it's not feasible to implement.
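A stub in that spirit might look like this (a sketch with hypothetical names; how a real platform SDK handles the situation would be up to its authors):

```typescript
// Hypothetical API method that an SDK for some platform cannot support.
interface ContextManager {
  attach(ctx: unknown): unknown;
}

// One option: a stub that warns once and otherwise does nothing, so
// callers keep working even though the feature is unavailable here.
class UnsupportedContextManager implements ContextManager {
  private warned = false;

  attach(ctx: unknown): unknown {
    if (!this.warned) {
      this.warned = true;
      console.error('attach() is not supported on this platform; ignoring');
    }
    return ctx; // fall back to returning the input unchanged
  }
}
```

Other platforms might instead throw, or fall back to a different propagation mechanism; the point is that the stub, not the shared API, decides.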
D
Okay, so I think that's the main thing; that's two issues there.
D
It is the fact that we are implementing something that is not stable, and the behavior when the method will not be feasible. And this pretty much answered the question: if it's not feasible for the JS side, for example, we can just no-op the function or something like that. But for the whole positioning, or just the naming, I would prefer to do it the Node way, where we have branches for each release, and methods that are unstable are not put in a separate namespace but are marked as such in the documentation.
D
Yeah, and I mean, we could release it just for development for now, because there is no implementation available yet. I think we could release it as an alpha and then, as soon as we have the SDK working with it (for example, the async hooks context manager in the Node SDK), we could release it as a 1.1, I think.
A
Yeah, I mean, I would be totally fine with that as long as we're very clear about the fact that it's unstable. It would probably have to be in the TSDoc comments as well as in, you know, the written documentation. But I know that Node does it that way, and I'm sure that there are others that also do it that way, and to me that seems like the easiest way to do it: let users use the method and, you know, discover issues with it before we declare it as stable.
D
I think maybe there's already a discussion, for other things, about how to handle this, so maybe this is already covered somewhere.
A
Yeah, that's a good point. Maybe I should bring it up in the maintainers meeting on Monday. Nev, it sounded like you were gonna say something.
F
No, okay, I have a question, yeah. So as long as it's experimental, we cannot use it in any instrumentation from contrib, right? So we can only play with it and see how we feel, but we cannot release any code that's using it, that might break if we remove or change the functions' signatures, right?
A
Not at all; I think that it would just be up to whichever instrumentation author, you know, to properly document it and say: look, we're using unstable API features, you must use API 1.1 or it won't work. And to probably, you know, implement it in some safe way: either wrap it in a try/catch, or assume it could possibly fail because you're calling an unstable method. But no, I don't think that we would banish it from contrib.
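Guarding a call to an unstable method could look like this (a sketch; `unstableAttach` is a hypothetical stand-in for whatever experimental method an instrumentation relies on):

```typescript
// Hypothetical unstable method; on an older API version it may not exist.
type MaybeApi = { unstableAttach?: (ctx: string) => string };

// The instrumentation calls it defensively: feature-detect, wrap in
// try/catch, and degrade gracefully rather than crash the host app.
function safeAttach(api: MaybeApi, ctx: string): string | undefined {
  if (typeof api.unstableAttach !== 'function') {
    return undefined; // API too old: skip the feature
  }
  try {
    return api.unstableAttach(ctx);
  } catch {
    return undefined; // unstable method failed: skip the feature
  }
}
```

This matches the advice above: the contrib package works with API 1.1, and quietly loses the extra feature, instead of crashing, anywhere else.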
A
We probably would not put it in the stable instrumentations in the core repo, like fetch and HTTP and gRPC, but for the contrib ones I don't see any reason to banish it.
A
I think that's the whole point of contrib: to allow the authors to do what they feel is best. And if they have some feature in the experimental API that they need, or that they think is cool, it's up to them to properly communicate to the people that are installing it, yeah, what the risks are, essentially. Okay.
A
So the steps would be: early development of the API; alpha; implement in the SDK. The plan outlined below, let's see: unstable methods may change with minor versions.
A
I suppose that's a matter of definition. I consider them as part of the SDK.
A
Yeah, that would definitely work. I mean, it would be a pain, but yes, it is for sure possible to use it with explicit context, without a context manager.
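Explicit context means threading the context (or the span itself) through every call as a plain argument, with no context manager and no async-hooks propagation. A sketch with simplified stand-in types:

```typescript
// Stand-in span type; real OpenTelemetry spans and contexts are richer.
interface Span {
  name: string;
  parent?: Span;
}

// Explicit propagation: the active span is just a parameter (Go-style),
// so no context manager implementation is needed at all.
function startSpan(name: string, parent?: Span): Span {
  return { name, parent };
}

function handleRequest(parent: Span): string {
  const span = startSpan('db-query', parent);
  return `${span.parent?.name} -> ${span.name}`;
}
```

This is the pattern the Netflix/OpenCensus and Go comparisons below refer to: more boilerplate at every call site, but no dependency on any propagation machinery.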
D
Right, and without context too. There were some people from Netflix, I think, at the time of OpenCensus, that wouldn't use the context at all; nothing but the API. So they just passed spans around with their own framework at the time. So yeah, I think that's still possible, and it was a designed use case at the time.
C
Yeah, I mean, I think this is how the entire Go language API works as well: it's always using the explicit context. Yeah.
A
Yeah, that's because there is no implicit context in Go, and that's like a well-established idiom of the language, yeah.
C
So I guess, to me: like, in Python we don't have pluggable context managers. We just have the contextvars implementation, so it's just part of our API and it's not really pluggable.
A
I mean, just context manager implementers. But I expect most SDK implementers to also be context manager implementers in most cases, or I guess they would maybe use the open source one.
D
Well, I think the closest one that is like this is Datadog, which has its own agent. They could use the core API and SDK from OpenTelemetry there, but they will always use their own context manager, because they have some hacks to apply and they support way more Node versions than us. So they would have their own SDK and their own context manager, for whatever use case they want to fix.
A
Are we okay with this plan as it is? Yeah, since, I mean, nothing's going to happen overnight. I'll reach out to the TC and make sure that they're okay with this, and we will need to outline a development process for the API that allows us to develop, you know, a 1.1 branch in parallel with the 1.0 and backport fixes and such, which we haven't really come up with processes for yet.
A
So that's sort of a precursor to this: we need a development process that allows us to develop a new 1.1 and still backport fixes to 1.0.
A
Okay, I think that probably covers this topic. Then: Amir or John, would you like to talk about this?
F
Yeah, I can talk about it. So Jonathan encountered an issue where he had both OpenTelemetry and GCP instrumentation installed in his app, I believe, and the OpenTelemetry instrumentation crashed the application, because they don't play well together. They both try to patch the same object, and it was just logically not working together.
F
Do we want to support it? Do we want to state that it's not supported?
F
Do we want to do a best effort, where, if a user installed both of them, then we're not giving any guarantee about the quality of the results?
G
Just to give a slight bit more context: the reason I had both in one project was because I just couldn't get the OpenTelemetry exporter to Cloud Trace working in the project. So, rather than just losing all of my traces for my developers, I, you know, put both in there, and that's when I started seeing this problem.
G
Yeah, I appreciate the idea of just saying, like, well, we don't support it, or, you know, it's kind of a known issue, and just letting people know that. And the reason was because, as I said, I couldn't get the trace exporter working for that.
A
You know, if we did find a way to make them work together, we could make suggestions, where, you know, "if implemented this way, we find that it works better with these external tracing tools." But there's essentially no way that we can make a guarantee that our instrumentations will play nice. At best, we could make a guarantee that our SDK won't cause conflicts, and say it's up to instrumentations to deal with conflicts on their own. But when you're monkey-patching the internals of modules and wrapping methods, I just don't see that there's any way to guarantee that we're not interfering with anything.
A
That's why, for example, the Dynatrace agent, if it discovers OpenTelemetry within the same process, disables the OpenTelemetry SDK and, you know, wraps the OpenTelemetry API methods itself to make sure that the OTel spans still work. But, you know, so we concluded that there would be no way to essentially guarantee that they would work together.
A
Two different tracing agents in one process is an extremely difficult problem to solve. It would probably require changes not just in OpenTelemetry, but also in whatever other tracing tool you want to run in the process. It probably requires some sort of collaboration with the Google Cloud Trace team; I'm open to trying to solve that.
A
If somebody on that team is open to, you know, some sort of collaboration. But somebody would also have to have the time to develop, test, and, you know, push through whatever changes would be required. Yeah, to be honest, in my mind this is in the category of a good idea that, I think, is, you know, well intentioned, but maybe not feasible.
A
I'm not familiar with this specific crash, but my understanding is that it's actually the double patching that's causing it to crash, right? That somehow the wrapping itself is causing an issue? Or is it some internal of our SDK that has caused the problem?
F
Oh, it's the wrapping; it's the instrumentation. And Jonathan suggested a solution, but it will work only if OTel is being set up before GCP, and it will make it so that the auto-instrumentation will not be able to bind its context to some of the callbacks.
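The failure mode is easy to reproduce in miniature: two agents independently monkey-patch the same function, each closing over whatever was there before, and each assuming its wrapper is the outermost one. A sketch (not the actual GCP or OTel patching code):

```typescript
// A module function that two tracing agents both want to instrument.
const mod = {
  query(sql: string): string {
    return `rows for ${sql}`;
  },
};

const callers: string[] = [];

// Each agent monkey-patches mod.query, closing over the previous value.
function patch(agent: string): void {
  const original = mod.query;
  mod.query = (sql: string): string => {
    callers.push(agent); // e.g. start a span, bind context, etc.
    return original(sql);
  };
}

patch('otel');
patch('gcp'); // gcp's wrapper now wraps otel's wrapper
```

Both wrappers run, but nesting order, and therefore which agent's context binding wins, depends entirely on setup order, which is why the suggested fix only works when OTel is set up before GCP.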
G
Well, it's understandable. I mean, honestly, if someone has the will or time to help figure out why the exporter to Cloud Trace isn't working, that'd be another thing to kind of solve and fix up. But, you know, that would then solve both problems, because then the answer in the docs is just: go use the trace exporter and stop trying to do both.
A
That's in their own repo; I don't know how actively it's maintained. I know it used to be maintained by Mayur Kale, who used to be a maintainer of opentelemetry-js, but he's sort of moved away from the community, and I don't know who's maintaining that anymore or how active they are.
C
Yeah, I just made a release for it yesterday, for the 0.25 SDK. I don't know if that'll solve the problem. It sounds like that's just an OpenTelemetry exporter, though, so if there's an issue with the instrumentation, I don't think it would affect that, yeah.
G
To do the quick high level: I have, as part of my code, something that creates a Node, you know, whatever, a Node provisioner or whatever, I forget, the provider, and...
G
Yeah, trace provider. Then it creates a Jaeger exporter and a Cloud Trace exporter, and then adds all the instrumentation I can get. All the stuff goes out to Jaeger properly, and Cloud Trace just totally goes flat at that point; it just doesn't seem to get anything out. So maybe it is because I am on, you know, the 0.25 version.
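A setup like that, one provider fanning spans out to two exporters, can be sketched with stand-ins (the real provider, processor, and exporter classes live in the `@opentelemetry/*` packages; these simplified shapes are illustrative only). The point is that a failure-isolated pipeline lets one backend go flat without silencing the other, which matches the behavior described:

```typescript
// Simplified exporter interface; real SpanExporters are async and batched.
interface SpanExporter {
  export(span: string): void;
}

class InMemoryExporter implements SpanExporter {
  spans: string[] = [];
  export(span: string): void {
    this.spans.push(span);
  }
}

class BrokenExporter implements SpanExporter {
  export(_span: string): void {
    throw new Error('exporter misconfigured');
  }
}

// Fan each finished span out to every exporter, isolating failures so a
// broken backend (here, a stand-in for the Cloud Trace exporter) does not
// stop the working one (a stand-in for Jaeger) from receiving spans.
class FanOutProcessor {
  constructor(private readonly exporters: SpanExporter[]) {}
  onEnd(span: string): void {
    for (const exporter of this.exporters) {
      try {
        exporter.export(span);
      } catch {
        // a real processor would log the export failure
      }
    }
  }
}
```

Debugging the real setup would mean checking whether the Cloud Trace exporter is failing silently in exactly this way, with its errors swallowed while Jaeger keeps receiving spans.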
A
You know, even if that means only using Google Cloud Trace instead of using OpenTelemetry, it's always easier to just use one. Obviously I would prefer you use OpenTelemetry, but you've got to do whatever is best for you, in the case that you absolutely can't use just one. I just don't know what guarantees we can make.
A
Yeah, I mean, maybe we should document on our README or something like that: specifically, we don't recommend two agents in the same process, and just call it out as explicitly unsupported.
A
I know that doesn't really solve any real technical problems, but yeah, I just don't know what else we can do, honestly.
A
If somebody wants to put in the effort to try to come up with a solution to make it work with multiple agents in one process: I know that that's something that a lot of people run into. You know, we have the Dynatrace agent that has to run in the same process as OpenTelemetry sometimes, and we would love it if there were a solution for them to actually work well together.
A
You know, I'm sure that whoever develops GCP Cloud Trace probably feels the same way, and, you know, other APM vendors. It's just such a huge effort that, you know, we decided it wasn't necessarily worth it to try to do that.
A
Is there anything else about this topic that we want to talk about, or should we move on?
H
Yeah, I had already mentioned it last week, and I just wanted to call out that, yeah, it seems it's pretty easy to rebase it on your change to move the exporters to experimental, and so it's ready to go now. But I think we'll be blocked until the 1.0 core release is out; or rather, I guess, actually until the next experimental release is out, now that all the exporters are in 0.26, or are in experimental.
H
Yeah, I think that it just needs to wait until 0.26, because this is now entirely an experimental change, with all the exporters in experimental, and assuming that 1.0 of the core, the stable SDK, will depend on 0.26 of the experimental.
H
Then I think that this could be merged after 0.26 goes out and be included in the next release. Okay, so...
A
My current plan is to release 0.26 like this afternoon, and release the 0.26 experimental immediately, right on its heels, as quickly as possible; and then, you know, let that sit for some short amount of time, just to make sure everything isn't completely broken, and then release the 1.0 this week.
A
I had been hoping to do it today, but I think at this point this week is probably the best that I can do. And then, after the experimental 0.26 release, I think we can probably merge yours, but it hasn't gotten any reviews yet. So I guess what you're saying is: it's ready to be reviewed, right, but not yet merged?
H
Exactly, yeah. And I was having this really tricky unit test issue, but it's resolved, presumably because the exporters are all in experimental and I'm not doing any weird, like, cross-linking between stable and experimental anymore. So yeah, it should be good to go now: good to get reviews and then merge after 0.26.
A
Do you have anything beyond that to say about it, or are we good to move on? Nope? That's it. Okay. This next issue is also yours, but yeah, there's a relatively quick answer to this: we update those docs every time we have a release. Okay, that's...
A
That's been a frequent pain point for me, and one that we talked about last week, and, you know, some ways that we can get around that. But, like, the technical docs that are auto-generated, they are published on every release, specifically to avoid that problem. So if anybody goes to opentelemetry.github.io...
A
You know, I could go either way on that, honestly. The sdk-node package is just so much easier to use that I think, even though it is experimental, for users that are just getting started, we should show them, like, the easiest way to get up and running possible.
A
That said, it would be really frustrating as a new user if the docs were out of date, or, you know, if the API changed in some breaking way that wasn't immediately obvious; that would be harder for new users to debug if it doesn't work perfectly. I guess I don't know how extensively it is used in the website docs. Is it everywhere, or is it just in one part?
A
Yeah, so it's the getting started guide. I mean, honestly, I think it's fine; it's just such an easier way to get going. It's unfortunate that it's not going to be a part of the 1.0. Maybe in this documentation we should probably have, like, a manual setup example; not the easiest, like, getting started, but "here's how to manually set it up" for advanced users. And then, at the top of the getting started guide, we could have a little disclaimer that says: this uses the experimental sdk-node package.
A
Yeah, I think that that's going to be true for a long time. You know, metrics 1.0 stability, I think, is going to be six months out, if we work as quickly as possible. Instrumentations are going to, you know, probably go 1.0 sooner than that, but not all of them, I'm sure; they'll go one at a time, when the particular authors are ready. I think having 0.x versions somewhere in your pipeline is going to be a problem for a really long time.
A
Ivan asks if we have updating the docs as an action item in the pull request templates. No, we do not, but that is a good idea. Ivan, do you mind making a PR to add that?
A
So I'd like to talk about the 1.0 release. You can see at the bottom the sort of rough, short roadmap for the next couple of days that I have: I'd like to release the 0.26 core today, and the experimental hopefully also today, as long as I can get reviews on it quickly enough; and then probably the 1.0, either on Friday or on Monday, depending on how quickly we can get it approved and how the 0.26 update goes.
A
So I have asked at a few meetings: are there any issues that we think should block the 1.0? I think we've covered everything that's been brought up so far, but if anybody does have an issue that they think should block the 1.0, now is really your last chance to bring it up; you have until essentially Friday to bring it to my attention.
A
If anybody sees any of these that they think is not ready, please speak up; otherwise, I think that we're going to move forward.
A
So I suppose I don't have much to say about it beyond: now's the time to speak up if you're going to; otherwise, you know, we can't wait forever.
A
I don't have much else to say other than that. So if anyone has anything that they'd like to bring up, they can feel free, and if not, then I don't have anything else on the agenda. So if somebody has something not related to 1.0, we have about 15 minutes left.
I
I have a question, and I don't have a lot of context, since it's one of my first meetings I've been attending. What is the plan for metrics, like, the upcoming weeks or months? There's some usage already of the package we have as experimental right now, and I know that the feature freeze happened for the metrics already and other libraries are catching up. Is there anyone leading the efforts there?
I
Is that something, like, multiple folks could collaborate on as well? And I'm asking that because some libraries implemented the specs in different ways, whether with counters or different naming conventions for some of the things, and, you know, that might change a little bit. So perhaps: I created a ticket in the GitHub to kind of track that progress, but perhaps I can post the proposed interface for the API at least, and then go from there.
A
Yes, so Bart, one of the maintainers, wrote most of our existing metrics implementation, and I know that he was excited to work on the updates. He's on a two-week vacation right now.
A
So you'd have to wait for him to get back. And there's an engineer named George Perklebower from Dynatrace who was working on Python a little bit and has been joining the metrics SIGs for the last two or three months at this point, and he is planning on dedicating time to updating the metrics in JS. I think at this point he's just waiting for Bart to get back.
A
So those are the two people that I know will be working on it, if not full-time then close to full-time.
A
So those are the two people that I would reach out to. And if you have not been joining the metrics SIG meetings, I would probably suggest that, if you can't join them, then at least watch the recordings or look at the agenda, to get a feeling for where things are at.
A
Okay, if not, then please take a look at the release 0.26 PR, and when that merges, you know, expect to see the experimental release PR soon on its heels. The quicker we can get those approved and merged, the more quickly we'll be able to get ready for the 1.0.