From YouTube: 2022-04-27 meeting
C: I don't see Valentin (I think he was going to join), but feel free to add yourself to the attendees list on the agenda here.
C: First item I have on the agenda: the end user working group is working on a feedback survey that they're going to give to end users, to try to gather feedback for the project. Before we send it to end users, they have requested that maintainers and approvers give them feedback on the survey itself, to make sure it's actually collecting the information that we want.
C: I don't know if anyone in this group is interested in that, but if you are, take a look at it and give them feedback. I will be doing that myself, since I feel like we don't get a lot of feedback from end users and I think that's a gap we need to fill, but I didn't know if that would be interesting for anyone else here.
C: Before I move on, does anyone have questions about that?
C: For those that don't know, we released the 1.2.0 and the 0.28 releases this week, or last week technically, so I wanted to talk a little bit about what the next steps are here, particularly in terms of metrics.
C: There were a lot of metrics updates, and I think we're in a pretty good state at this point. The last item that really needs to be done before the metrics SDK is useful is updating the version of the proto that we're using, which is being done by Mark. I think there's a draft PR open for this already, right, Mark?
C: Open and ready for reviews. Okay, can you add the link here, just so that people can find it?
C: So that's sort of the last really required PR before the new metrics SDK can be used by more people, so I think it's currently probably our highest-priority open PR; please take a look at it. Once we get through that, there are a handful of things that we want to do before we mark it as stable or generally available.
C
There
are
a
few
pr's
open
here
and
I've
linked
them
that
that
are
in
need
of
review.
But
beyond
that
we
are
definitely
in
need
of
documentation
and
examples.
C: I've already been working on the website documentation, so I'm handling that; let me put my name there. But we need examples, particularly using the SDK with different exporters, and using the SDK with the views API, which is a completely new concept, that type of thing. So if anybody is interested in that type of work, we definitely are in need of it at the moment.
C: The other thing that we have, so I mentioned the PRs that we have here: the Prometheus exporter is not completely specification compliant according to the latest spec.
C: I can't use my computer; let's see, how do I find this... project, SDK.
C: So this is the issue that is tracking things that still need to be done, so I'll add that here. Again, if people are looking for tasks, I think nobody is currently working on this, so this is a good up-for-grabs work item that will definitely need to be done before the GA.
C: I know I kind of breezed through that fairly quickly. Does anyone have any questions, or is the short-term metrics roadmap pretty clear to everybody here?
F: I just noticed, while I was working on the last few PRs, that there seems to be a plus-or-minus one percent in the total code coverage being reported, not very consistently. I recently had a draft PR open where I just rewrote some of my commit messages, and then the code coverage step failed for some reason, and that was with no change at all.
A: Yeah, I have had it; I think it was showing like minus five percent once. It's quite random, because we only run tests for changed packages, so every commit that gets merged has only a subset of the coverage reported, and that's why it seems random, because lerna isn't...
A
I
think
the
learner
isn't
always
very
consistent
about
what
it
detects
has
changed
either
sometimes,
and
it
has
totally
different,
like
criteria
for
the
changes
right,
so
the
coverage
changes
from
commit
to
commit
and
pr2pr,
because
the
last
commit
that
your
pr
is
based
on
has
basically
a
random
subset
of
the
test
trend.
A
There
you
go
the
coverage
reported
as
well.
There
is
a
feature
called
partial
update,
something
code
com.
Basically,
if
you
like
partial
code
coverage,
google,
it
it's,
there
is
some
configuration
options.
You
can
apply
that
should
take
care
of
it,
but
I
haven't
gone
deeper
into
it,
because
because
the
core
repo
has
an
older
version
of
gotham
anyways
rank,
which
needs
to
be
updated
first,
I
guess-
and-
and
I
have
had
my
plateful
of
other
things-
basically.
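The partial-coverage feature A is referring to is likely Codecov's carryforward flags, which reuse the last uploaded coverage for packages whose tests did not run in a given commit, instead of counting them as uncovered. A minimal sketch, assuming a flag per package (the flag and path names here are illustrative, not the repo's actual configuration):

```yaml
# codecov.yml (sketch): carry the previous upload forward for packages
# whose tests were skipped in this commit
flags:
  core:
    paths:
      - packages/opentelemetry-core/
    carryforward: true
comment:
  require_changes: true   # only comment on PRs when coverage actually changes
```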
A
May
be
the
case,
but
I
know
we
have
very
similar
symptoms
in
country
and
I'm
pretty
sure
it's
due
to
the
fact
that
we
only
test
partially.
F: Yeah, just one example that has happened to me: I was working on a completely different part of the code, and then the code coverage of the zipkin exporter was actually reported as having gone away, which was quite surprising to me. So I'm not sure whether this is the same issue or not, but it's quite odd sometimes. One thing that may affect it...
C: You know, more than seven days old, so Codecov may not be comparing a fixed branch against a fixed branch. It may be comparing the pull request with the changes, with the correct branch, against a previous state with an incorrect path. On the PR you were working on, Mark: was that a recent PR, or one that's been open for a while?
F: That was a recent one, actually; that was the proto update that I was working on. It's not in the history anymore, because I had it marked as draft and then did some rebasing before I opened it for review. But yeah, there's also sometimes a difference between the reported coverage in the Codecov report and the reported one in the check, which is off by like 0.5 percent, which is also a bit weird.
F: I've tried figuring out what causes this, but I haven't really gotten behind it yet.
C: Okay, I wasn't aware of the Codecov partial updates that Rana just mentioned, so that might be worth looking into.
C: Yeah, so I think a lot of our testing tooling is outdated, partially because we are trying to support old Node runtimes and we can't update; you know, Mocha can't be updated because it breaks Node 8, and I think Codecov was the same thing. When Dependabot tried to update it, it failed to run on older Node versions.
C
Maybe
it's
worth
discussing
dropping
testing
for
for
those
older
node
runtimes.
I
know
there's
been
ongoing
discussions
about
that
in
the
spec
repo
and
there's
actually,
I
believe,
a
spec
pr
currently
open
about
this.
C: So this changes the wording to say "a major version bump". Originally it said "or a drop in support for a language runtime", but this removes that. So with this change, we would be able to drop support for old runtimes without a major version bump.
C: I know this is something we've talked about almost every meeting for like two months now, and we keep coming to the same answer: that the spec didn't really allow it. But I think it's really causing pain now, and as we move forward it's only going to cause more pain, not less, if we don't do something about it. And I really don't want to bump the major version this soon in the life cycle. In the Node ecosystem, I know it's not really all that consistent: there are definitely some packages that bump their major versions when they drop support for runtimes, but then there are some that never have any official runtime support at all, and it just works on whatever they happen to test it on.
C: Now, personally, if it were me, I would say let's drop support for at least Node 8, probably also Node 10, and not bump the major version; do that as a minor version, in the SDK at least. In the API we probably don't need to, because we have more backwards-compatibility requirements there, and there aren't as many PRs; it doesn't change as often, and I think it's easier to maintain that backwards compatibility in the API.
C
But
what
do
you
guys
think?
Do
you
think
it's
time
to
to
talk
about
dropping
at
least
eight?
Maybe
ten.
C
G: It could happen, right? Because users install us with the caret, so they will just get the latest version and it will break their system. I'm not sure what we should do about it, but it is possible.
H: No, I'm not saying that we should drop support for Node 12 in three days, but just going by the releases page that...
C: If something breaks, those users can always roll back, right? If they have a system that's working, and they update their OpenTelemetry, and they're running on something very old and deprecated like Node 8 or Node 10, and they update their dependencies and it doesn't work, then they roll it back. I think users that tend to be on things like Node 8 and Node 10 and really old runtimes are kind of used to that happening and used to needing to do that.
G: Yeah, I think most of our problems are in the dev dependencies. For example, if we add something like URL, which was added in Node 10, it for sure will not work on Node 8; then we know it will not work. But "it's supposed to work, but we don't test it" is a different statement, I think.
C
It's
not
obviously
a
perfect
test,
but
it's
a
good
indicator
that
it's
at
least
it's
still
working
in
the
in
in
at
least
the
most
basic
case.
C
I
would
vote
for
dropping
eight
and
probably
ten
and
then
making
a
policy
for
when
we
will
drop
versions
right
now,
on
a
readme
we
say
we
support
node,
8.17
and
up,
and
I
think
instead
we
should
say
we
support
all
current
versions
of
node.
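A support floor like the one being discussed is usually declared in each package's `engines` field, so npm can warn users installing on an unsupported runtime. A minimal sketch (the range shown is illustrative, not a decision from this meeting):

```json
{
  "engines": {
    "node": ">=14"
  }
}
```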
C
Okay,
so
can
one
of
you
does
someone
have
time
to
handle
that
I'm
actually
going
on
vacation
for
the
rest
of
the
week?
So
I,
unless
I
can
do
it
when
I
get
back,
but
if
we'd
rather
do
it
now.
A: I don't think we are in a hurry. I mean, we have had it this way anyway, and it's not pressing right now; we don't have any current pressing issues that would be solved if we dropped today.
C
We
should
wait
for
the
spec
issue.
Actually
I
just
realized
yeah
anyways.
Where
did
it
go?
I
think
I
lost
it,
but
there's
a
spec
pr.
It's.
C
Yeah
please
edit
the
document,
and
I
guess
I
will
create
an
issue
so
that
people
who
aren't
in
this
meeting
are
at
least
aware
of
our
plan
here.
D: Sorry, quick question: I might have missed part of the conversation, but what would be the guideline for dropping support going forward?
C: We don't have an official policy yet; we're somewhat waiting for the spec issue to complete. But if it's merged in its current form, then we will probably make our own policy. I think it'll go SIG by SIG, and I would recommend something like end-of-life plus one year, but that's something that we'll have to discuss as a group.
C: Yeah, we should certainly look at least at things like Lambda and see what their policies are, so we can kind of mirror them. We certainly don't want to be too aggressive in dropping support for old versions, but I think we also don't want to just maintain them forever when they're causing real pain.
A
We
have
something
a
discussion
around
dropping
support
for
old
node
versions,.
C: Okay, until that spec PR merges, I think we probably shouldn't do anything. But yeah, it looks like, Ronald, you already approved this. Anyone that hasn't seen it may want to take a look, just to make sure everybody's on the same page.
C: Yeah, so phase one is just that the runtime isn't updated anymore, and phase two is that you can't actually create Lambda functions with that particular runtime. It continues to work if you already have one deployed, but you can't deploy a new version of it or anything like that.
C: The issue, yeah; putting it in the issue would probably be helpful. There may already be...
C: Okay, I guess we didn't really finish the Codecov discussion. Actually, I realized afterwards that I think the code coverage only runs on one particular version of Node anyway; I think it's running on 14. So it actually shouldn't be a problem to update Codecov.
A
We
still
have
problems,
but,
as
I
mentioned,
we
also
run
the
tests
on
only
changed
packages
which
also
causes
another
set
of
problems,
at
least
on
top
of
that,
or
maybe
maybe
we
have
solved
the
underlying
issue,
but
that
one
that
the
core
has
but
have
another
set
anyways.
We
also
have
problems,
but
okay,.
C
So
I
mean
maybe
that's
a
good
first
step
before
we
look
too
deeply
into
it.
We
should
make
sure
we're
at
least
running
the
latest
code
cup
version.
C: Okay, as per usual at these meetings, we have a list of PRs that are waiting on reviews; Mark already added a handful of them here before the meeting.
C: A couple; it looks like Amir added one, maybe. But it is exactly what it says on the box: PRs that are open and in need of reviews. So, as always, please review PRs.
G
It's
like
they
completely
refactored
the
package
and
I
had
to
like
implement
a
different
logic
for
a
version
four
along
with
their
tests,
and
it
is
working,
but
there
are
few
problems
like
the
ci
can
only
run
a
single
version
of
the
tests.
So
currently
it's
running
version
three.
G
Then,
if
we
need
to
support
two
versions
of
the
instrumented
package,
then
we
have
to
choose
one
and
we
can't
use
both,
and
it's
really
complicates
development
because
we
have
to
change
the
packet
json
every
time
we
want
to
to
test
one
version
or
another.
So
I
don't
know
if
I
have
a
good
solution
for
it.
I
just
wanted
to
bring
it
up
and
ask
people
here
if
they
have
any
good
idea
of
how
to
improve
this
workflow,
because
it's
very
annoying
currently
to
support
multiple
versions.
A: What I've done, basically, and what the kind-of-unwritten policy is, is that we continuously run tests on the latest version, but then have older versions run in the test-all-versions setup, which can do the changing of the package.json for you, right?
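The test-all-versions setup A describes is driven by a `.tav.yml` file: the tool installs each published version of the instrumented package that matches the range, in turn, and runs the tests against it. A minimal sketch (the package, range, and command are illustrative):

```yaml
# .tav.yml (sketch): run the test suite against every matching
# published version of the instrumented package
redis:
  versions: "^3.0.0 || ^4.0.0"
  commands: npm test
```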
A
It
does
create
issues
with
types
which
I
don't
have
any
solution
for.
Basically,
you
have
to
either
like
through
some
type
unions
or
like
trickery
there
or
just
use
like
very
loose
enemies
or
stuff
like
that
which
I've
also
done
on
some
packages.
A
It's
not
awesome,
but
I
mean
with
packages
that
have
like
three
sets
of
conflicting
types
which
like
to
conflict
from
the
from
the
package
point
of
view,
but
are
hardly
any
different.
Then
it's
basically
way
to
go
in
my
opinion,
because
you
know
it
would
otherwise
be
like
a
nightmare
to
to
maintain.
A: Yeah, especially when the package has been rewritten like this. In many cases they are mostly the same, so the instrumentation code will be exactly the same except for some cases; some attributes have to be looked up from another location, but basically they are the same. So I haven't had such a situation on my plate.
G
Yeah,
so
we
have
to
to
apply
common
sense
on
this,
but
at
least
for
radius,
like
none
of
the
code,
is
reusable.
It's
like
all
the
patches
are
had
to
be
re-implemented
and
the
test
had
to
be
re-implemented
and
it's
like
a
completely
different
package,
so
it
might
be
worth
in
this
scenarios
to
just
create
new
instrumentation,
but
I'm
sure
it
has
a
lot
of
disadvantages
as
well.
G
C: The DefinitelyTyped project has this problem for sure, and what they do is: if you install @types/mysql or whatever, the major version of the types package is the same as the major version of the package that it is targeting.
C
But
then
the
minor
version
is
just
being
permitted
as
they
update
it,
and
that
might
be
one
way
to
go
for
us.
So
this
would
be
open.
Telemetry,
instrumentation,
redis
version,
4.
G
A: You can technically install two versions of the same package by renaming them in npm. It's not a commonly used feature, but it does exist, and I think we use it in the winston instrumentation; check the package.json out and you will see an example of that.
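The renaming A mentions uses npm's alias syntax, which lets two majors of one package coexist under different names. A sketch (the alias name and version ranges are illustrative, not the actual winston setup):

```json
{
  "devDependencies": {
    "redis": "^3.1.2",
    "redis-v4": "npm:redis@^4.0.0"
  }
}
```

In tests, `require('redis')` then resolves version 3 while `require('redis-v4')` resolves version 4, so both majors can be exercised in a single test run without editing package.json between runs.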
C: I know I brought this up last week, but there's a link here to the OpenTelemetry Community Day, for those that aren't aware of it. It'll be June 20th in Austin, Texas. I will be there, so if you want to come say hi to me, that would be awesome; it would be great to see people there. The more people we get from the project to go...
C
I
think
the
more
successful
the
event
will
be
so
just
encouraging
participation
for
those
that
can
and
for
those
that
are
interested,
there's
also
a
link
here
to
a
google
form
for
lightning
talks
and
workshops.
If
anybody
feels
like
talking
at
conferences,
it's
not
really
a
conference,
the
way
that
you
normally
think
of
a
conference.
So
it's
a
a
little
bit
less
formal
if
you've
been
maybe
thinking
about
getting
into
speaking
but
you're
not
sure
it
may
be
a
good
sort
of
half
step
for
those
people.
H
Does
anybody
know
if
there's
any
community
events
planned
around
monterey
in
portland
at
the
end
of
june.
C
H: Monitorama; I think it's monitorama.com.
C: 245 issues, okay. "Drop support for older Node.js versions": we already talked about that in this meeting; some references were added there. These two we just created... review. So I created this issue this morning; for those that aren't aware, the contrib...
C
It
technically
would
still
work
for
javascript
users
like
there
was
no
backwards.
Compat
there
was,
there
was
no
break
of
the
backwards
compatibility,
but
because
the
typescript
type
check
checks,
private
properties,
when
you
have
concrete
classes.
C: ...the compilation broke. So the solution to that is to use interfaces only. I fixed this particular instance, so there's a PR for that, but in general we need to review our SDK exports to ensure that all of the interfaces that we're exporting only depend on interfaces and not on concrete classes. It's kind of a nuanced issue; does anybody have questions about it, or feel like they don't understand what's going on?
C: ...exactly what happened. Let's see, it was in a thread, I don't remember which.
C: Yeah, so we have the sdk-trace-base package pinned here at 1.0.1, but sdk-trace-web is a caret dependency, so it gets the latest one, and the latest one depends on a later version of trace-base. According to npm, that's all fine; they just both get installed, deeper in the tree.
C: So then you end up with two different versions of sdk-trace-base, which should work, it should be okay; but because the type checker checks private properties for classes, it fails.
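The situation being described can be sketched as a dependency fragment (the pinned version is from the discussion; the caret-resolved version depends on registry state at install time):

```json
{
  "dependencies": {
    "@opentelemetry/sdk-trace-base": "1.0.1",
    "@opentelemetry/sdk-trace-web": "^1.0.1"
  }
}
```

The caret resolves sdk-trace-web to its latest 1.x, which itself depends on a newer sdk-trace-base, so two copies of the trace-base classes end up in node_modules, and TypeScript treats them as distinct types.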
C: I created this issue just before the meeting; I didn't assign it to anyone. Is this something that someone feels like tackling? It's kind of a nuanced issue.
C: The answer is sort of yes, it is kind of breaking. There is a workaround: you can, like I said, use skipLibCheck, which would prevent that problem. But it's already broken; right now it doesn't compile, so yeah, that's fair. I think, instead, we could revert the change to the Span class that we made, to make the private properties match, or we could just update the interface.
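The workaround named here is a standard TypeScript compiler option; it stops tsc from type-checking declaration files, including the conflicting ones under node_modules. A sketch:

```json
{
  "compilerOptions": {
    "skipLibCheck": true
  }
}
```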
C: So I think, yes, it is potentially breaking; but as someone mentioned, I think it may have been Amir, the onStart interface of the span processor is a fairly simple one, and I think not used in too many places. So it's relatively low risk, but yeah, the strict answer to your question is yes, this could be considered breaking.
G: But what will make it break? It's the same; the interface exposes all the public properties from the concrete class. So unless someone...
C: It would break the other way: if you have the interface that expects the concrete span and you pass in... oh yeah, no, it would actually have been broken anyway, because the Span class wouldn't work. So yeah, I think it's not breaking. This is a case where it's backwards compatible but not forwards compatible, and that's okay, I think. I don't think it would break anybody's compiled code; TypeScript would complain if, say, I implemented a span processor in my own code and I was expecting the concrete class, and then I upgrade the package: now I'm implementing a more specific interface than the span processor interface, so I just have to change my code.
C: The concrete Span implements the interface, but then also has private properties, and TypeScript does check the private properties, so it would say the interface does not satisfy the concrete requirement.
I: So they're violating, like, the least specific type that the interface should accept.
C
Yeah,
so
if
I,
if
I
had
a
previous
onstart
implementation,
that
expected
a
concrete
span,
I
may
have,
you
know,
used
a
private
property
of
it,
which
typescript
actually
allows.
If
you
use
the
brackets
syntax,
then
when
I
update
it,
the
guarantee
now
is
that
I
will
only
receive
an
object
which
implements
the
interface
it
may
have
different
private
properties.
C: ...that the interface is satisfied, not that all of the private properties match too. As far as an end user goes, they would need to update their onStart method to point to the new writable span interface; you've just changed the name of it. But if they're not using any private properties of the span, that's all that's required.
C: Okay, yeah, that's fine; we could probably mark it as never-stale then, so it doesn't end up closed.
G: The issue, yeah: it's only used for examples and it was forgotten during cleanup, as far as I could understand, so yeah, it needs to be removed. It's an easy one.
C
Update
proto
versions,
this
one's
already
being
handled
by
mark
azure
functions
on
kubernetes
console
span.
Exporter
is
logging
trace,
object
on
multiple
line?
Oh
this
one
and
the
next
one,
I
believe
are
are
fine
to
be
left
as
stale.
They
were
opened
a
long
time
ago
and
the
user
never
responded
to
us.
C: The otel.status_description attribute here: I don't think this is in the semantic conventions, is it? Does this come from the status property on the span?
C: Then I guess it's not stale. I will...
C: Okay, and we're out of time; I have a hard stop today too. So thank you, everybody, for your time.