From YouTube: 2021-05-04 meeting
E: That's interesting. I've actually had my mic do that to someone before, using Slack's call functionality.
I had that happen as well. It just captures a segment of audio and starts looping.
F: It looks like there are so many questions. My eyes are not failing me. "Efficiency concerns for Redis."
H: Yeah, sorry, I did. I added this burning question here about efficiency concerns and instrumentation. We have the Redis instrumentation as an example that does quite a bit of work: allocation, string manipulation, upcasing stuff. We're adding, or we have a PR that's adding, some obfuscation there as well.
H: So it's just getting a little more complicated, and we're doing a bunch of work that may or may not be necessary. I think it's worth having some documentation somewhere for people writing instrumentation that really emphasizes efficiency concerns when you're writing or reviewing instrumentation PRs, and then we should also take a pass over our instrumentation at some point and just think about efficiency.
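The kind of check being described here can be sketched with nothing but the Ruby standard library: count how many objects a piece of instrumentation-style code allocates. The `up_case_keys` helper below is purely illustrative, not real otel-ruby code.

```ruby
# Count objects allocated while running a block, using GC.stat.
def allocations
  GC.disable # avoid a GC run skewing the count
  before = GC.stat(:total_allocated_objects)
  yield
  GC.stat(:total_allocated_objects) - before
ensure
  GC.enable
end

# Illustrative "instrumentation" work: upcasing command names, loosely in the
# spirit of the Redis instrumentation's statement formatting.
def up_case_keys(commands)
  commands.map { |cmd| cmd.to_s.upcase } # allocates a String per command
end

cost = allocations { up_case_keys(%i[get set incr]) }
puts "allocated #{cost} objects"
```

A reviewer can run something like this on a hot path before and after a PR to see whether the allocation count moved.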
C: I think that's actually a really good thing to talk about. It's not directly related to the open source work, but Ariel and I were looking at a problem internally with instrumentation causing a large increase in object allocation, actually leading to a garbage collection run and horrible things. So I think it's just a really good thing to consider. In general, I think it's really easy to accidentally write something that's not performant in Ruby, maybe even more so than in other languages.
C: Is there any way to leverage RuboCop to help us here? I know the frozen string literals rule is something that it enforces, and that helps a lot. Is there anything we might be able to do there? I don't really know; I actually don't use RuboCop that much myself.
H: Yeah, I'm not sure that it really does much in the way of static analysis beyond code styling issues, so it's probably not going to be super helpful. Perhaps if there are common patterns of code that are particularly inefficient it could offer suggestions, but I don't know if those sorts of rules exist.
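For what it's worth, rules of roughly this sort do ship as a RuboCop extension gem, rubocop-performance, which adds a `Performance` department of cops. A sketch of enabling it, assuming a project already carries a `.rubocop.yml`:

```yaml
# .rubocop.yml -- sketch; cop names are from the rubocop-performance gem
require:
  - rubocop-performance

Performance:
  Enabled: true

# Examples of cops that flag allocation-heavy patterns:
Performance/StringReplacement:
  Enabled: true
Performance/RedundantBlockCall:
  Enabled: true
```

These are lint-level heuristics, not a profiler, so they complement rather than replace the benchmarking discussed later in the meeting.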
E: I haven't actually made another attempt at isolating where it's coming from, but we know that with the latest release, which I believe was 3.15.8 or something like that, we were still seeing it causing OOM kills in our production app. So we've actually pinned protobuf to 3.14, I think, just because it was causing a cascading effect: anyone updating our tracing instrumentation was getting kills, and that was after the first four or so releases of 3.15.
E: We just pinned it, because every release wasn't fixing it. It's something that's on my to-do list to go back to, but I do believe there's still an issue as late as 3.15.8. I haven't narrowed it down yet, so I think it's unfair to poke at it without any evidence, I don't know, but we are still seeing problems.
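The pin being described is a one-line Gemfile constraint; the exact operator and version here are illustrative of the approach, not a quote of the team's actual Gemfile:

```ruby
# Gemfile -- hold google-protobuf on the 3.14 series until the 3.15 OOM
# behavior is resolved; "~> 3.14.0" allows 3.14.x patch releases only.
gem 'google-protobuf', '~> 3.14.0'
```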
I: Okay, yeah, there hasn't been another one since .8. I asked because, from my work on Google client libraries, we also found a string of issues in the 3.15 release. That's us! So we know that there are various issues that they're still trying to clean up, and whether there's anything else, yeah. I have kind of an inside track on the engineers that are working on it, so if there's anything particularly pressing that we know about, then I can poke at it.
H: Cool, thanks. Yeah, it was aggravating for the team that was trying to experiment with Ruby 3.0, because that support was added in 3.15. So they kept poking us and saying: can you just let people upgrade? And it's like, well, sorry, if they're getting OOM killed they're not going to be happy to do that.
F: In terms of instrumentation concerns, Redis is the one I always throw out. It's kind of the poster child for how instrumentation can affect the execution of your program, just because Redis is generally super fast. In tests that I've done, even the lightest instrumentation, creating a simple span around each call, adds a noticeable amount of time.
F: I've always kind of had this dream. Well, there are multiple dreams; I will just throw them out there. Two dreams. The first dream is this.
F: This is Ruby Bench. I don't know how familiar people are with it, but one thing that would be super awesome: basically, this runs on every commit or every release of Rails or Ruby or whatever project is included, and it will give you a graph of iterations per second for the code that's getting run, and the number of objects allocated. As an instrumentation writer, or, well, I can't say that anymore, but I did this for a while, the one thing that I always wanted was to be able to run a script like this with instrumentation and without, and get the same kind of data out of it.
F: I thought that would be great for trying to understand the impact of the things that we're writing, and it would just help us be more effective as we try to yak-shave down the issues. So that is a dream I've talked about for a long time but have never executed on, and I guess that's what the dream looks like.
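The with-and-without comparison being wished for can be approximated today with the stdlib `Benchmark` module. The "span" below is a deliberately fake stand-in (a Struct with two timestamps), not the real OpenTelemetry API; it exists only to show the shape of the measurement.

```ruby
require "benchmark"

# Stand-in for a fast operation (think: a Redis GET).
def operation
  [1, 2, 3].sum
end

# Fake, minimal "span": allocate an object and record two timestamps.
TimedSpan = Struct.new(:name, :started_at, :ended_at)

def instrumented_operation
  span = TimedSpan.new("operation", Process.clock_gettime(Process::CLOCK_MONOTONIC))
  result = operation
  span.ended_at = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  result
end

n = 100_000
Benchmark.bm(14) do |bm|
  bm.report("bare") { n.times { operation } }
  bm.report("instrumented") { n.times { instrumented_operation } }
end
```

Run against a genuinely fast operation, even this trivial wrapper shows measurable overhead, which is exactly the Redis point made above.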
F: Perhaps random open source projects could ask really nicely to be on Ruby Bench, and if OpenTelemetry Ruby does become something that the community is interested in and looks to, maybe we can be there, and just make sure, for the simple case, that we're not introducing regressions into the SDK.
F: I have no clue if that is actually possible, or how to make that happen, but I figured I would throw these out here, since I brought it up.
C: I really like the idea. I don't really know where to go with it, but I think it's a cool idea. Actually benchmarking our performance and seeing what changes over time would be very cool.
F: Yeah, I feel like it's one of those things that would be super useful. It's non-trivial.
H: Sorry, I was just gonna say, on that point: in my prior experience developing JVMs at IBM, we ended up with a team of interns building something similar to this, called BenchMe, and it's the kind of project that appeals to interns. If we have another Google Summer of Code cohort or something assigned to OpenTelemetry, then maybe we can ask for some folks, or propose this as a project.
F: Yeah, that sounds good. I guess for now we can keep it in the back of our minds; I suppose we'll make it someday, perhaps.
C: Yeah, does the CNCF have any hardware? I mean, do they have any machines they own somewhere? Because this is also the kind of thing where you could actually use something like a GitHub Actions runner on your own hardware, and integrating this into the CI builds of different OpenTelemetry projects would be an interesting thing to think about.
C: I don't know if they have any resources. In my mind, actually getting consistent, repeatable results from bare metal hardware is the difficult part of it. I mean, there are difficult parts all around, but that's an obstacle I don't know how to necessarily overcome.
H: Yeah, at least for tracking allocations, that's easy enough; you don't need to run on bare metal for that, right?
F: All right, so yeah, I think I'm just going to keep this in the back of my mind, and we should all do the same. I know that at one point there was some talk about performance testing in OpenTelemetry. I don't know where that has exactly landed; I'm not sure if it's on the back burner, but if that stuff comes up again, maybe that would be the time to talk about something like this, and see what infrastructure would at least be there.
F: Yes, two things. Two and a half things. The first thing is just that there are some semantic conventions things to take...
F: You can set the status as error, but I think the reason why only the user can set OK is that sometimes instrumentation will identify something as being an error that the user doesn't really think is an error, or doesn't want to be an error when it comes to PagerDuty and other things. So having this kind of override to be able to say, "no, no, really, this is okay": that's kind of the idea there.
F: So yeah, this was talked about in the spec SIG a couple weeks ago: there are sometimes multiple points where this could be set, and being able to pass in a status at span end would be useful. I forget exactly where this was coming up.
F: And yeah, Nikita did this for Java, and he reported back that it did work for their use case, but he had two bullet points of issues, I guess, as a result of that: having this overloaded span end where you pass in a status could be slightly confusing.
F: So I guess he was suggesting that possibly another approach could be similar to span set_status: freeze the status if it was ever set to OK at any point in time.
F: One of the complications with this is that there's a desire to have a streaming API, a streaming tracer, that would have the same OpenTelemetry API, but the state would not really be maintained in the process; events would be streamed off in response to it. And I think the discussion was: how complicated does that make the streaming tracer? I don't know how seriously people take the streaming tracer, though.
H: Yeah, I think it's been prototyped in the C++ OpenTelemetry.
F: And I think, yeah, having a C++ implementation would allow dynamic languages to build an implementation on top of it, and it could have some performance benefits.
H: The last time you mentioned this, one of the points that you or somebody brought up was that in our auto instrumentation there's often not a point where you can actually hook in to set the status to OK, which is effectively equivalent to saying, you know, please ignore errors before the span ends.
H: The other consideration is that often this sort of thing is rules-driven. It's: for this application, these error codes are not really errors. That's the sort of thing that you could handle in the collector, if you were taking that approach; but if you're not using the collector, then you could do it with a span processor, and that's kind of what span processors were intended for.
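A rules-driven override of this shape can be sketched in plain Ruby. The classes below are stand-ins, not the real OpenTelemetry gems (whose span processor interface and span mutability differ in detail); the point is only to show "rules table in, status rewritten out", including the breadcrumb attribute suggested later in the discussion.

```ruby
# Stand-in span: name, attributes hash, and a :ok / :error / :unset status.
FakeSpan = Struct.new(:name, :attributes, :status, keyword_init: true)

class StatusOverrideProcessor
  # rules: attribute name => values that should NOT count as errors,
  # e.g. { "http.status_code" => [404] }
  def initialize(rules)
    @rules = rules
  end

  def on_finish(span)
    @rules.each do |attribute, allowed|
      next unless span.status == :error && allowed.include?(span.attributes[attribute])
      span.status = :ok
      # Leave a breadcrumb so overridden spans can be found later.
      span.attributes["override.source"] = "processor"
    end
  end
end

processor = StatusOverrideProcessor.new("http.status_code" => [404])
span = FakeSpan.new(name: "HTTP GET", attributes: { "http.status_code" => 404 }, status: :error)
processor.on_finish(span) # 404 is downgraded to :ok; a 500 would stay :error
```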
F: I was actually a pretty big advocate for this approach back when errors were at the forefront. It doesn't seem like you should have to complicate your code base to identify this stuff. I think my opinion was that the instrumentation should send up everything that's exception- or error-like and should always annotate it as such, and that your back end or collector could have some overrides and some rules to handle that. I don't know, it was not super popular, which is why we ended up with the system that we have.
D: I also like the process being as simple as possible with the generation of a span: shove it into a pipeline, and make the pipeline a little bit smarter about, say, spans that meet these criteria are interpreted this way for errors. So that, going back to the performance thing, I can choose to fire off these spans to a processor or a collector running on different compute, so that my span generation is fast and the rules engine runs somewhere else.
F: Yeah, I think there are some pros and cons to the approach, but in general I'm in favor of it. I think the biggest con is just that everything receiving the spans needs to worry about this, instead of having the spans just come in already in the way that you want them.
F: But I think this is continued fallout from that, in some ways. There are complications, and I would argue that this is complicated stuff to use, to learn how to use: to just know that there is this mysterious OK value that you can set for a span, but only the user can do it, and if the user does this, then you can be looking at a line in the source code that's setting the status as error, but it's coming through as OK.
D: I don't know, this does sort of feel like the developer experience of knowing what your error state is in the midst of your business logic and making the decision then, and there's a certain amount of sense to that.
C: I think that's the kind of thing that would crop up in large code bases, where someone somewhere else, miles deep in a few other files, has set the status manually this way, and you're working on a part of the code that's included in this trace, and you don't understand why it's not generating an error even though you think it should. It's the type of thing that, if you have a small enough code base, will be easy enough to grasp and get your head around.
C: But if you're working in a very large code base, you could very easily lose the thread of it. Which is also... I'm sorry. Oh no, I was just gonna say: generally, I'm not a huge fan of the proposal as it's written. But that was it; go ahead.
H: Yeah, to me the problem is really that if you're using an HTTP client library that's behind a few other things (Koala comes to mind, if you're talking to the Facebook API), then by default that instrumentation is going to mark spans as error if certain conditions apply; certain response codes are turned into status equals error.
H: It makes a lot more sense to me to just, you know, turn on your instrumentation, deploy it, and then start looking at traces. You're like: oh, that's not an error, I don't care about that. So being able to just go to a rules table somewhere that says, if I see this response code, it's not an error: that's the easiest thing from my perspective, as opposed to actually having to figure out how to monkey patch things to get in the right spot.
D: Yeah, I think I agree with that too. Also, since the OpenTelemetry spec is like a standards body: how much should a standards body put into their standard to allow people to not follow the standard? If your HTTP client gets a 500, it's an error. And if you want to deviate from those standards, it involves some shenanigans, so make a span processor or a collector that annotates a span, saying status OK with another attribute added, you know, "processor overridden", so that you can find this stuff later.
F: So, for those of you who were around almost a year ago: for a month or two, or maybe even three, I was rambling on a weekly basis about this notion of having an error hint, and that was kind of the proposal that preceded this one. Alongside the set_status API there was actually a set_exception API, and it would set an error hint equal to true on the span. So if it was an error status, you'd get an error hint on the span, and that was basically meant to be: this is probably an error, unless some rule somewhere along the line says that it's not. It was named as such just so it wasn't so definitive that you'd think, oh, this is definitely an error.
F
You're
like
okay,
this
is
probably
an
error,
but
when
we
talked
about
that,
that
was
that
was
one
suggestion
to
be
not
that
popular
another
alternative
suggestion
that
was
also
not
popular
was
to
have
a
configurable
error
hitler.
That
kind
of
would
give
you
an
actual
handle
on
an
exception
and
the
component
it
was
coming
from
and
you
could
decide
to
do
something
with
it.
F
There
I
feel,
like
both
of
those
are
a
little
bit
more
flexible
and
in
light
of
this
kind
of
getting
a
handle
on
the
current
span,
so
you
can
set
the
status
kind
of
concept.
I
think
both
of
those
have
some
some
merits
in
that
world.
What
is
this
one
needing
to
actually
get
the
handle
on
a
span
can
can
be
challenging,
as,
as
we've
been.
F: Good question, to be honest. I remember the discussion from three hours ago, but the stuff from 19 days ago is not fresh in my mind. So if you've just reread this, then I'm going to go with your opinion.
H: I can table that question, maybe for next week. The origin of this was that the original proposal was to have a status source that indicated whether user code or application code had made the decision that this was actually an error or not, versus auto instrumentation. Because the argument is that the auto instrumentation is going to be written according to the spec, and the spec says you should interpret status codes in this way, but really user applications are the things that have the final say here. So they should be able to say: no, no, no, that's not actually an error. Or: you know that 200 you got from GraphQL, that actually has the error somewhere inside that JSON response? That is actually an error, and you should mark it as such.
F
So
I
don't
know
I
feel
like
we
should
probably
wrap
up
our
conversation
on
this
sooner
than
later,
but
I
do
agree
that
a
rules-based
system
would
probably
be
a
lot
easier
to
use.
I
don't
know
if
the
ship
has
sailed.
F
I
don't
know
if
it's
too
late
to
make
a
comment
to
that
effect,
but
if
anybody
has
opinions
on
this
and
wants
to
make
them
known
like
this,
this
issue
is
here,
and
I
think
like
if
you
are
like
a
strong
advocate
for
a
rules-based
system,
even
though
we
kind
of
discussed
this
and
it
wasn't
super
popular
before.
F: Yeah, the follow-up was: we talked about how JS had a TC review of their implementation, and they found that they had this tracing concept.
F: Someone was proposing this; it just never actually made it into the spec, and it's proven to be largely controversial, as can be seen by the long thread. And yeah, the TL;DR was, I feel, that nobody can agree on this, at least not enough to reach a resolution for tracing 1.0.
F
So
I
think
they
are
encouraging
the
js
sake
to
move
that
functionality
into
like
an
extensions
package
just
so
that
it'll
be
used,
but
it's
not
officially
part
of
the
api.
So
if
things
are
changed,
it
won't
be
a
problem,
and
if
this
does
become
part
of
the
api,
there
will
be
an
easy.
E: I had something from last week that I totally forgot to bring up, and I forgot to bring it up at the beginning of the meeting today. If we get a chance to talk about our changelogs: I had started a discussion on it, and I was hoping to steal a little bit of time today to talk about it, if now is an okay time to interject.
E: A lot of the observations and commentary came from the last release, specifically around how we're setting changelog notes on various gems, because of how we release in lockstep.
E: So one of the examples that came up was when we changed the propagators. The Faraday tests were updated, and only the tests for Faraday; none of the actual non-test code was modified. However, in the change notes, or the changelogs, for the Faraday instrumentation, you'll see that it's marked as having a breaking change, while it didn't. So it leaves things a little bit muddy for consumers.
E: I think we've done pretty well so far, but as we're hitting the 1.0 release, I think we should look at improving things a bit. My suggestion, or the idea that I think would make sense, is much more manual, so I'm definitely open to pushback in that regard, but I would like to see a simpler approach.
E: In the sense that, say I put up a pull request to modify something in the SDK (because we haven't done a 1.0 release yet), and it is a breaking change, but it requires modifying, for example, something like a test in another gem. Instead of having all the changelogs and notes generated when we go to do the release, the onus would be on me to have these kind of unreleased changelog notes.
E
Log
notes
where
it
would
say
you
know,
for
the
sdk
I
changed
xyz,
it's
potentially
a
breaking
change,
linked
to
the
pr
that
introduced
this
change
and
then
for
the
changelog
and
the
instrumentation
gym
that
had
only
its
tests
changed
the
same
thing
except
you'd
say
it's
not
a
breaking
change.
You
just
link
to
the
pr
the
introduce
the
change
and
say
I
don't
know
updated
tests
for
compatibility
with.
Why
or
something
innocuous
like
that,
but
that
requires
a
lot
more
human
responsibility.
E: Rather than looking at, okay, what are all the things that went in there, and trying to build up the notes for all the different gems, did this change, did that change. But yeah, that's kind of the premise of what I think we could benefit from moving to. It is a little bit trickier, because it requires more human policing, and making sure that any time someone makes a change, they're updating things accordingly across whatever gems they modify.
H: One of the alternatives here, of course, is that we move the instrumentation out to a separate repo, so we're not constrained by having to fix things in the instrumentation in the same PR where we made a breaking change in the SDK or the API. Of course, where you've got dependencies, like instrumentation base changing things, that's more challenging; but I think the bigger problem is where we have an SDK change that then impacts a whole bunch of tests in random places but isn't a breaking change for those instrumentation gems.
C: Yeah, moving instrumentation out to another repo definitely could solve the problem, but that's a lot of work. Maybe it's not actually that much work, but it's certainly non-zero work; then again, manually curating our changelogs is also work. I think, for me, the root of the problem is that the conventional commit tooling is just not smart enough. It saves us some time, but it causes problems in other ways.
C: I think I'm in favor of making sure, somehow, that people add notes to an unreleased changelog, like Robert was suggesting. I don't know if that's something we could enforce via a CI job; perhaps some sort of linter that says: you've changed files, but there's no change to any of the changelogs, and that's not okay. Something that enforces that you touch at least one changelog somewhere in the repo.
C
When
you
make
a
change,
and
maybe
that's
a
lighter
touch
way
of
encouraging
slash
enforcing
that
people
write
a
little
bit
of
this
unreleased
change
log
when
they
make
their
changes
without
having
us
to.
Like
always
remember
to
do
it
and
then
then
compiling
the
change
log
is
not
maybe
as
difficult.
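The linter being proposed reduces to a small predicate over the list of changed files. This is a sketch, not an existing tool; in CI the file list would come from something like `git diff --name-only` against the base branch, and the "code file" test would likely be broader than `.rb`.

```ruby
# True when the change set is acceptable: either no Ruby code changed,
# or at least one CHANGELOG file was also touched.
def changelog_touched?(changed_files)
  code_changed = changed_files.any? { |f| f.end_with?(".rb") }
  changelog_changed = changed_files.any? { |f| File.basename(f).start_with?("CHANGELOG") }
  !code_changed || changelog_changed
end

# A CI step would exit non-zero when the predicate fails:
# abort "Code changed but no CHANGELOG touched" unless changelog_touched?(files)
```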
D
I'd
say,
certainly
for
any
bang
commits
that
are
backwards.
Breaking
should
like
at
least
one
changelog
should
have
words
in
it.
Right
about
that.
H
Yeah,
the
other
thing
is
that
if
we
have
change
logs
with
unreleased
sections
in
them,
then
it's
pretty
easy
to
write
tooling.
I
think,
as
part
of
the
release
process,
to
ensure
that
all
those
unreleased
chunks
have
actually
been
turned
into
version
numbers
with.
E: Yeah, just to stress one point I didn't mention: I just see a ton of value in making sure that the changelog includes a link to the PR that introduced the change. Because, like your typical developer, I'm probably only really looking at it if I'm bumping a gem; I might scan through and look for the word "breaking", and otherwise I'm going with it, and then something breaks.
C
Oh,
it
makes
you
feel
better.
That's
that's
the
same
way
that
I
operate
when
I'm
upgrading
gems,
unless
I'm
very
specifically
setting
aside
time
where,
like
I,
am
going
to
go
and
clean
the
house
on
this
application
and
then
I
go
through
and
I
proactively
read
change
logs.
Normally,
I'm
just
bumping
the
gem
version
and
hoping
nothing
breaks.
You
know
as
long
as
it's
I'm
not
bumping
a
major
version
and
then
I
only
read
through
if
I
have
a
problem.
D: It's also very friendly for people who are using Dependabot to bump their stuff, because Dependabot slurps in the changelog and blasts it out in the PR it opens, and if you've already got links in the changelog to the issues, it's super easy to trace. Well, not super easy; easier to find what has changed.
D
Personally,
if
I
put
a
bang
in
a
conventional
commit,
I'm
probably
putting
notes
about
what's
different
and
how
you
would
have
to
adjust
your
behavior
in
the
commit
messages
itself.
But
I'm
one
of
those
people
who
writes
memoirs
in
his
commit
messages,
and
I
don't
that's
generally
not
a
super
friendly
or
in
the
end
effective
requirement
to
levy
upon
open
source
projects,
because
not
all
contributors
want
to
write
memoirs
in
their
commit
messages.
C: "It was a dark and stormy night..." I wanted to say, too: without changing anything today, we could certainly start manually generating, you know, manually reviewing all of the changelogs for the release PR, and I am more than happy to make that a recurring task any time a release goes out.
C
If
someone
else
wanted
to
do
it
too,
I
guess
I
just
want
to
make
sure
that,
like
I'm
offering
the
the
human
power
to
do
it,
you
know
to
if
it's
just
so
that
that's
not
a
concern
about
you
know
changing
our
process
for
the
better.
H
Yeah,
that's
cool.
I
will
go
ahead
and
not
include
any
commit
messages
for
any
of
my
prs
from.
D
A
good
transfer
of
knowledge
right,
that's
what
I
mean
I
mean
I
went
through
the
manual
the
for
the.
I
guess,
the
last
release,
what
17
18.
it
took
some
brain
power
and
a
little
bit
of
time,
but
it
wasn't
awful.
So
I
I
think
that's
probably
a
good
idea
andrew,
like
do
it
manually
a
few
times
of
how
painful
is
it
and
then
figure
out
what
to
automate
once
you've
got
a
manual
process
down.
H
That'd
be
awesome
that
might
be
a
great
segue
given
we've
just
got
five
minutes
left
might
be
a
great
segue
into
the
last
point
here
about
the
1.0
release.
H
Cool,
so
there's
been
a
little
bit
of
discussion
here.
I
had
some
open
questions
at
the
top
which
we
haven't
focused
so
much
as
like
we've
been
more
focused
on
okay.
What
should
the
version
numbering
be,
which
is
great,
but
I
do
want
to
call
out
those
issues
which
are
if
you
want
to
scroll
to
the
top.
H: Those are the two big ones. We've discussed version numbering; I'm actually suggesting, all the way down at the bottom, that hey, maybe we can just go ahead with doing the release candidate for the API and the SDK packages, and everything else can do a 0.18.0 release with a dependency on the release candidate of the API and SDK.
H: That's because the instrumentation isn't really part of the 1.0 API stability guarantees or anything like that, and it gives us time. We want to focus on releasing the API and SDK, and then for the instrumentation we can take the time to go through it and figure out, you know, better ways of doing the instrumentation base class or whatever; there's still time to mess about with that. And then I think, over time, those instrumentation packages should start to diverge in their version numbers.
H
So
if
we
decide
that
you
know
the
my
sequel
to
instrumentation
is
really
solid.
We
can
do
a
1.0
release
of
that
and
say:
okay,
we
think
we're
stable
here.
The
other
thing
to
consider
is
that
there
are
no
stability
guarantees
yet
around
semantic
conventions,
and
we
probably
shouldn't
do
a
1.0
release
of
any
of
the
auto
instrumentation
until
we
understand
until
that
process
is
defined
for
kind
of
nailing
down
some
anti-conventions
and
then
being
able
to
evolve
them.
C: That makes sense; I think I agree with that. The instrumentation isn't at a 1.0, and so it doesn't need to be. Yeah, that makes sense now that I think about it.
H: So, Matt, do you know if there's a formal process for release candidates and then releasing 1.0? It sounds like the JS SIG went through some kind of review process.
F
Yeah,
I
I
asked
about
this
and
the
the
tc
review
is
optional,
so
it's
kind
of
opt-in.
If
we
would
like
to
do
it,
we
can,
it
did
seem
like
it
was
fair,
fairly
thorough,
so
assuming
it
was
fairly
thorough.
F: I don't know how enthusiastic the TC is about reviewing all these things, because I think it takes them some time, but I think it's up to us to decide if we want them to do that. Other than that, if we do not go through a TC review, I think it's kind of up to us when we release an RC and when we declare 1.0.
C: I've been thinking about it off and on over the past week, and honestly, really nothing jumps out. If there's something that could be improved, there's certainly nothing that I think is horrible. I think it's good enough, personally. Yeah, I would say it's good.
E: Yeah, I have no strong opinions around the version naming specifically; whatever people are familiar with enough that they'll understand what it means, and I think that's kind of what was suggested there. In terms of packages that you might want to change, the only one I can think of that probably needs a little TLC is the auto detector stuff, but that's not part of this 1.0, I believe, so I don't think that matters too much.
H
That's
a
good
question
I'll
have
to
double
check
whether
the
resource
detectors
as
part
of
this.
If
it
is
part
of
it,
I
think
the
main
thing
we
need
to
do
is
split
out
the
gcp
auto
detector
from
the
the
resource
detectors
package
itself,
yeah,
that's.
That
would
be
the
change.
F
I
don't
know
there,
there's
there's
one
thought
that
I've
had.
I
should
probably
keep
it
to
myself,
but
but
I'll
throw
it
out
here,
just
because
we're
talking
about
it-
and
that
is
most
of
our
subsystems
are
kind
of
like
linked
up
under
the
the
open
module,
so
you
can
say
just
open
telemetry.propagation
and
that's
kind
of
the
gateway
into
propagation
or
tracer
provider,
etc.
F
One
exception
to
this
is
context
is
at
its
open,
telemetry,
colon
colon
context,
and
I
think
there
are
reasons
why
that
is
a
good
idea,
but
I
think
one
thing
I
just
kind
of
mulling
over
is
whether
there's
any
merits
to
having
that
just
be
open
chemistry,
dot
context
and.
H
Is
that
something
that
you
ever
change
like?
Is
that
something
that
you're,
where
you're
plugging
in
an
implementation
with
the
sdk?
I
don't
think
so
right,
then
the
core
functionality
is
actually
provided
by
the
api.
F
I
think
this
is
true,
and
so
that's
one
reason
why
I
think
it
makes
sense
for
it
to
be
its
own
name.
Space
is
that
it
is
open
to
them.
Free
context
is
meant
to
kind
of
just
be
a
a
in
many
ways:
a
independent
context
system,
but.
H
Right
from
an
ergonomics
perspective,
do
you
feel
like
it's
better
to
have
open
telemetry.context
or
or
are
you
okay
with
the
colon
colon
context,
because
we
could
add
the
adder
reader
that,
just
you
know,
references
the
the
module
ins
but
not
add
the
adder
writer?
If
that
makes
sense.
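The reader-without-writer idea can be illustrated with a toy module; the names here are made up, and this is not the real otel-ruby source. The point is that a module method can make `TopLevel.context` read like the other gateways (`.propagation`, `.tracer_provider`) while offering no way to reassign it.

```ruby
module TopLevel
  # Stand-in for OpenTelemetry::Context.
  module Context
    def self.current
      :some_context
    end
  end

  # Reader only: callers can write TopLevel.context, but there is no
  # TopLevel.context= through which the binding could be swapped.
  def self.context
    Context
  end
end

TopLevel.context.current # same object as TopLevel::Context
```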
C: I think my feeling is generally that the ergonomics of the Context API are not great, but it's not because the Context is not accessible at the top level. The colon-colon Context, to me, is not what makes it difficult to work with. I don't have fully formed thoughts on precisely what I don't like about the API, but to me that name is not necessarily where I would start to look.
H: Okay. Adding a reader later on, if we felt that was useful, should be fine; I think we can improve things in that way later on. But yeah, I'd love to get the feedback on what you feel is less than great about the Context API itself.
C: Yeah, I will continue to compose some thoughts and share those later. I've started to get feelings about it, but they're not done baking yet.
F
All
right,
so
we
are
five
minutes
over.
Our
our
goal
is
to
figure
out
what
we
want,
what
we
want
to
do
before
releasing
a
1.0
or
what
we
want
to
actually,
when
we
want
to
release
our
first
rc
so.
F
I
guess
yeah,
I
guess
I
have
two
questions.
One
is:
do
we
want
to
request
a
tc
review
and
two
do
we
want
to
release
something
or
do
we
want
to
like
make
this
make
this
the
week
that
we
really
really
get
in
all
of
our
feedback
about
that?
We
think
we
might
want
to
improve
before
before
our
saying,
or
is
this
I
think,
get.
H
Think
getting
some
feedback
in
for
anything
that
we
feel
might
block
the
rc.
We
should
do
that
as
soon
as
possible,
so
if
we
can
get
that
done
this
week,
that
would
be
great.
We
do
have
some
stuff
that
has
gone
in
and
we
may
want
to
do
a
0
18
0
release
in
the
interim
at
least
to
get
that
out
and
also
because
the
instrumentation
all
release
was
broken
last
time.
So
I
should
probably
fix
that
yeah.
H
But
I
think
we
should
make
a
call
next
in
in
our
next
meeting
so
next
week
to
go
ahead
with
the
release
candidate
thing
and
in
the
meantime
I
have
half
a
mind
to
kind
of
punish
the
tc
by
making
them
read
some
ruby
code
but
yeah,
I
I
don't
think
it's
strictly
necessary,
but
I
do
know
some
of
those
folks
and
it
would
be
fun
to
force
them
to
do
that.
D: Also, I had a question in chat: what's the TC in "TC review"? Technical Committee. I thought that, but I also thought I wouldn't assume it. Thanks.
F
Yeah
I
was
going
to
say
summarize,
I
think
what
francis
was
saying:
let's
get
in
our
feedback
this
week
we
think
could
change
and
then
talk
about
that
at
our
next
meeting
and
then
I
guess
we
can
all
think
about
this
tc
review
on
whether
we
want
to
request
that
as
well.
H: Well, I mean, if we need to change the API in some way to make it more ergonomic, then we should do that before we're committing to stability for three years. That's kind of an important part of the API, and if you have feedback, we should get that in now. Outside of that, it's, you know...
H
H
You
know
I
get
these
errors
showing
up
in
my
logs.
What
does
this
mean?
Can
we
make
it
more
easy
to
debug,
which
is
a
pr
that
I
put
in
this
morning,
but
that
those
kinds
of
things
we
can
iterate
on
post
1.0.
C
Okay,
I
will,
I
will
get
any
context
related
feedback.
I
have
in
soon,
then
that
that
makes
sense,
cool.
H: Cool. We're 10 minutes over, so we probably want to call it here.