From YouTube: 2022-10-13 meeting
Description
Instrumentation: Messaging
B
That's a good thing: it's about one after. I'll probably wait till three or four after. I know Tyler mentioned that he will not be here, so we'll get started then.
B
All right, it's about three after, and we've probably had about as many people show up as will, so I want to say welcome, everybody, to the OTel Go SIG meeting. I'm stepping in for Tyler; as he said, he's going to be out today.
B
If you haven't already, please put your name in the attendees list, and then we will start the meeting. Anthony, do you want to kick off the batch span processor full-queue metrics and logs item?
C
Sure. So this is an issue that's come up, and I wanted to get more eyes and thoughts on it. The request here is basically to emit a log every time the batch span processor drops an incoming span because its queue is full. Presently we keep a count.
C
I think the expectation was that we would emit that as a metric once we were able to use the metrics API from within the trace SDK, but there's obviously a desire to have more visibility into that sooner than that timeline would allow, so I want to get other people's thoughts on how to address this.
B
Well, it seems like there are no comments so far. I will say that there has been a history to this: I'm the person who put the debug statement in there, and there was debate back and forth a while back about the appropriateness of it. I'm not entirely sure that this solution is the best there is, though I'm sure there is some want for it.
B
We could always have slightly more complicated logic that actually tracks when the count has been updated, and log a higher-level message when that happens. There are a number of different solutions, but I would venture that I would not want to see a log message on every export after the first time.
B
It
happens
if
it
does
a
a
Spanish
dropped
and
if
it's
never
dropped
again
like
I
would
want
to
see
One
log
message,
but
we
currently
don't
have
a
way
to
track
that
that
kind
of
change
right.
B
D
One thing is, I think logging would be interesting if there were something in the span's content that determined whether it was dropped, which is not the case. It feels like logging will just duplicate data, and when we drop spans it probably means that the service is at a higher capacity than usual.
D
So that's going to cause lots of logs, and again, the content of a span itself is not useful for debugging why the data is being dropped, which is why I think a metric is better.
C
Yeah, that was my concern as well: if we're dropping spans because the batch span processor's queue is full, we're already under heavier load than anticipated, and emitting a log every time a span ends is just going to exacerbate that.
C
Well, that's also what we're doing right now; there's a debug log that is emitted, in fact the one that's in the problem statement here. That debug log is emitted on export, so it's not on every span end: when the batch processor attempts to export, it will emit how many spans were in the batch and the current total dropped count, which may or may not have changed.
B
Yeah, I'm thinking more along the lines of a warning, something at a higher level that indicates more spans were dropped this export, so between the last export cycle and this export cycle. I don't know if we could accommodate that kind of logic, but I think that might be an acceptable middle ground for me.
B
Something along those lines is what I would see working, yeah. It makes it a little more complicated, but I think I would be okay with that extra complication to facilitate this. Besides that, I think maybe the answer is to wait for the metrics API, which might be a while.
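The delta-based warning described here could be sketched roughly as follows. This is a hypothetical illustration only, not the actual batch span processor code; the `processor` type and its `onSpanEnd` and `export` names are invented for the sketch.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// processor is a toy stand-in for a batch span processor that counts
// dropped spans and, on each export cycle, warns only when the total
// dropped count changed since the previous export.
type processor struct {
	dropped     atomic.Int64 // incremented when the queue is full
	lastDropped int64        // total observed at the previous export
}

// onSpanEnd simulates enqueueing a finished span; full reports that the
// queue was full, so the span is counted as dropped rather than logged.
func (p *processor) onSpanEnd(full bool) {
	if full {
		p.dropped.Add(1)
	}
}

// export returns a warning message only if spans were dropped since the
// previous export cycle; otherwise it returns the empty string.
func (p *processor) export() string {
	total := p.dropped.Load()
	delta := total - p.lastDropped
	p.lastDropped = total
	if delta > 0 {
		return fmt.Sprintf("warn: dropped %d spans since last export (total %d)", delta, total)
	}
	return ""
}

func main() {
	p := &processor{}
	p.onSpanEnd(true) // queue full: one span dropped
	fmt.Println(p.export())
	p.onSpanEnd(false) // no drops this cycle, so the next export is quiet
	fmt.Println(p.export() == "")
}
```

The point of the sketch is the `delta` comparison: the per-export debug log that already exists emits the running total unconditionally, whereas this only speaks up when the total actually moved.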
B
It emits a structured log at the debug level; that's what it does.
D
Yeah, I meant something like the error handler, but for things that would not be errors.
B
There
could
be
something
to
that:
I
think
that's
I
I
think
that
probably
is
Way
Beyond
the
scope
of
solving
just
this
problem,
though,
like
I,
think
that
would
also
involve
us
coming
together
and
agreeing
to
what
a
warning
should
look
like,
and
how
do
we
structure
it
so
that
people
can
get
stuff
out
of
it,
and
that
is
that
it's
a
big
ball
of
worms
but
I
I.
If
somebody
wants
to
try
and
propose
something,
I
would
I'd
be
I'd,
be
down
to
see
that
could
be
cool.
C
Okay, well, I don't think we need to run this down in this meeting. Aaron, if you could comment on the issue with your proposal, and if anybody else has any other ideas they think should be considered, let's move it to the issue and get feedback from the original reporter.
B
Sure, I'll put a comment in there about what is acceptable for me, and then we can maybe move forward from there.
A
I guess one question I have is: what is the action that the author would take when they see this message? Is it (a) increase the batch span size in the configuration, or (b) increase the CPU allocated to the program? What are the sorts of things that people would do in response to this?
B
It depends whether it's a temporary blip or a consistent problem of not having enough bandwidth, essentially, to export. If there's not enough bandwidth, the solution is either to get more bandwidth or to export fewer spans, like sampling more, yeah.
A
Yeah, okay, that was just a general query about what people would do once they saw this message, but anyway, we can go on to the next thing.
B
Okay, next thing on the agenda: I just wanted to put out a notice. We had a release of v32.2, specifically in the otlp grpc package, that relied on some unreleased changes in version 1.11.
B
I just wanted to make sure people are aware of it if they updated and saw that. So, the last thing on the agenda currently is the triage session. I had one last week on Friday; I was the only person to show up to it, so I groomed the backlog myself.
B
That being said, if you can't make it and there's something that you want readdressed, let me know; put it on an agenda either here or I'll create another one.
B
I hope people will show up and care about the backlog, but I know it's a reach, even for myself.
B
We might be almost done two weeks from now, so.
B
All
right,
so
we've
reached
the
end
of
the
agenda.
This
has
been
a
pretty
short
meeting.
Does
there
anything
that
anybody
wants
to
put
on
the
agenda
before
or
before
we
start
closing
out.
B
I'm
I'm
more
than
happy
to
talk
about
it,
but
do
you
mind
putting
on
putting
a
link
to
it?
I
don't
want
to
quite
go
diving
for
it
right
now,.
C
There
you
go
Katie
Hockman
opened
an
issue
saying
basically
hey
you've
got
a
bunch
of
interfaces.
That
say,
you
might
add
methods
to
them.
That's
a
breaking
change
was
no
we've.
We've
got
that
language
in
there
because
it
was
our
interpretation
of
the
open,
Telemetry
stability
guidelines
that
we
will.
The
the
open,
Telemetry
spec
will
reserve
the
right
to
add
methods
to
apis
at
any
time,
and
rather
than
forcing
a
major
version
upgrade
on
these,
we
took
the
decision
that
sdks
need
to
be
prepared
to
stay
up
to
date
with
the
API.
C
That's
forced
on
us
by
the
fact
that
we
live
in
the
same
repo
and
RCI
will
break
if
we
don't,
but
for
others
who
are
trying
to
build
a
third-party
SDK,
they
will
need
to
keep
on
top
of
this,
otherwise
their
users
may
run
into
problems.
C
I'm still not sure where I fall on this, but it would be good if others who have opinions could weigh in, so that we can try to come to a consensus and figure out what our posture should be. It's not too late for us to remove those comments and say, you know, we aren't going to make these sorts of changes in the future; we haven't done so yet.
C
So it's still a decision we can change course on, but we should get everybody's opinion before we do so.
B
I totally agree. That's been something I've had on my backlog to comment on specifically. Yeah, it's a thorny issue of backwards compatibility while keeping upstream compatibility.
D
Something I read from the specification is that new methods may be added in minor releases, which doesn't mean a new method has to be added only in a minor release; obviously it could go in a major one. So the way I read it, the specification allows us to decide when we trigger a major.
B
I
would
much
rather
prefer
not
to
make
a
major
release
if
the
time
comes
to
it,
just
because
that
has
that
can
and
usually
will
have
a
lot
of
return
for
people,
even
if
it
is
just
going
and
updating
all
of
the
mark.
The
places
to
V2.
B
B
C
I
but
I
am
but
I'm
I'm
also
worried
that
we
might
get
back
into
a
corner
if
we
do
so,
the
the
latitude
that
was
retained
Upstream
to
do
this
was
felt
to
me,
like
a
I.
Don't
want
to
stop
making
changes.
Requests
like
I
I
want
to
be
able
to
make
changes
at
any
time.
I
want
so,
but
I
don't
want
to
call
it
a
major
version,
because
there's
so
many
headaches
there,
we've
never
had
to
do
this,
but
I
I,
don't
know
what
might
Force
us
to
and
I.
B
I'm thinking of something like how readers work: if you're a reader and you implement an additional method, the reader interface will silently, in the background, check whether you implement that method and prefer to use it over the other. So.
B
There
are
possibilities
that
we
could
accommodate
Upstream
changes
in
a
non-breaking
way,
but
there
there
may
be
some
need
to
make
those
breaking
changes,
and
that
would
mean
like
that
would
mean
we
either
like
we
would
have
to
follow
along
with
whatever
we
dictated
now,
instead
of
making
the
decision
at
that
point,.
C
Yeah
so
I,
certainly
we've
got
the
option
of
creating
additional
interfaces
and
checking
to
see
if
those
interfaces
are
implemented,
but
I
think
that
puts
a
lot
of
bonus
on
the
user
right
so
like
it.
If
an
end
user
who's
receiving
a
tracer
provider,
let's
say
if
they
want
to
make
use
of
what
a
new
method
that's
added
to
the
Tracer
provider
interface,
then
every
time
they
want
to
do
that
they
need
to
go
check.
Does
it?
C
Does
it
also
implement
this
Tracer
provider
enhanced
interface,
whereas
they
should
be
able
to
just
assume
that
the
provider
implements
that.
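The extension-interface pattern being weighed here, where the caller must type-assert to discover an optional method, can be sketched like this. This is a simplified stand-in, not the actual OpenTelemetry API: the `TracerProvider` shape, the `enhancedProvider` interface, and the `Shutdown` method are all invented for illustration.

```go
package main

import "fmt"

// TracerProvider is a stand-in for a stable API interface that we do
// not want to grow, because adding a method breaks every implementer.
type TracerProvider interface {
	Tracer(name string) string
}

// enhancedProvider is a hypothetical extension interface; callers must
// probe for it instead of relying on TracerProvider directly.
type enhancedProvider interface {
	TracerProvider
	Shutdown() error
}

// provider implements both interfaces; basicProvider only the stable one.
type provider struct{}

func (provider) Tracer(name string) string { return "tracer:" + name }
func (provider) Shutdown() error           { return nil }

type basicProvider struct{}

func (basicProvider) Tracer(name string) string { return "tracer:" + name }

// shutdownIfSupported shows the burden on the user: every call site has
// to type-assert to find out whether the optional method exists.
func shutdownIfSupported(tp TracerProvider) bool {
	if ep, ok := tp.(enhancedProvider); ok {
		_ = ep.Shutdown()
		return true
	}
	return false
}

func main() {
	fmt.Println(shutdownIfSupported(provider{}))      // extension present
	fmt.Println(shutdownIfSupported(basicProvider{})) // extension absent
}
```

The trade-off discussed above is visible in `shutdownIfSupported`: the SDK avoids a breaking change, but every consumer of the stable interface inherits the type-assertion dance.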
B
So I'm definitely going to put in my thoughts on where our stance should be, but I would encourage everybody to put in their thoughts as well.
B
How
do
we
want
to
try
and
make
a
decision
around
this
like?
Should
we
move
forward
with
this
or
not
I
figure?
We
need
to
come
to
some
kind
of
consensus
around
whether
we
leave
this
verbiage
in
there
or
not.
B
I was going to suggest KubeCon, which is in, what, a week and a half; we could have done it there, but I don't think Tyler will be there, so we'll have to do this in a different forum.
B
Do
we
want
to
wait
until
Tyler
gets
back
or
do
we
want
do?
We
think
we
could
accomplish
consensus,
a
get
to
consensus,
asynchronously.
C
Let's
continue
the
conversation
on
this
issue
asynchronously
and
when
Tyler's
back
touch
base
with
him
and
see
where
he
wants
to
go
with
it.
I
don't
know
if
he's
back
tomorrow
or
next
week
or
what.
B
All
right
that
gets
us
to
the
end
of
the
new
agenda.
Is
there
anything
else
that
anybody
wants
to
add.
B
How about anything, anything at all? Open floor; if you want to go on a rant, go for it.
B
Okay,
well
then,
I'm
gonna
call
it
here,
give
everybody
about
a
half
hour
back.
Thank
you
all
for
coming.