From YouTube: 2022-07-27 meeting
Description
cncf-opentelemetry@cncf.io's Personal Meeting Room
B
Yeah, so I had actually looked at the issue for New Relic a couple of years ago, and I remembered that the difference was quite striking. So if things look similar enough, despite the variations, then I'd say yeah, it looks good. But if there's a problem, it'll be noticeable.
D
Hey guys, I was on vacation all of last week, so I am not quite caught up on what happened last week in this repo.
B
Yeah, I ended up making time to review the meeting from last week, just because I felt like it added some additional context to some of the things going on with the issues. So you might want to take a look at that when you have time.
B
Yes, it's been hot. Do you have AC?
B
I do, luckily, and luckily it is still working. At least it's nothing like last year, where we had the extreme heat, a new record by over 20 degrees.
D
Yeah, in Seattle it's pretty hot as well. I think today it's supposed to hit 94 Fahrenheit, so that's 34 Celsius, and I don't have AC.
B
But yes, I think you're in a worse situation, without the AC.
B
Yeah, my kids are basically enjoying the kiddie pool we have set up in the yard, so they're living in the water.
B
And the other one is the smoke test for the Prometheus exporter. I asked a question on that, Robert, when you get a chance to take a look.
B
Yeah, okay, so I think that's straightforward. I'll just go in and slap an approve on it right now, while I'm at it.
B
Okay, so then, in this other pull request, we're updating some documentation, as well as updating the Dependabot labels for our test applications.
D
Yeah, it comes with some of its own quirks, so maybe there can be improvements if you try to do it from scratch, but it does allow testing against multiple versions of packages, so it's very helpful. Actually, there was a new version of StackExchange.Redis that came out, and we were able to tell, oh, it doesn't work on that.
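The multi-version update behavior being described is driven by a `.github/dependabot.yml` file; a minimal sketch of such a configuration follows (the directory, schedule, and label names here are invented for illustration, not the repo's actual settings):

```yaml
# .github/dependabot.yml (illustrative sketch, not the actual repo config)
version: 2
updates:
  - package-ecosystem: "nuget"
    directory: "/test/test-applications"   # hypothetical path
    schedule:
      interval: "weekly"
    labels:
      - "dependencies"
      - "test-application"                 # hypothetical label
```

Each update Dependabot opens then carries the configured labels, which is what the PR above is adjusting for the test applications.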
B
So we've got another one where we're adding the StackExchange.Redis support. We were just talking about this earlier in the meeting, about comparing the packages. For me, that's just a minor thing; it's something that I was burned by, so I just don't want to be burned again. That's why I'm calling it out.
D
Did you put that in a comment in the PR, or is that just...?
B
Yeah, it's just that I don't want us to invest too much time in a performance testing framework. This is just one case where I know that Redis usage is performance sensitive; I have seen people care about it, and I have seen instrumentation affect it in a negative way. So it's just a sanity check for us.
B
Yeah, but otherwise the changes in this PR are built upon some of the other PRs we've had in the past, so similar patterns, just with a different library.
B
And so, Zach, one thing you might be interested in, just with the background from Datadog: assembly versions, especially for .NET Framework, are often pinned to just the major version, to minimize the amount of binding redirects that are necessary.
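The binding redirects mentioned here are `app.config` entries that map a range of old assembly versions onto the one actually deployed; a minimal sketch (the assembly name, public key token, and version numbers are placeholders, not taken from the meeting):

```xml
<!-- app.config sketch; identity and version values are illustrative -->
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="Some.Library"
                          publicKeyToken="0123456789abcdef"
                          culture="neutral" />
        <!-- Redirect every older build onto the single shipped version. -->
        <bindingRedirect oldVersion="0.0.0.0-2.6.0.0" newVersion="2.6.0.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>
```

Pinning the instrumentation to just the major assembly version means fewer of these entries are needed when the library ships minor or patch updates.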
D
We haven't really seen any issues where, even for private methods, the signature changes between dot releases but it's still the same assembly version. Or if it does, then we just add an overload to handle that one. So whichever path it takes in that assembly version, we'll cover it. I'm not sure what issues we would have with the assembly version if we're doing source-based instrumentation.
B
Yeah, in this case we are using the bytecode-based instrumentation, and so that's where we're specifying the version checks. Okay.
D
Gotcha, okay, yeah. As long as we cover the different overloads (only one is probably going to fire on that hot path), we haven't really seen any known issues. We haven't really seen issues with, you know, publishing new binaries with the same assembly version.
B
Okay,
yeah,
I
I
didn't
look
close
enough
in
this
case
to
know
if
our
minimum
version
was
being
specified
by
the
source
code
instrumentation,
and
so
I
I
wasn't
sure
if
we
were
attempting
to
bootstrap
the
source
instrumentation
on
an
older
library,
where
the
thing
that
the
source
code
instrumentation
doesn't
exist.
Yet.
B
Yep, but it sounds like you found a way to get it to work for the supported versions for this particular library.
B
Yep, so take a look. I think this is really close.
D
In this repo, I don't know how it's being done. When Rasmus was doing the MongoDB one, I don't know how he did it; maybe copy-paste. And in the Datadog repo...
D
Yeah, we had a build step that would do that. Well, now it's gone from our master repo, but you can see it from maybe versions 1.28 or before, or something like that, and just see some of the logic. But it was basically a two-step thing where we built the managed assembly that had all the instrumentations, and then just...
D
Yeah, you get the attributes, just use reflection to read all the information, and then generate it in this certain schema, so it's more or less foolproof. But basically you have to do two builds: one to build the instrumentation library, and then a second one to build and run a program which is going to generate the integrations JSON.
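The two-build flow described above (reflect over the instrumentation attributes, then emit an integrations JSON) can be sketched roughly as follows. This is an illustrative analog in Java, since the actual tool was a .NET program reading .NET attributes; the annotation name, its fields, and the output schema are all invented:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;

public class IntegrationsJsonGenerator {

    // Hypothetical stand-in for the instrumentation attribute the tool reads.
    @Retention(RetentionPolicy.RUNTIME)
    @interface InstrumentMethod {
        String targetAssembly();
        String targetType();
        String minimumVersion();
        String maximumVersion();
    }

    // Example instrumentation entry point carrying the metadata.
    static class RedisInstrumentation {
        @InstrumentMethod(targetAssembly = "StackExchange.Redis",
                          targetType = "ConnectionMultiplexer",
                          minimumVersion = "1.0.0",
                          maximumVersion = "2.*.*")
        public static void onConnect() { }
    }

    // Reflect over every annotated method and emit one JSON object per entry.
    static String generate(Class<?>... classes) {
        StringBuilder sb = new StringBuilder("[");
        boolean first = true;
        for (Class<?> c : classes) {
            for (Method m : c.getDeclaredMethods()) {
                InstrumentMethod a = m.getAnnotation(InstrumentMethod.class);
                if (a == null) continue;
                if (!first) sb.append(",");
                first = false;
                sb.append(String.format(
                    "{\"assembly\":\"%s\",\"type\":\"%s\",\"min\":\"%s\",\"max\":\"%s\"}",
                    a.targetAssembly(), a.targetType(),
                    a.minimumVersion(), a.maximumVersion()));
            }
        }
        return sb.append("]").toString();
    }

    public static void main(String[] args) {
        System.out.println(generate(RedisInstrumentation.class));
    }
}
```

The second build runs this generator against the already-built instrumentation assembly, which is why two passes are needed.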
D
That one's gonna be more work and more error-prone; it'll just take longer to do.
B
So, let's see, this other one is a refactor on the assembly load initializer.
B
So this is a follow-up on another PR that was already done, but it's trying to improve upon it. There's been a lot of back and forth on this.
B
Okay, so the first one: documenting automatic instrumentation. There's a page out there on the OpenTelemetry docs site that talks about library instrumentation. We have a PR out there to change all the language that refers to automatic instrumentation to be library instrumentation.
B
Okay, I can put it with this beta.
B
MongoDB tests, okay. So it looks like Dependabot and our tests discovered that we don't support the newest driver, at least in our tests, and so we want to update.
C
Yeah, I'm not sure; I haven't looked that deep. I'm not sure if it's a problem with the test application, that it's using, you know, some stuff which has been removed from the API, or if it's even worse than that and, you know, the instrumentation does not work as well. I think it's just about the test application, but I thought it was worth creating an issue for it.
B
This issue was a follow-up based on a comment on some updates to the Redis instrumentation library, where a concern was raised that switching from a foreground thread in that instrumentation to a background thread could potentially lead to some data loss in the application.
B
Yeah, Zach, this is one that I'd be interested in your opinion on as well: having to rely on process exit in order to flush data.
D
I can see what we do, but I do think that it's still on a background thread, and we haven't really had so many concerns about that yet. But I'm not 100% sure if it's foreground or background, so I need to actually go back and sort of understand that.
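The foreground-versus-background distinction matters because a background (daemon) thread is killed when the process exits, so any telemetry still buffered on it is lost unless an exit hook flushes it. A minimal sketch of that pattern, with a Java daemon thread and shutdown hook standing in for a .NET background thread and `ProcessExit`; the class and method names are invented:

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public class BackgroundExporter {
    private final Queue<String> buffer = new ConcurrentLinkedQueue<>();
    private final Thread worker;

    BackgroundExporter() {
        worker = new Thread(this::exportLoop);
        // A daemon thread does NOT keep the process alive: on exit it is
        // simply killed, and whatever is still in `buffer` is lost.
        worker.setDaemon(true);
        worker.start();
        // Hence the reliance on an exit hook (the .NET analog would be
        // AppDomain ProcessExit) to flush the remainder before shutdown.
        Runtime.getRuntime().addShutdownHook(new Thread(this::flush));
    }

    void record(String span) { buffer.add(span); }

    int pending() { return buffer.size(); }

    private void exportLoop() {
        while (true) {
            flush();
            try { Thread.sleep(5000); } catch (InterruptedException e) { return; }
        }
    }

    // Drain and "export" everything currently buffered.
    synchronized void flush() {
        String span;
        while ((span = buffer.poll()) != null) {
            System.out.println("exported: " + span);
        }
    }
}
```

A foreground thread avoids the exit hook but delays process shutdown until it finishes, which is the trade-off the concern is about.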
B
So, Robert, I know that you brought up some ideas on how to handle this, but some of the things that are standing out to me, that I haven't necessarily written down yet (it's more just out-loud thoughts): it seems like this is a case where there are different dependencies for this instrumentation library, depending on what framework the application is ultimately targeting. And so with .NET 6 there are no additional dependencies, but with netcoreapp3.1 there are some additional dependencies, and we're only checking for one of them. So I don't know if this is a case where the problem truly goes away with a newer version of netcoreapp3.1, or...
B
At the same time, netcoreapp3.1 is supported through November or December, something like that.
C
I'm not sure, Chris, if it's only about .NET Core 3.1. When I read it, I thought that it might be possible to also reproduce it on .NET 6, but I just read it quickly. I think this is a similar case, where someone can, you know, host for example .NET 6 without those references. That's my guess.
B
Okay, so I think this is fairly straightforward: it's just adding an ignore rule for one of our projects that has a reference to this, and here's...
C
Not really. The problem is, I saw that Dependabot works like that: it's not only updating the references in the folder that you're, you know, setting; it's checking all the references. So this PR was created because of what it's watching in the unit test projects. Basically, there were five PRs which were doing the same thing, and I didn't know what to do, just to be honest.
B
So this is for updating our diagnostic logs to reuse the existing diagnostics logger.
C
So during the last meeting we were feeling that probably we could change it in managed code, and in native code probably we'll just try to, for example, put the logs in the same places, etc., because, yeah, whatever will be simpler. But also, for sure, we need the logs; this is the one thing that's for sure.
B
So I don't think we should tackle this with the current beta, but perhaps target it for the next beta, just to try to reduce the risk.
B
So, we've already walked through the PRs. Does anyone have anything in flight that they want to talk about?
C
Yes, so the second one in progress, okay, because here there are two PRs which are not in our repository. The first one is the rename of "automatic" to "libraries", and I also want to call out that I think it would be good to have another approval here, or a review from a maintainer, and also on this one as well.
B
Yeah, so this is one of the PRs that I talked about earlier, where we're renaming the current documentation that refers to .NET automatic instrumentation to be .NET library instrumentation, and Robert did some work to make it align with how Go is using the language for the two different terms.
D
Do they have the same concept of automatic instrumentation? Because we kind of have automatic where you build against the SDK and then you just light up which ones you want, but we also have automatic which is, like, no code changes. Does Go have the same thing?
C
So Go right now has these library instrumentations, where they change the code. But they are exploring making the automatic way work using eBPF probes, and right now they have this section where they call the one "automatic instrumentation" and the other "library instrumentation". And yeah, everyone from the .NET SIG also says that it's good to clarify it here.
C
The problem is also the glossary, which some find not descriptive, and this is about the second PR: I tried to clarify the term automatic instrumentation.
D
Can you add these as tasks, or like little checklist items, on our issue?
B
Yeah, in the documentation issue Robert brought this up in here, so you should see it in your GitHub notifications.
C
Okay, so I put one last comment on the commit. Could you go back, or have you already closed it? No, the last one you committed. I'm committed to this discussion, fail-fast versus fail-safe. I didn't have a chance to work on it, because I said that I would follow up, and I'm going on a one-month PTO, so someone can take it, or I can just continue it when I'm back. I don't think it's extremely important.
C
I think it's fine for it to wait or be assigned back.
A
Yeah, so...
B
And so, since we're in agreement on that, I'm leaning towards: we can take our time on this, and I think it can wait. But any other thoughts?
D
That sounds fine to me. Do we have any sort of documentation, maybe in the developer docs, of that preference? So, kind of the guideline that we can fail fast on initialization, but not during instrumentation.
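The guideline being described (fail fast during initialization, fail safe on the instrumentation hot path) can be sketched as follows; the class and method names are invented for illustration:

```java
public class InstrumentationGuideline {

    private boolean enabled;

    // Initialization: fail fast. A misconfiguration here should surface
    // immediately, before the application starts doing real work.
    public void initialize(String endpoint) {
        if (endpoint == null || endpoint.isEmpty()) {
            throw new IllegalArgumentException("exporter endpoint must be configured");
        }
        enabled = true;
    }

    // Instrumentation hot path: fail safe. A telemetry bug must never
    // take down the instrumented application, so suppress and log instead.
    public void onRequest(Runnable recordSpan) {
        if (!enabled) return;
        try {
            recordSpan.run();
        } catch (RuntimeException e) {
            // Log and continue; the user's request proceeds untouched.
            System.err.println("instrumentation error suppressed: " + e);
        }
    }
}
```

The asymmetry is the whole point: a thrown exception at startup is a clear signal, while the same exception inside a user's request path is an outage.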
C
Yes, so we were just discussing on our European meeting that, basically, sometimes people from our time zone cannot join because it's late, and the question is if it's acceptable for you to have it sooner, preferably by one and a half hours, so it's 9:00 a.m. for you. I don't know if 9 a.m. is okay for you; I don't know when your kids are, you know, starting school, etc. At least in Poland, 9 a.m. is like a good time for starting dailies, etc.
D
I was gonna say: 9 a.m. I'm already online then, and that's a good time for meetings.
D
So I'm okay with earlier; 9:30 or 10 might be better, but I can accommodate nine.
B
But otherwise, yeah, moving the time works for me.
B
So, another topic. I don't know if it's the right time to bring it up without having Paulo and Raj, but there was a performance discussion last week, and there were questions about what other vendors were doing for performance testing, and even other SIGs. So I know, from the New Relic side, specifically with the .NET agent...
B
Because everybody's application was so different, and things are executed differently, the memory overhead numbers varied and the CPU overhead numbers varied. Similarly with network overhead; it varied as well.
D
Yeah, we have the same thing at Datadog. We have made a conscious effort to not publish any sort of performance numbers, because of the same issues: customers are running on different types of machines, they're running different workloads, so we can try to approximate it, but it's never gonna match, and so there are gonna be mismatched expectations. Internally, we do have dashboards, and we keep track of that.
D
So, in terms of that, we have a couple of different stress tests, like regarding network, and we also have micro-benchmarks as well, specifically for just the inside of our tracer. We run that on, like, every PR, so that could be a good way to get some numbers, using BenchmarkDotNet. But yeah, I would also not advocate publishing numbers.
B
Yeah, and speaking of variation: even if somebody was running the exact same app that we were using to benchmark against, it depends on the load of the system they're running that application on.
D
Yeah, I've also seen issues where we tried to set up a pool of VMs, but the underlying CPUs were different, and so one node was actually faster than the others; so, run to run, you couldn't actually tell. You would have to find a way to assign the same VM, or the same host wherever the Kubernetes pods are set up; like, you had to make sure you're running on the same host, which is another layer of complexity, because you can't tell that without really in-depth metrics.
B
Yeah, we're approaching the end of the summer holiday season, so I suspect that, Robert, you won't be the only one gone this month or next month.
B
Preferences? So, honestly, I'm thinking that the set that we have in flight right now is good enough for the next release, and what I'm imagining is we can just slowly tackle other ones with follow-on betas.
B
I know both are popular for us. My one hesitation with having WCF in our current beta is that it's .NET Framework only, and we have some research that we want to do for .NET Framework.
B
Okay, yeah, do we already have an issue submitted? So you mentioned gRPC being one of the popular ones that we haven't assigned. Did we create an issue for that and assign it?
B
So I think if we've got the popular ones, based on your data, in the current beta, I'd say we can call that good enough for 0.3, and we can add more in follow-on betas.
A
I have one more comment on WCF: the instrumentation on the contrib side, I think, also supports .NET Core, not only .NET Framework.