From YouTube: 2021-10-20 meeting
C
Okay, I think let's go straight to some discussions that popped up in PRs. I think the first one, the most recent, is about branching and what PR process we should follow. The open source projects that we typically follow, like the ones from the Linux Foundation and many others, typically work like this: when you create PRs, you create your fork first, then you push to a branch, and then you open the PR.
C
I think we should follow that model. I know that, especially for people that worked on Datadog, the upstream workflow is a bit different, but I think that's the established model for open source projects. So I think we should just follow it; it's not hard. It's just a matter of: you fork the repo on GitHub, add a remote, and then, when you publish your branch that's forked from the main branch, you push to the branch on your fork when you are doing the push to the remote.
C
I was briefly discussing this with Robert. He pointed out that there are some situations where we may want to allow different behavior, but I think the general approach should be that one. Does anyone have any concerns about trying to follow that practice?
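The fork-and-branch workflow described above can be sketched with plain git commands. This is a generic illustration, not the project's documented procedure: repository and branch names are placeholders, and two local bare repositories stand in for the upstream GitHub repo and your fork.

```shell
# Simulate the fork-based PR workflow entirely locally:
# "upstream.git" stands in for the project repo, "fork.git" for your GitHub fork.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q --bare upstream.git
git init -q --bare fork.git

# Clone the upstream repo and seed it with an initial commit on main.
git clone -q upstream.git work
cd work
git checkout -q -b main
git -c user.name=dev -c user.email=dev@example.com commit -q --allow-empty -m "init"
git push -q origin main

# Add your fork as a second remote.
git remote add fork ../fork.git

# Branch off main, commit, and push the branch to the fork (not to upstream);
# the PR is then opened from fork/my-feature against upstream's main.
git checkout -q -b my-feature
git -c user.name=dev -c user.email=dev@example.com commit -q --allow-empty -m "change"
git push -q fork my-feature
git branch -r
```

On GitHub the only extra step is clicking "Fork" first; the `remote add` and `push` lines are the same, with the fork's URL in place of the local path.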
C
All right, so I think we are in agreement. I don't know, perhaps I'll write some README note about the procedure, in case someone is not used to that. It's pretty trivial, and I think everyone here is familiar with it, but just in case someone shows up who is not familiar with it, we'll have some instructions somewhere. I'll try to add that to something.
C
Yeah, well, send me the issues and we will use those.
D
A related question to this: do we need to adjust the branch protection rules, or do we leave them as is?
C
Robert pointed out to me a case where we may want to relax this rule. Can you remind me, Robert, what you mentioned?
E
If there is something important for us, probably a changelog entry or a README with some gotchas, things like that. Because Dependabot creates a branch, it doesn't create a fork, and if you want to add some commits on top of it, then you will need to relax this rule.
F
We could add another branch filter, or a protection rule specifically for Dependabot-prefixed branches, that disables the hard signing requirement that we have right now.
C
I'd feel more comfortable if we protect all the branches and open exceptions as we need them, for Dependabot and other cases.
C
Okay, yeah. So, to answer Chris's question: I think we keep the rules as they are and open exceptions. I would say the following: let's open the exceptions as needed, so when we have the concrete case of Dependabot, we create them. So for now, the good thing is: no action.
C
Yeah, that's a good thing to have discussed, at least briefly, between us, so we are agreed. Another thing that was mentioned on Raj's PR was how to organize the integration tests. I think upstream has a good way; when I did the last pull from upstream, I saw a lot of improvements there.
C
So one alternative that crossed my mind, and I think crossed Robert's mind some time ago, was perhaps, instead of a big integration test DLL, to have a smaller integration test DLL that is also the application itself. Because in .NET we can create that and run it as the application, and also use the same assembly to be the tests.
C
The advantage of that is that each one is self-contained, but it is still one application per integration, and perhaps it makes it harder to access stuff from the tracer, because you're in the same DLL. So, let's say, perhaps it's easier in the upstream model.
C
That is, if you want to have a reference to the tracer itself in the tests. So these are the two models that come to my mind. We can just keep following the upstream model, but at first these are the two choices that come to my mind.
D
There's kind of a third model, too, that New Relic was switching to, to manage the explosion in the number of different test apps: we created a more generic console test app where we can dynamically load different libraries for testing purposes.
C
Go ahead, Chris.

D
I was going to say I linked to a README about it; I want to say it was on Raj's PR, where he was adding the integration tests, just as an example.
B
Yeah, maybe someone can do an intro, and so we can replicate it easily to the others.
F
Yeah, I was wondering what sort of experience you had with that, Chris. If you could speak to it: were there instances where that didn't work well, or any edge cases you ran into that we might need to consider as we're writing tests?
D
Yeah, so where it was harder (and I don't remember exactly where we left off, because I haven't been working directly with that team for a while now) is when dealing with certain classes of web applications, where we actually had to publish out more than just the test library in order to get the web app to work.
D
That's where you've got to publish out all the JavaScript and views and things like that. So I think it was partially solved, but I don't remember all of the details. One of the nice benefits, though, was that every time you have to publish a new web app, that's at least 30 different boilerplate files you're adding to your repo, and so it just helped minimize that.
C
I see. One thing that Datadog has that is very good is the comprehensive testing of versions. Is that also supported with this?
D
Not to the same extent. The comprehensive testing was more manually managed: in your test library we had some ability to declare which version of .NET to run the test app with, or to do the publish with, but...
D
I think the way Datadog handled the comprehensive testing was much cleaner.
F
I think the issue would be, so, the comprehensive testing is where you're testing multiple NuGet package versions in the same build run, but then you get some exponential factor of: okay, I want to test this version of Redis, so that's one build of the all-in-one application, and then, wait, I also want to test multiple versions of this other dependency, like Postgres. So I think that...
C
Okay, so you need multiple builds, but in the end you get the coverage of the target libraries that you want. It's a single build of the instrumentation and that stuff, because the instrumentation is deployed according to ranges that are captured at runtime, but you need multiple apps for the target applications.
C
Makes sense, because we can say a range for NuGet on the project, right? And especially for .NET Framework, with strong-name signatures, that becomes relevant. But this is already going further than the discussion we're having; I'm just thinking that perhaps we could simplify the build. But in the end you still have to change the dependencies somehow to run against the version that you really want to test; the build is the fireproof, sure way of getting that.
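The NuGet version range mentioned here can be expressed directly in a test project file. A generic sketch (the package name and range below are placeholders for illustration, not the project's actual test setup):

```xml
<!-- Hypothetical test project fragment: reference the instrumented package by a
     version range rather than a single version. NuGet interval notation
     "[2.0.0,3.0.0)" means >= 2.0.0 and < 3.0.0; the build resolves one
     concrete version from the range. -->
<ItemGroup>
  <PackageReference Include="StackExchange.Redis" Version="[2.0.0,3.0.0)" />
</ItemGroup>
```

Since each restore still resolves a single version, covering several versions of the same dependency still means one build of the test application per version, which is the exponential factor raised above.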
C
I'm just thinking of ways to reduce the builds while the comprehensive run is still validating all those versions that we're trying to cover with the instrumentations. But, more important, we have these alternatives, and I think we should...
C
I think, for the time being, we should follow what we have at this moment. But while we are relatively small in the number of instrumentations is the time to discuss it, if anyone really wants to invest in it or try to improve that. Because once you develop a model, people are going to say: I need to add a new one, I'm going to look at what exists, and I'm going to copy it and do the same. So I think it's the right time.
C
I don't know what people feel, but perhaps we can create some kind of, I will not say a working group, but a small thing to follow: hey, let's look perhaps at what Chris mentioned from New Relic, and perhaps next week we'll have some quick questions for Chris and discuss it. So not a high-priority thing, but something on the back burner for us, to have the discussion.
D
While we're loading, just a quick question on Zach's PR: do we want to merge it as is, and then just do a follow-up to, what is it, remove the CMake deprecations?
C
Yeah, I'd rather do that. I think it's a small and safe PR; let's do that, and then Zach, or any one of us, can do the removal.
C
Yeah, so even better: let's merge Zach's PR as it is, and Erasmus will follow up. Perfect.
C
All right, so you're saying that you think there are issues that we didn't put here. So this is basically the integration tests, which is what we discussed here before. So let me add a little bit saying that we are going to be following these on a secondary track, and for next week it's for people to take a quick look at what New Relic has, from the links. Let me copy the link here before I...
C
...lose this link, from what Chris shared, and we go from there. For this new item, I would say that this is something that we don't need for our planned beta, but I think we want to make some progress on it independent of the beta.
K
Sorry, which one do you mean? The, normally, the...
E
The new issues. There's one more here which I had already put into the committed state, but I was wrong; it should be in progress, because I'm working on it. Or maybe, yeah, it's in progress already. Yes, this "support essential OpenTelemetry variables": you may just take a look, if you think these are the only ones that we need currently. Chris already took a look, but I have removed one line.
E
I had to remove the OTEL_LOG_LEVEL one, because I think it would be hard to support it on our side, because you would need to support it in both native code and, you know, managed code. And also I don't know how it works in OTel, where I think, basically, the log level almost does not exist, because everything is going through these activity sources, right, and I don't know if there are log levels there.
C
Actually, do you mean the log levels for the tracer that we are using, for the SDK, or are you talking about the log level for operational...
D
Yeah, and I think that it supports levels, if I'm remembering correctly, because...
C
You want to receive... correct, that's correct: you can just say, hey, I just want to see the errors. So it supports that. By the way, we need to have a consistent story there or, at least, if we don't have a consistent story, we need to be clear to the user about how to get the traces, because we have the files that are generated by the CLR profiler, and we also have the EventSource events.
C
Right now they are very different, and at least we should have a small explanation in a single place, so people can refer to what each log represents and how to get it.
D
At the same time, for our project, I think we may just want to simplify things and always write out to the file logs, the way the Datadog agent does today, because it just greatly simplifies troubleshooting.
G
Yes, exactly, yes, but that would be too many logs, right? Yeah, the SDKs, for each and every telemetry item, are going to emit some log. And they have a self-diagnostics feature, which can be turned on just by enabling a JSON file there. So both should be kept separate; on a needs basis they can enable that. Whatever SDK feature we have, we can utilize and enable.
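For reference, the file-based switch being described appears to be the OpenTelemetry .NET SDK self-diagnostics feature. As I recall it from the SDK's troubleshooting docs (treat the file name and field names as assumptions to verify), you drop an `OTEL_DIAGNOSTICS.json` file into the process's current working directory, and the SDK starts writing its internal EventSource events to a log file:

```json
{
  "LogDirectory": ".",
  "FileSize": 32768,
  "LogLevel": "Warning"
}
```

`LogDirectory` is where the log file is written, `FileSize` is a size cap in KiB (the file is used circularly), and `LogLevel` is the minimum EventLevel captured; the SDK is supposed to notice the file appearing or disappearing while running, which fits the "enable on a needs basis" point above.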
D
Yeah, so if the concern is performance, though, that's where I think we should really focus on which levels we enable by default, because I suspect that, at a high enough level, there aren't going to be many messages coming through there.
G
Sure, but performance is not the only problem. I'm thinking about the file that we are writing to, or the space we are writing to. If an app is going to restart frequently, in our case either we will end up creating too many files or, if you're planning to write to a single file, it's going to grow. So we need to think about that too: how we are going to overwrite. We already have that implementation in the SDK space; it listens on the, like...
G
It listens to the EventSource and writes to a text file; the implementation is already present. We just enable it by creating a JSON file in the root of the SDK or the app folder, I believe, and we have a control to say: hey, I need error only, or warning, or everything. We have that part in the SDK, so I don't think we can take that over. The reason is that we need to start collecting the traces from the beginning.
G
As Paulo mentioned, we need the traces from the CLR profiler too, so we cannot go with that one. So maybe collecting a minimal trace will help us from our project, and the SDK part may be enabled by that feature; we can keep it separate and rely on the SDK itself, if it needs to be done. The other part needs to be included here instead of taking that over.
D
Okay. That they won't always have write access to the folder that that JSON file needs to be placed in is part of the problem.
C
Okay, yeah. And one thing, because it's auto-instrumentation: something that I think is good about the approach from upstream, from my perspective, is that things like errors are always logged. So if somebody tries it and it's not running, without even asking them to go change the debugger, or change a file, or anything, you can just ask for the logs and, depending on the kind of error that you are looking for, usually it's there. So I think that's what's attractive.
G
I completely agree. I even thought of speaking about a feature like a self-serve kind of tool, even more than what we have: we need to write it to some place, and we need to have something that reads that and says, hey, this is what is failing for you, go and fix it. We cannot expect every user to understand the error log.
G
So the first thing is definitely to write to a log file or something, and we need to have a mechanism, or some UI or something, provided just for the users to read and understand what it is. That would be a good win for us: not only the logging, but taking the logging and showing what the status is, is what is going to help us here.
D
Yeah, now, you mentioned some other concerns about log rotation and the number of log files being generated because of process ID changes. This is something that Datadog has been using in practice for a while; New Relic does something similar and has been for a while, and I know, from the New Relic side of things, that it hasn't been a big enough problem to address.
D
It has come up from time to time, with the usage growing over time, but it hasn't been as big of a concern as other things, and my assumption is that it's likely a similar case for Datadog, but maybe Zach might have more insight there.
F
I'm sorry, I kind of tuned out, because I was trying to update my PR. We haven't really run into too many issues with log rotation. We have a small number of log rotation files, because we use Serilog, but we haven't gotten issues about losing information or it growing too large.
C
Yeah, I was just going to say that it's the same model as Datadog, the one that we have at Splunk, and so far we haven't heard anything about too many logs or things like this. The only thing that we did, and I think upstream also did, is that, just for convenience in certain scenarios, we also enable logging to standard output.
C
And that, of course, is very convenient if you are in a situation where you are running something on a dev box (it could even be a user, but on a dev box): then you just say, hey, set this, and you can see the output right there.
G
We even have something like that, but it is done in a similar way to how the OpenTelemetry SDK handles it. So we have a member and we do it in a cyclic way, so no issues, but I'm just worried about what to implement, having the OpenTelemetry SDK already as an implementation.
C
Perhaps we can do everything in code, so that we can then have one environment variable that the user can set, and the thing just works with some basics? If they want to do something more specialized, then they go to the lower-level interface and do all the stuff. But if you want something very simple, can we add that on top?
D
Yeah, simply not needing to rely on having a file written to the system, I think, would be all we'd really need.
C
So I think this is perhaps something we should also... this may be something at least to have an approach decided on for the beta. I think we should add an item for our beta to at least decide: okay, this is what we're going to do for the beta.
C
Okay, so jumping into the issues.
C
Yeah, I think these are required, for sure. I don't know if we are missing something, but perhaps the BSP ones: we could live without configuring the batch, we could accept the defaults. But all the other stuff, I think, is pretty much required.
C
Yeah, so, from my perspective, as I said, we may be missing something, but everything that's here, I think, is really needed.
E
I also created a PR in the specification, basically about the OTel propagators, but it's a more generic one, because in the specification it is not set out what should be done if there is an empty value of an environment variable. And some of them, like, for example, the traces exporter, have defined that if it is set to "none", then...
E
...it means that there is no exporter configured, right. And the defaults for the OTel propagators are, you know, there's this W3C one. So, basically, if you have more knowledge about other languages, you can take a look, because I propose to handle the empty value in the same way as if it was not set at all.
C
Yeah, it makes sense to me and, I think, especially for us, we want to make sure that it works in the case that stuff is not filled in. We need to have good defaults; we can't depend on the user. If something is required, it should be made very visible, but in general it should not depend on anything from those.
C
By the way, I have the PR updating the NuGet packages.
C
Yes, yes, I know, I know that. Just an aside here, before clicking on the issues: I'm going to close the Dependabot PRs in favor of the PR that I opened, because the Dependabot ones were updating some of the stuff, but not all. So I created one updating all of them, and I will close the Dependabot ones in favor of that one.
B
Yeah, so the issue is basically that the samples are referencing the CLR profiler managed DLL directly, and when there are some issues with the CLR loading, you can notice that the managed DLL is loaded anyway, because there is a hard reference to the ClrProfiler.Managed DLL, which itself has references to other managed DLLs.
F
Probably the reason that some of these exist is to check some things, like maybe an ASP.NET Core web page showing, when you hit the home page, whether the profiler is attached. Upstream, at Datadog, we've basically removed all of these references ourselves and just used some reflection routines to sort of get at that information.
C
I see. But I think, to Josek's point, basically there is really no need for that; perhaps it was a convenience to indicate some other things. So I think the onus is on us to remove the ones that exist and to not add more of them.
C
Do we have those right now in the integration tests?

C
Are we referring to something that we shouldn't, because it's the SDK or something, in the integration tests directly?
C
I see, okay. So, if I'm understanding this right, the point is that we should remove any improper reference that we have there, if there is any. I'm not sure because, right now, we have very few of these tests, so I'm not sure if we already brought some in or not.
B
What is the name of the folder, the profiler one or something like that? So it's bringing it in from different parts; it's not like one NuGet package, not looking at the NuGet packages when building.
C
So, as I was saying, I think mostly it's just for us to be aware of that, with the hope that this thing really restarts adding more instrumentations sometime next year, at the beginning of the year. But then we need to have the good examples for people to look at and move ahead from there. So, if we do have these references, we need to remove them slowly. Then, I don't know, Erasmus...
C
Can you take a look and remove any, if you find them? If you don't find any, then it becomes a practice, and then, when there is a new merge, we look.
B
Yeah, basically I need to rewrite the Directory.Build.props, and that means removing both the managed and native references. And there is, I think it was a test helper, that is trying to find the reference to the native DLL; it has, like, three different versions of how it could find the possible location of the native DLL, so we just need to make sure it's picking up only the NuGet package folder.
B
Okay, makes sense, makes sense to me, yeah. And I think I already have that code that's looking for the NuGet folder, but it's in the ASP branch. Oh yeah, and about the ASP branch: if somebody has time and knows what's wrong with the CLR profiler, and why it's not attaching when it's running on GitHub Actions and inside the container... it's just not attaching only inside the container on GitHub Actions; locally everything is fine.
C
I just noted that we have Michael in the meeting; I didn't say it before. I think Michael is also from Microsoft; I think I saw that in some PR, right? Yeah.
H
That's right, yeah. I'm going to be starting to work with Raj on some of the, what is it, some of the other instrumented libraries for the auto-instrumentation.
I
All right, I like that; it's a party. Yeah, let's join the party.
G
I notice it's not open; just a little... let me restart that and share my screen. So I have a very simple application, which is built using .NET Core 3.1, and I'm trying to load a DiagnosticSource which is a higher version, 5.0. Let's take a look at it and see what the behavior is and what we are trying to do. I'm going to create an issue after this demo to follow up with, so it will be easier for us to discuss in that issue.
G
I don't know why Visual Studio has crashed on that project. I had it ready, so let me try to bring it up again.
F
I can jump in really quick. Well, I just signed that up. I had to open a new PR for that CMake stuff, because in order to update to main it had to do a push, and that was not allowed, so I opened a new PR. I'll paste the link in the Zoom chat, so if you guys could approve that, that'd be cool.

C
Thanks, Zach.
G
Am I back? Yeah. This is a very simple console app; I have nothing in it. All I'm doing is just printing whatever libraries are loaded inside this console app. That's what this app does, so it works without any... I'm not doing any startup hook or anything, so it prints, and this app uses a System.Diagnostics.DiagnosticSource which is from the 3.1 version. So I'm using a startup hook now; in the startup hook...
G
All I'm doing is an assembly load of the 5.0 version here and, in case the load fails, I have a resolve event handler to load it manually from the direct location over here. So definitely this fails, and it comes here; even this also fails. So let me add a dotnet startup hook here.
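For context, a .NET startup hook is wired up through the `DOTNET_STARTUP_HOOKS` environment variable: the runtime calls a static `StartupHook.Initialize()` in the listed assembly before the application's `Main`. A minimal sketch (the DLL path is a placeholder, and the `dotnet` invocation is shown only as a comment since the demo app is not available here):

```shell
# Point the runtime at an assembly whose StartupHook.Initialize() should run
# before the application's Main. The path below is a placeholder.
export DOTNET_STARTUP_HOOKS=/opt/hooks/StartupHook.dll

# Any .NET app launched with this variable set will run the hook first, e.g.:
# dotnet ./ConsoleApp.dll
echo "$DOTNET_STARTUP_HOOKS"
```

This is the same mechanism the demo uses to attempt the `Assembly.Load` of the newer DiagnosticSource before the app's own copy is loaded.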
G
So, if you look at it, there is no fancy thing I have done: the moment I try to bring in a DiagnosticSource version which is higher than whatever the application has got, we simply crash. We don't need a startup hook or the CLR profiler to demo this; even in my code, if I do this assembly load inside my Main method, this issue can be reproduced right there. I can do an assembly load of, like, DiagnosticSource version 5 or version 6, and it will crash.
G
So this is the real problem that exists with DiagnosticSource. Now, OpenTelemetry has taken a reference on the logging library also, so that's another space where we need to handle the same scenario. The logging is not directly used by OpenTelemetry; when I looked at it yesterday, it was when using the Zipkin exporter that, I think, they brought it in...
G
...as a reference, that logging package reference. Whoever had a look at the PR yesterday would understand what I'm speaking about; it is that Microsoft logging configuration library which I'm speaking about. So this is the issue we are trying to solve. So I'm going to create an issue on this and, as I've already shown a small demo of how we can resolve this, instead of doing that as any more demos, I will start creating the PR.
G
The first thing, my plan, is to create the structure to support this and then change our loader and everything, and I want to really do this after our beta, because this might act as a blocker for our beta, as we might go slow in this space. Because, whatever we are going to do, every instrumentation library should have a proxy mechanism to, like, export the activity created in the lower version to the higher version.
G
And I think our plan is to go for the first beta version release on the 9th of November, with .NET. So we can think about merging a little later, but I will add the draft and take the PR feedback earlier than that.
C
So, just thinking here: because this is .NET Core, for all the deployments that we care about, do we have the deps.json or not? Just thinking.
G
The deps.json will work only with the framework-dependent model. There are two deployment models: one is the framework-dependent deployment model, and the other one is the self-contained deployment model. So the deps.json will work perfectly with framework-dependent but, if we think about the other one, it does not honor the deps.json, so additional-deps cannot be used; our runtime store feature cannot be used with that one.
G
So this is a feature... I also spoke to the .NET team about this problem, actually. Currently they cannot do anything about this one in .NET 6, but the plan is to get some runtime changes to avoid this in .NET 7. I don't know how far that's going to go but, at least, we are in conversation with the .NET team to get this addressed in .NET 7.
C
So I still... I think we already mentioned this, I'm sure you already mentioned this, but whenever we get to that point, my question is: okay, you have the wrapper type to represent the higher type, but then what is the SDK going to access?
G
So it's always like this: DiagnosticSource, if you look at it, has backward compatibility; it is not broken. So, at any point in time, with our model or whatever the model, it's always going to load the latest version of DiagnosticSource.
C
Yeah, yeah, but I think my worry is the reverse. Like this example: you have a very low version of something that is shipped with the runtime. I don't think the SDK can work with that version; it needs six and higher.
G
The thing is that, if you look at it even now, with the SDK you can bring in an ASP.NET 3.1 app and use the OpenTelemetry SDK with it. So what happens is that, whenever we do a dynamic injection, that app gets DiagnosticSource version 6, which is brought in by OpenTelemetry.
C
I think the biggest gap is if you go to an application that's built against 3.1, and auto-instrumentation typically doesn't have a reference... so the application was built against 3.1, self-contained: then what do we do to inject this newer version needed by the SDK?
C
Yeah, yeah: it's the scenario for which, I think, right now we don't have any workaround.
G
The thing is that, even now, we spoke about the additional-deps for the self-contained case; we don't have it at this point. If they enabled that for the self-contained application, that would suffice for our need; we wouldn't need to do anything, everything would get resolved at build time. But the .NET team is not in agreement to do that, because that's going to break the self-contained design principle.
G
So what they are planning to do is to add a feature at the runtime so that, even though the libraries would have been resolved at build time, at run time it will do one more pass: it will look at a special folder to see if there is a library present there, and it is going to honor that and add it to the TPA list, which is what the .NET runtime normally does.
C
One thing that I think is a question, and I think it's on the back of everyone's minds here: what's the slice of the people trying to use the auto-instrumentation that are going to have self-contained apps?
C
You know, I don't have any clue about that, because what I'm thinking right now is that this is something that we're not going to be able to cover, for instance, for the beta, for sure. But how many people are we excluding by not being able to cover that? Is it a high-priority scenario? I don't have a clue about that. My gut feeling is that it's not that important; typically, people have the SDK.
C
They even use containers that come with the SDK. But this is just a complete guess; I don't have any numbers to back that up.
C
Yeah, yeah, so yeah: if there are any numbers that you can share about that with us sometime down the road, that would be nice. It would give us a better perspective on how many potential users of OpenTelemetry are not served if we don't have that.
G
Sure, I can get a rough number from Saurabh, just by checking with him how many people use FDD versus SCD; that can be got very easily, I believe. I don't know whether it is shareable, but rough estimates can be spoken about. Let me also invite him to the next meeting, to see if he can share that.
C
Yes, yes, sounds very good, yeah. As I said, I'm very curious about the solution when we have the SDK requiring something that's not there at run time, in the self-contained case. So yeah, please keep in touch with us in this regard, and we'll work together on whatever we can contribute to this; we are also just trying to contribute.
I
All right, anyone else?
C
All right then, we had a full meeting; we took more than the hour, so I think it's good for the week.