From YouTube: 2020-10-07 meeting
Greg: While you wait — Greg, I sent you an email. Maybe I have the wrong email in the database.
D: We have so many — one of the things that I started doing is: we have a big backlog of GitHub issues, and I'm trying to get rid of that.
B: So OpenTelemetry — one of the tenets to declare GA and everything — because it merges them, we do have to support OpenTracing. And actually, this is the thing, at least for —
B: That's — at least, I think, we do have a bunch of customers that are actually instrumented with OpenTracing.
B: How do you guys integrate — no, not directly via auto-instrumentation; also some of our own instrumentation. But then we use the libraries that export OpenTracing, and we consume that in the backend.
D: Because I know very little about OpenTracing — but if you already instrumented an application using OpenTracing, then that's completely — it's not compatible with OpenTelemetry, right? You have to essentially change your code.
B: Now, the instrumentation itself is an OpenTracing client, right? So you can write that with OpenTracing, and our OpenTelemetry library should have a shim to support opentracing-dotnet. I tested it a couple of months ago and it was working fine — because, as I said, we do have these customers that use OpenTracing. So one of the things that we are keen to do, for the ones that are doing pure manual instrumentation, is switch them from our library to OpenTelemetry.
D: Yeah, totally. I was just thinking — because I don't know much about OpenTracing — what I can tell people who request it. So right now, of course, we don't have the activities yet; but one day, when we do, can I just tell them: okay, you just switch from OpenTracing to OpenTelemetry, full stop? Or is it hard for them — so that, you know, so that we have churn?
B: The plan, I think, is to enable the shim over OpenTelemetry, because OpenTracing is just the specification. You instrument your app, and what you need to do to produce those traces is to create an instance of the tracer — so they already have code creating the instance. What I would tell our customers in that case is: instead of using our library, or whatever library you are using, to create the tracer, use OpenTelemetry.
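B's migration story — keep the OpenTracing-style instrumentation calls and only change the line that creates the tracer, so a shim forwards everything to OpenTelemetry — can be sketched with toy classes. This is an illustrative Python sketch, not the actual OpenTelemetry .NET shim API; every class and method name here is invented for the example.

```python
# Toy OpenTelemetry-style tracer: the "real" backend that records spans.
class OtelTracer:
    def __init__(self):
        self.finished = []

    def start_span(self, name):
        return {"name": name}

    def end_span(self, span):
        self.finished.append(span)


class _Scope:
    """Context manager that ends the span when the scope closes."""
    def __init__(self, tracer, span):
        self._tracer, self.span = tracer, span

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        self._tracer.end_span(self.span)


# The shim: exposes the old OpenTracing-style surface (start_active_span)
# but forwards every call to the OpenTelemetry-style tracer underneath.
class OpenTracingShim:
    def __init__(self, otel_tracer):
        self._tracer = otel_tracer

    def start_active_span(self, operation_name):
        return _Scope(self._tracer, self._tracer.start_span(operation_name))


# Existing instrumentation keeps its OpenTracing-style calls;
# only the line that creates the tracer changes.
otel = OtelTracer()
tracer = OpenTracingShim(otel)  # was: tracer = VendorTracer()

with tracer.start_active_span("http.request"):
    pass

print(otel.finished)  # → [{'name': 'http.request'}]
```

The point of the sketch is that the `with tracer.start_active_span(...)` line — the customer's instrumentation — never changes; only the tracer construction does.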
B: Yeah — so for .NET I already tested that, as I said, one or two months ago, with the various scenarios that we have to support; because as soon as it goes GA, we are switching the people that do pure manual instrumentation to OpenTelemetry.
D: Cool, that's good. Then, as I go through our Datadog stuff, for everything that is a feature request I would just say: basically, OpenTelemetry is the future. And if there might be bugs, then I'll talk to, like, Lukas — probably you — and ask whether it's really a bug, because you have more context there; and then we can consider if we want to fix it.
B: Yeah, so, okay — I want to start with an update about the ActivitySource. So I looked at that prototype from Greg; it covers a bunch of stuff and it's almost there, and I started to move to the second item that Greg put on the list — that is, implementing these on the branch and using that to fill out the gaps that we still have in the prototype. So I have the building from sources.
B: What we've been calling vendoring is working, and I integrated that on top of Greg's prototype; but I think it becomes a very large change to bring in at once.
B: Then there's another small API that uses the loaded assembly to cache the delegates — the activity stub then works with those delegates — and then I bring in the sources from the .NET runtime that implement that, for when we fail to load the assemblies. I'm guessing four, six, seven PRs to bring all of this in small bites, but I think that is the way to make this reviewable, so everyone can understand each part. I think I would follow that if it makes sense, but —
D: There is a bunch of things in my mind that I wanted to change, and maybe it doesn't make sense to review them before I change them. They're related to — so right now the library loading is based on, you know, this file copy; but we're moving away from this. So once we start doing that — I think the first thing that I would suggest is the whole vendoring, right.
D: So that's definitely a good PR. And then the second part is library loading — even if it's not activated, for library loading there is still some code necessary, right, because —
B: Yeah — so what I was thinking is: build on top of what you already started, using the same kind of pattern for the steps and doing the logic on that. But, for instance, at first this activity stub just works with the loaded assembly, and I bring the source version afterward. Basically it relies on either the application being on a runtime that supports DiagnosticSource, or the application having a reference that was already — or not yet — loaded. You know? Okay.
D: And we can do that. But either way, I'll follow up with the logic for the other thing soon anyway. So, review-wise, sure — that works.
D: Sorry — PR-wise it makes sense, let's do it; but I'll follow up with the other logic soon after.
B: Yes, yes — and perhaps we can already discuss and point things out. I think doing it this way we also have a chance to change stuff more piecemeal, based on what we find. Okay, sounds good. Another thing — Zach just updated: he is already working on the re-org of the folders, and basically we're still on the same plan. As soon as it's out, Zach, I'll take a look at the datadog folder in the Datadog repo and go from there.
F: Yep — just gonna work through a couple of build changes, and then I expect to get that merged within the next couple of days.
B: Sounds good. One smaller thing that I would like to do: we have a few branches on the repo right now, and a few issues. I'm gonna do a scrub on those, to see if there is stuff that we can clean up or delete, or commit some of the PRs that are there and make advances. It's just housecleaning, to be sure that things are clean and we can keep moving as we start with these PRs.
D: One question about this: where are we currently — how far behind is the OpenTelemetry repo now, vendor-wise?
B: How far behind — you mean the whole repo? I didn't pull anything since the initial replication. Are you planning to do that? So — it's very far behind right now.
B: To address it, yes — so I was thinking about it. As I mentioned, I think, last week, my idea was to wait for this big re-org, pull the last commit before the re-org, and then pull the re-org. You know? Okay.
D: Sounds good. And are you okay with working on it once the re-org is committed into Datadog?
D: Sorry — so, once the re-org is committed into Datadog, did you plan to work on this pulling? Are you okay with doing that?
B: Yeah, no, it's okay for me to do it. I'll look to people from Datadog, and you, to review and approve on the OpenTelemetry side. Okay — sounds good.
B: And I think now we get to the meat here, which is discussing the GA requirements.
B: I don't know if Eric has a starting point, but what I've been — for the libraries it's very specific, because they have a table of features for instrumentation. I didn't see anything like that; I didn't go through all of the specs in the last week or so, but I didn't see anything like that.
B: So I would say, off the top of my mind: of course, instrumenting the bigger targets like ASP.NET and ASP.NET Core; having the exporters that are required by the client libraries — that is, OTLP, Zipkin, Jaeger; and having metrics primitives. That is, off the top of my mind, my criteria to eventually declare this GA.
G: Yeah — so I was just intending... obviously, you know, Craig's sharing the section here; I went through and took a stab at things. This is very much pulling from the previous list.
G: That was already there, but the idea is: I just wanted to further the discussion and really give us a high-level list of things that need to happen before GA. Then we can figure out the stack-ranked priority of that, and breaking it up, and potentially talk about whether we have beta releases or anything like that. But I think at this point it's just very much a high-level discussion, and I think we should flesh this list out some, and then from there —
G: I think we should figure out what the priority for these is — in terms of: does this make the cut list for GA or not — and then from there we can proceed with our plans.
D: Yeah — just one thing that I'm spotting here that I think is very important to me, as a whole architecture: when we talk about configuration, we should make it explicit which one we are shooting for — configurable by the customer, or by the vendor — because those require very different configuration strategies.
D: For example, by the vendor: it means that "configurable enough" could be — say an exporter implements some interface, and say New Relic, as a vendor, would like to have a tracer that sends data to the New Relic backend, as an example, right. Then they would simply take their class that implements that interface, and in the place in code where you new up that class, you just say new NewRelicExporter, whereas Splunk would say new SplunkExporter.
D: It doesn't have to be a configuration-based thing, because that can actually be easiest and most performant — and the code can be way simpler, without spending cycles on dependency injection. Whereas, if it's configurable by the customer, then of course you have to have configuration, and validation of the configuration, and then somebody needs to support all the different possibilities that the configuration can allow, and all of these things. So, you know, we should just be clear.
B: For each one — I think that we should try to follow the same art that's been established for Java and Python, but of course we are a different technology. For instance, Java has this at the level of the user configuring: you can choose the exporter, if I remember correctly, just by setting environment variables. I'm not saying that we should do exactly that, but I think we should try to look at their models to see —
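The Java model B mentions — picking the exporter purely through environment variables — might look like this minimal sketch. The variable name, the default, and the toy exporter registry are assumptions for illustration, not the actual spec'd configuration surface.

```python
import os


# Toy exporter classes standing in for real ones.
class ZipkinExporter: ...
class JaegerExporter: ...
class OtlpExporter: ...


# Hypothetical registry mapping a configuration value to an exporter class.
EXPORTERS = {
    "zipkin": ZipkinExporter,
    "jaeger": JaegerExporter,
    "otlp": OtlpExporter,
}


def exporter_from_env():
    # The variable name is illustrative; the real spec'd name may differ.
    name = os.environ.get("OTEL_TRACES_EXPORTER", "otlp")
    return EXPORTERS[name]()


os.environ["OTEL_TRACES_EXPORTER"] = "zipkin"
print(type(exporter_from_env()).__name__)  # ZipkinExporter
```

The user never touches code: redeploying with a different environment value swaps the exporter.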
B: — if we can keep these things consistent, you know; because one of the goals for OpenTelemetry is: if you know one language, you should have a grip on what happens in a different language. I would say that instrumentation is harder to do that for, because the way that we instrument is very different from, for instance, Java; but at least in principle we should look at what these other SIGs' instrumentation already achieved before us.
D: Okay, I have no opposition to this — to me it's way lower in priority, too. Because, for me, when you guys ship: essentially we have a part that is used by everybody — that collects telemetry, packages it, and then prepares it for serialization — and then, once things go, you hand it to your component that serializes and sends it to the appropriate place. And I would like to reduce —
D: If we have a customer scenario where an actual person who wants to use OpenTelemetry technology wants to be able to configure this at configuration time, then it becomes a good conversation; but I would like to avoid the complexity for the sake of just being able to do it. So, say — essentially, for all your customers, at least for Datadog, the way it works is: a customer comes and deploys the tracer on some infrastructure.
D: It can be a VM, it can be some cloud platform-as-a-service — and they have installed the tracer because they are a Datadog customer. So that means the data needs to flow into our backend, in whatever way is appropriate. It's not like they will first install the tracer and then decide, well —
D: — do I want to be a Datadog customer or a Splunk customer, right? So, in that case, we make it super easy for them: install the thing, and the data flows.
H: Yeah — so, Craig, perhaps another angle to look at, other than the consistency between the different languages: if we ignore all of the different vendors and just go with the pure open-source OpenTelemetry approach, where they don't want anything vendor-specific — right now OpenTelemetry has kind of two different places to send your data, perhaps three. For traces, at least, there's Jaeger and Zipkin; but then there's also the OTel Collector, which could be another destination where people send data in aggregate. So, at the very least, there may be a desire to have some sort of configuration on whether or not they want to use Zipkin or Jaeger or the Collector as a destination for traces, with just the vanilla OpenTelemetry agent.
D: So that means somebody doesn't install the New Relic version of the agent — they install the pure OpenTelemetry one. I see; that makes sense.
G: Yeah, that's correct. We've had customers ask — that's kind of specifically around the OTel Collector use case there — but they want to be able to install the agents without any vendor-specific code, and then it's just the Collector that will have the vendor-specific exporter.
D: Can we then have a two-level injection point here, so that we can mix and match both scenarios? For example: say we have the exporter that implements some interface — let's call it the exporter interface — and then we have one implementation that is, say, a New Relic exporter, that is whatever you guys need; and then we have another one for Datadog, and Splunk; and then we have another one that is called the OpenTelemetry exporter, and that takes all the configurations and covers this.
D: Not a specific Zipkin or OpenTelemetry Collector thing, but the OpenTelemetry thing that then takes configuration — basically, you see what I mean. So there is one more indirection here, so that when a vendor — because, for me, for our customers, I want to make things as flexible as necessary but as simple as possible, right. So I don't want to have them deal with a configuration thing that is not necessary.
D: So that way, if we actually release from the OpenTelemetry repo, it would by default compile with the one that is OpenTelemetry — the one that does read the configuration. But when a downstream vendor wants to, rather than have that thing use the configuration to point to the vendor-specific one, we would just create a different interface implementation and avoid even that configuration in the first place. Would that work, do you guys think?
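D's two-level injection point could be sketched roughly like this: one exporter interface; vendor builds that hard-wire their own implementation (no configuration machinery at all); and a default OpenTelemetry build whose implementation itself reads configuration to pick a concrete backend. All class and key names here are hypothetical, invented for the illustration.

```python
class Exporter:
    """Level 1: the one interface every exporter implements."""
    def export(self, spans):
        raise NotImplementedError


class DatadogExporter(Exporter):
    """Vendor build: hard-wired, no configuration for the customer to touch."""
    def export(self, spans):
        return ("datadog-backend", spans)


class ZipkinExporter(Exporter):
    def export(self, spans):
        return ("zipkin", spans)


class ConfiguredExporter(Exporter):
    """Default OpenTelemetry build: reads configuration and delegates."""
    BACKENDS = {"zipkin": ZipkinExporter}

    def __init__(self, config):
        self._inner = self.BACKENDS[config["exporter"]]()

    def export(self, spans):
        return self._inner.export(spans)


# A vendor distribution just news up its own class (no config machinery);
# the vanilla distribution news up ConfiguredExporter instead.
vendor_build = DatadogExporter()
vanilla_build = ConfiguredExporter({"exporter": "zipkin"})

print(vendor_build.export(["span"]))   # ('datadog-backend', ['span'])
print(vanilla_build.export(["span"]))  # ('zipkin', ['span'])
```

The rest of the pipeline only ever sees `Exporter`, so both paths plug into the same second level without the vendor path paying for configuration parsing or validation.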
B: Just two seconds, on something that Eric said: I do hear, from some sizable number of customers, that they want to move away from vendor-specific stuff — to keep that at the minimum possible. So, for instance, the Collector gives you that; because then, basically, your configuration — except for the last exporter on the node — is the same for any vendor that you use, or even if you use, say, Zipkin or Prometheus.
G: Yeah, and to me it makes sense too. I mean, I think we should identify which of those are supported when we go the OpenTelemetry route. I think it's pretty obvious — it's the OTLP and Zipkin and Jaeger ones — but we should probably, obviously, firm this up and confirm the details there. It makes sense that it would be the same things that we're supporting that are —
G: — you know, part of the SDK, that we actually have exporters for. Obviously we've moved the vendor-specific exporters out into the OTel contrib repo, but there are still the ones for Zipkin and Jaeger on the SDK side of things; so I would think we would probably want to cross-match those.
D: Yeah — I think we're on the same page. It's just important that vendors who have customers who don't need this flexibility can avoid the runtime overhead and the complexity of dealing with the configuration; but I think this double indirection solves this pretty well. Okay, that's good.
G: What about some of these — can we talk about some of the stuff at the high level, say around the profiler improvements? Actually, Greg, I wanted to check with you: I remember, a few weeks back — I think it's actually probably longer than that — we were talking about targeting methods instead of call sites, and that there's somebody from Datadog who's going to be working on that; but then they were on vacation, and I haven't heard any more about that.
D: Curious where that was. Actually, there is a design doc, which we are in the process of implementing. So yeah — this sprint I made zero progress on the whole OpenTelemetry things; I didn't even share the doc that I promised last week. I was the DRI, so I'll share this design doc — definitely, for sure.
D: This also depends on it being completed — that's kind of the same thing, right? So ReJIT — no, I mean, it's a separate effort, but it's very related, right.
D: This one is also — so, I agree that all of it is important. In terms of order — I would say this, then this, then this, then this — is probably how it will actually be done, right? With you guys: does it make sense from an implementation perspective?
G: Yeah — so, on that: irrespective of whatever order makes sense from the engineering perspective, I would just like to ask, are those all things that we think are requirements for GA? Or some of these things — like supporting ReJIT, and maybe the engine being disabled — is that something that we really need for GA or not?
G: I don't have a sense of what the performance impacts for those are, and if that's something that we're saying is a requirement. So I'd like us just to understand, and make a statement on whether these are required or not.
D: I see — so I would suggest ReJIT is not, like, "supported" or not. ReJIT is a feature of the profiling APIs that we are currently not using, but that we will use in order to react to other runtime events, to handle the things that are required for the call-target instrumentation.
B: I have the tendency of thinking that we don't need these perf improvements to declare GA itself. We can give high priority to them; but in terms of GA, I would like to see more of the functionality, even in the current format.
B: We are aware of those; but I would say that, from my perspective, I'm prepared to say that we reached GA, that we are aware of this backlog, and to push for that right after. These are very desired features, but I'd say that, for reaching 1.0 — our first version — I could live without them.
D: So, in terms of the performance improvement that will come from here —
D: Yeah — I mean, I think the question there is what we believe is more critical for the customer, like a real-world enterprise. So, for example, say you use the Zipkin exporter, right: as an enterprise, you don't care whether the thing also supports Jaeger. You say, this is our scenario, we use Zipkin — and I chose Zipkin as a random example — this is our exporter, we want this supported, and we want good performance. Otherwise —
D: — why would we even use this thing? So that's why, with all these features: no single customer wants all of them. Customers want one of them, and our question is which ones we want to offer, so the customers have some reasonable choice. But —
B: But these, for instance — I would see them not from the perspective that a customer wants a specific one of those, but in the sense that it gives a broader adoption perspective for OpenTelemetry. Because it's: okay, I'm using Zipkin, but I can still use OpenTelemetry; I'm using Jaeger — and Jaeger, for instance, doesn't have anymore the collector that they used to have; nowadays they are using the OpenTelemetry one.
B: So, in that sense, it's a kind of parity; and, if I'm not mistaken, it is a requirement from OpenTelemetry itself, for any piece to declare GA, that you have some of this support — because then it becomes a piece that a lot of people can put in their infrastructure, when you have that support.
B: I understand what you are saying about the performance from the perspective of one customer — a single customer — because they are worried about the performance; they already have something that exports in whatever format they specifically need. But then, from the perspective of OpenTelemetry itself, it's to have that broad approach and distribution.
D: Yeah — so, just don't get me wrong: I have zero opposition to requiring these for GA, absolutely not. It's more about where you want to invest first; because if, say, we support all these things but the performance is abysmal, then no one will want to use it anyway.
D: Now, the performance is not abysmal — it already works fine — but what's more: this one specifically, before we kind of ride on this — the call-target instrumentation would change so much that, if we release version one before we do it, then the next version, once we do it, will not be 1.1; it will be version two, because it changes the entire engine, right, and —
D: So, because of that, I would suggest that this one we do require for GA, because it's such a big architecture change.
B: Makes sense — in the context that you guys bring this up, it makes sense. Can we bring this discussion in parallel to the things that we're doing for ActivitySource, so —
D: Yes, yes — these ones I have a less strong opinion about. I can tell you that, from a vendor perspective, I would certainly invest first into this and then that; so it depends a little bit on what hat I wear, because each one gives a different perspective.
D: If I go, "I just care about the success of OpenTelemetry", right — what you said, Paulo, I agree. If I say, at the end of the day: I'm a vendor, and I am doing OpenTelemetry because I believe that the right way is to share the data collection, and to have, like, you know —
D: — we compete on backend features, and we all benefit from customers being able to transition between all these technologies, right. Then I start saying: okay, what would the real customers ask me? And I'll say — okay, well, what do they ask me for more? And today they ask me more for performance than for more exporter possibilities.
D: So I'm going: what would I like? I believe that I want to share the contribution — so what would I like to contribute sooner, performance or exporters? And to me it's clearly performance, because I know that my actual customers want it.
B: Yeah — I think, especially in the context of the changes that these require on the instrumentation code, I agree with that. And, as I said, let's try to bring that discussion up as soon as possible, so we can start to work on it. But perhaps then we say that this first one is our requirement for GA — although it's not an OpenTelemetry requirement for GA, it's one for our SIG.
B: And I doubt that we are doing these for GA, except for traces and metrics. You know, logging is coming — the spec is closing, or closed recently — and I don't think there will be APIs or anything implemented by OpenTelemetry for GA this year. So, for instance, logging is not coming. Just one thing: I think the thing that comes with logging is injecting the trace id and span id.
B: This is a part that I see in a lot of the tracing libraries — and Datadog already has that in their repo. So this is something that I think is there, and that we keep, even with the eventual changes that we make.
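The trace-id/span-id injection B describes is commonly done by enriching each log record from the ambient span context. A minimal sketch using Python's standard logging module, with a hard-coded stand-in for the "current span" (a real tracer would read it from ambient context, e.g. Activity.Current in .NET):

```python
import io
import logging

# Stand-in for the tracer's notion of the current span; values below are
# just example hex ids, not produced by any real tracer.
current_span = {
    "trace_id": "4bf92f3577b34da6a3ce929d0e0e4736",
    "span_id": "00f067aa0ba902b7",
}


class TraceContextFilter(logging.Filter):
    """Enrich every log record with the active trace and span ids."""
    def filter(self, record):
        record.trace_id = current_span["trace_id"]
        record.span_id = current_span["span_id"]
        return True  # never drop the record; we only annotate it


buf = io.StringIO()
handler = logging.StreamHandler(buf)
handler.setFormatter(
    logging.Formatter("%(message)s trace_id=%(trace_id)s span_id=%(span_id)s")
)

log = logging.getLogger("demo")
log.addHandler(handler)
log.addFilter(TraceContextFilter())
log.setLevel(logging.INFO)

log.info("handling request")
print(buf.getvalue().strip())
# handling request trace_id=4bf92f3577b34da6a3ce929d0e0e4736 span_id=00f067aa0ba902b7
```

Because only the filter and format string change, existing log statements pick up the correlation ids without being touched.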
H: Hey Paulo, correct me if I'm wrong, but in the last few SDK SIG meetings it sounds like, for the .NET SDK, traces will be GA, and it's likely that logging would also be GA at the same time — even though Cijo's not done with that work.
B: The logging — I think the integration with traces is part of what they are working on. And there is a different thing, which is that .NET already has the ILogger interface — that's the standard — but the question is what .NET will do when OpenTelemetry has (if it eventually has) some API. You know — so I think we can catch up on the PRs.
B
I
I
I
have
not
been
following
exactly
what
they
are
doing,
but
perhaps
I
misunderstood,
but
that's
what
I
understood,
that
that
is
work
to
publish,
trace
id
and
spam
id
via
ilogger,
and
there
will
be
work
to
kind
of
support
v
open
telemetry,
but
that
I,
if
I
understood
correctly,
was
not
coming
for
for
dj.
But
perhaps
I
I'm
mistaken.
H: Yeah — so my understanding of what Cijo was talking about is that the work for enriching the log data is already done; it's already part of the SDK, so it's already adding the trace information. And the part that's actively being worked on is being able to send this log information through, like, the OpenTelemetry pipelines, so that it can be sent somewhere else.
H: I don't think there's any default destination to consume those logs via OpenTelemetry.
B: Yeah — so thanks for correcting me on that. Then I also suppose that that is part of the OpenTelemetry protocol, because the OpenTelemetry protocol is where they were adding the logs — so that makes sense. That makes sense, yes; I was thinking in terms of the API. But I see.
B: See — yeah, so then I think, for logging, I will have to double-check; but I think that it is not being required for the OpenTelemetry GA itself. But then, of course, it becomes something that you have to add down the line, you know.
G: Yeah — and just on the metrics side of things: my understanding on that is that in the November time frame, when OpenTelemetry GAs, it's just going to be traces, and metrics is going to be in a beta state. So it's kind of the same question there: what level of support, if any, do we want to have for metrics at GA — or do we just say it's none, because it's in a beta, or what?
B: Yeah — I tend to see that we are already looking at a very full plate ahead of us; so my tendency would be, if we can get this out of scope in the short run, to get it out in the short run and do it later, you know.
G: Yeah — and I think, you know, I'm comfortable with us focusing on getting a clear scope there, and cutting scope where we can. So I can come back to this later and mark the things that we're considering eliminating for GA, and we can finalize that later.
G: But what about — actually, we skipped over the pluggable context-propagation stuff. We've been talking about this some before, and then with the W3C Trace Context support and stuff; and I'm just wondering if that's something that you think is going to be a requirement for GA — and, if so, kind of what our plans are on that.
B: If I remember correctly from last week, I think that that should be a requirement, because there are some propagation formats that are custom; so we need to be able to plug something in — at least, perhaps, in a vendor's distribution — so people can interoperate with other stuff.
G: Okay — so then we would definitely be considering that a GA requirement. And would we, by default, support Trace Context out of the box, then?
D: I think this can also be pluggable, in the same way — where we have an interface, yeah.
D: For Datadog, what I would like to do for our implementation of it — what we discussed last time — is essentially to propagate the W3C and the Datadog contexts automatically, and —
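Propagating both W3C Trace Context and the Datadog format at once, as D proposes, amounts to writing two sets of headers into the same carrier. A minimal sketch: the `traceparent` layout follows the W3C Trace Context spec, while the `x-datadog-*` header names are shown only as an illustration of a vendor format.

```python
def inject(trace_id_hex, span_id_hex, carrier):
    """Write both propagation formats into one carrier (e.g. HTTP headers)."""
    # W3C Trace Context: version-traceid-spanid-flags.
    carrier["traceparent"] = f"00-{trace_id_hex}-{span_id_hex}-01"
    # Vendor-style format: decimal 64-bit ids; for the trace id we keep only
    # the lower 64 bits of the 128-bit W3C id, as narrower formats often do.
    carrier["x-datadog-trace-id"] = str(int(trace_id_hex, 16) & (2**64 - 1))
    carrier["x-datadog-parent-id"] = str(int(span_id_hex, 16))
    return carrier


headers = inject("4bf92f3577b34da6a3ce929d0e0e4736", "00f067aa0ba902b7", {})
print(headers["traceparent"])
# 00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01
```

A downstream service that understands either format can then extract whichever set of headers it recognizes, which is what makes the dual injection transparent for customers.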
A: A question about something that I haven't seen mentioned anywhere: is there a list of frameworks or runtimes that we are going to support for GA? Like 4.5 — or is it just 4.6.1 and above, and Core?
B: I thought we had this listed somewhere; but, if I'm not mistaken, it was the 4.5-something — I don't remember the number — that is the one supported by Microsoft, the 4.6-something also supported by Microsoft, and then netcoreapp from 2.1 on, and —
B: Yeah, I think we had that listed, but we should follow up and put that explicitly somewhere, so it's a hundred percent clear.
A: I think it's 4.5.2 — well, according to the downloads page for the Framework: 4.5 and 4.5.1 reached end of life in 2016, but 4.5.2 is still supported.
D: Yeah — so 4.5.2 now, also, I think. But currently we're supporting 4.5 already, right? And we have a bunch of customers actually using it. So, talking about reach: even if Microsoft may not support it, if significant customers do want it, then potentially we should actually support it, for now.
I: So the servicing releases may go out of support, but they're kind of logistically viewed as the same release. So 4.5.2 is just 4.5 as far as Microsoft is concerned; and if there's ever a security change or something, then we'll release a 4.5.3 — but logically they're all 4.5.
I: Right, yeah — the only things that are going to go in those minor point releases are security fixes and, like, really horrible bugs that need to be fixed. There's not gonna be any features or stuff like that. The 4.5-to-4.6 jump will include features, but the servicing releases will be really minor changes, and there shouldn't be anything breaking: you should be able to go from 4.5.1 to 4.5.2 with no issue.
G: So — oh, we could talk a little bit about the — I think it's a relatively minor point, but I think we'll maybe have some discussion around this — the deployment line item there. I think we kind of talked about some of this in terms of making the exporter pluggable, and what that means, and whether we're gonna have some sort of default that works with OTLP or Zipkin or whatever; but also, like —
G: — whether we're going to have any sort of deployment artifacts — whether we're creating an MSI installer, or a zip, or a NuGet package — or if that's something we're punting on, for vendors to do. You know, I think we should kind of figure out what our strategy is.
D: So — let's consider... I don't know what the good answer is; I'm just thinking aloud, right. So let's consider somebody who installs a New Relic-branded thing: they will be using a New Relic deployer, like an installation script, right. So now let's consider somebody who is using a pure OpenTelemetry thing: this will be an organization who just configured some kind of OpenTelemetry data-collector component within their service.
D
D
Do they need that? Because, like, once we start dealing with that, then the whole configuration thing needs to be somehow dealt with.
D
B
What I think: for people that are using it already, it's as you describe it. But, for instance, people that use Windows: there are people that have MSIs in their, let's say, CI deployment pipeline.
B
So if you wanted them to kind of migrate to OpenTelemetry, then we need to have those MSIs. You know, we need to have the packages for Linux in the same way.
B
In the end, they do the same thing that you are saying, but if you want to facilitate the migration of people, you have to kind of have those. You know, migration from where?
D
B
From whatever they have right now. Ours is based on Datadog's, so it's basically the same MSI packages. So for us to move them from ours to OpenTelemetry, we would like to just say: hey, instead of using this MSI, use this MSI, replace the variable names with whatever is the equivalent in OpenTelemetry, and be done.
D
That makes sense. I am concerned with the effort required to build and maintain these components, because that means every time we do something, now we need to go test it. Like, say we have some kind of change that might affect things, right: so now every merge needs to go validate this, and so on. So really it will be just like you say, but are we not biting off too much? Do we have the capacity to do this?
B
I would hope, and be optimistic, that perhaps you could, but that's an open question, and also...
H
Yeah, because, Greg, like you said, it's really just a collection of files and some environment variables. And so, if there's a simple way that we can do it for just a plain OpenTelemetry release, then let's do that. And each vendor might have their own installers, because we want to streamline setups in different environments, but maybe, for the OpenTelemetry experience, the simple release is enough.
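As a rough illustration of that "collection of files and some environment variables": a hedged sketch of what an installer ultimately has to arrange for .NET auto-instrumentation, assuming the usual CLR profiler attach mechanism. The GUID, paths, and endpoint below are illustrative placeholders, not values discussed in this meeting.

```shell
# Sketch: what an MSI/zip installer for a .NET tracing agent typically sets up.
# It drops the agent binaries somewhere and points the CLR profiler hooks at them.
# All concrete values here (GUID, paths, endpoint) are illustrative placeholders.
export CORECLR_ENABLE_PROFILING=1
export CORECLR_PROFILER='{00000000-0000-0000-0000-000000000000}'
export CORECLR_PROFILER_PATH=/opt/tracing-agent/Agent.Profiler.so
# ...plus whatever exporter configuration the agent reads, e.g. the standard
# OpenTelemetry OTLP endpoint variable:
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
```

An MSI would typically write the Windows equivalents of these into the machine or IIS environment; a zip distribution leaves this step to the user, which is exactly the trade-off being discussed.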
D
I sort of agree. I think the ideal would be, like, Linux packages and an MSI for Windows, but that means that we have to kind of make sure it works.
D
So it's better not to offer it than to offer it and have it broken.
D
Yeah, so I agree with Chris. And then there is also another whole space of: how do we do this for platform as a service? Because, specifically for .NET, right: for non-.NET stuff that lives in Linux, you know, it's either a VM or Docker containers, but for .NET you have all these platform-as-a-service offerings, especially by Microsoft, right.
D
So once you start running things in, I don't know, an Azure website, right, suddenly you should deploy it there, so you need, like, extensions. And I can only speak about Azure, I'm not an expert in other clouds, so you guys correct me if I'm wrong. But what's even more important for customers is: is it an MSI exporter, or is it, you know, whatever is the right Azure VM extension to deploy things on an Azure VM?
D
I don't know, but it's at least a consideration. So we would go into this whole space where it's not clear whether we actually want to be in that space, given that we don't have all that many resources to contribute to this. So maybe, by explicitly saying that we're staying out of it, at least for now, we might, you know, not open up this can of worms.
B
D
I wonder what, like, New Relic and Splunk are doing for this: how do you deploy it on platform-as-a-service offerings? Your solution right now, the New Relic technology: when I want to just explore it or deploy it on Azure, what do I do?
H
Yeah, so we've got a couple of different approaches depending on what the needs are, but there's basically two options that we provide: one is the extensions that you manage, so some sort of Azure extension; the other is via NuGet package.
H
B
So you deploy the profiler via NuGet package?
H
Yes. And so, depending on what version of NuGet they're using, that determines how streamlined that process is. I think older versions of NuGet allow you to execute some sort of script upon install, which can then try to set up some of the configuration options for you, whereas I think with the newer ones you can't run the script.
D
And what do you do in Splunk? Because you're Datadog-based, so are you...?
B
It depends on the cloud. There are some specific solutions where we build the images for customers on top, so they can kind of go for buildpacks or something, for, like, Cloud Foundry or something where we already provide the images for them.
B
For Azure we don't have offerings right now, so we are going to be looking eventually into adding extensions for Azure. But that's the path that we're thinking: an Azure extension.
D
For the OpenTelemetry data collector, what is the wire protocol, the lower-level protocol, between the SDK and the collector? Is it TCP, or is it something else?
B
At the lower level, OTLP is going to be gRPC over TCP. So, gRPC over TCP; yeah, gRPC is HTTP/2, right. So you... but...
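To make the layering concrete: gRPC rides on HTTP/2, which runs over a single TCP connection, so the collector's gRPC port has to be reachable from the app. A hedged sketch using the standard OTLP exporter environment variables; the hostname is a made-up example, and 4317 is the conventional OTLP/gRPC port.

```shell
# Point an OpenTelemetry SDK at a collector over OTLP/gRPC.
# Layering: OTLP -> gRPC -> HTTP/2 -> TCP, so this only works if the app can
# open a TCP connection to the collector's gRPC port (conventionally 4317).
# "collector.example.internal" is an illustrative placeholder hostname.
export OTEL_EXPORTER_OTLP_ENDPOINT=http://collector.example.internal:4317
export OTEL_EXPORTER_OTLP_PROTOCOL=grpc
```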
D
The network-level protocol to talk to the OpenTelemetry collector is TCP.
D
Yeah. The reason I'm asking this is: as you go to platforms, you cannot really rely on talking via TCP to anything that is also within the client. You can always call your back end, right...
B
D
...doing HTTP, but you can't always; ports are closed and things like that.
B
So we had some situations where we wanted UDP. In that case, with the collector, what we did is use Jaeger, because Jaeger has an agent that receives UDP. The collector already supports that, and then we just ask the customer to configure their library to export to the Jaeger agent. That's UDP. But UDP is still...
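The workaround described above corresponds, roughly, to enabling the collector's Jaeger receiver, which listens for the Jaeger agent's UDP protocols. A hedged sketch of the relevant OpenTelemetry Collector configuration: 6831 is the conventional Jaeger compact-Thrift UDP port, the exact keys may vary between collector versions, and the logging exporter is just a stand-in for a real backend exporter.

```yaml
# Collector config sketch: accept spans over UDP the way a Jaeger agent would.
receivers:
  jaeger:
    protocols:
      thrift_compact:            # Jaeger's compact Thrift protocol, carried over UDP
        endpoint: 0.0.0.0:6831   # conventional Jaeger agent UDP port

exporters:
  logging: {}                    # stand-in; a real setup would export to a backend

service:
  pipelines:
    traces:
      receivers: [jaeger]
      exporters: [logging]
```

The client SDK is then configured with a Jaeger exporter pointing at this UDP endpoint, which sidesteps blocked TCP ports but, as noted in the discussion, still requires UDP to be open.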
D
I see. We have difficulties in Azure where TCP and UDP connections are closed for security reasons, so this will be something that we should potentially add to GA, or at least discuss; maybe it may be added to a list of things that we are aware is a problem, and we decide whether or not it should be in GA. But essentially, here's the thinking again, and, you will notice, I'm sort of thinking about this in terms of customers rather than, like, OpenTelemetry technology generally.
D
Of course, a lot runs on VMs, but the Azure platform-as-a-service market is huge for .NET, right. And within that market there are several offerings, but the biggest one is Azure App Service. I'm not sure whether it's bigger than all the other platforms together, but if you do use an Azure platform, then out of every single platform-as-a-service offering it's certainly the biggest for .NET, like, across all clouds.
D
It's probably smaller than all the others taken together. Now, anyway: if we want to support people who run on that one using OpenTelemetry and our scenarios, like all these exporters, I would call it out very specifically on the roadmap, because it will require some testing and some problem solving. Because if you need to deploy a data-collector-like technology, some sort of intermediate component, rather than just sending traces straight to your cloud, within that kind of environment there will be tricky situations.
D
So if we want to support this, we need to have this as an explicit item to be done. And if we don't want to support this, I think it's such a big market that we should call out explicitly that we're aware of it and we choose not to support it for GA.
D
I don't know how they solve it; they might not be solving it, or they might just say, we don't care, it's the SDK, it's not our problem, the customer needs to somehow do this. I'm sort of thinking in this end-to-end scenario where I want somebody to be able to do the whole thing, because the SDK has the luxury to say: I am the SDK and my universe ends at the boundary of the application.
D
D
Then you have to solve it somehow. I don't know; we can take either stance, I'm okay with either, but I would like it to be explicit, so that we don't have this feature creep where suddenly people request a solution for it and we are, like, trying to do it, but it's not quite working, and so on and so on.
G
Yeah, so let me just check to make sure I'm clear on this and we're kind of, like, in agreement on this. So in terms of, you know, installation methodology: we've agreed that we're not going to do anything more advanced, at least for GA, like, you know, an MSI or the Linux distribution packages, but we think that it may be reasonable to offer it.
G
D
Yes. Maybe one thing I would like to add as a consideration for the distribution: maybe we publish a blog post from OpenTelemetry, wherever appropriate, that describes, you know, on a high level, how you take the zip, the distributable, and make your application work end to end. And it would really be, like, a blog post that describes an end-to-end setup, picking some open-source back end.
D
I think for the blog post one is enough, but, since it comes from OpenTelemetry, we can say, like: it's not about the back end, but it needs to be end to end. Like, you take your back end, you take your application, and you do blah blah blah, and then it works. So I think if we can publish a blog post like this and then just distribute a zip archive, that would be good enough.
G
D
As far as cloud is concerned, I would suggest that we make a list, essentially, of all the platform-as-a-service offerings across all the clouds that we care about, and I'm guessing it's AWS and Azure and anything else that you guys believe is important. And we say: these are supported, and for those that are supported we actually, potentially, publish a blog post...
D
...on how to, again, do an end-to-end installation for that particular platform-as-a-service offering, or we say: just do the same thing as in the other blog post. And for the other ones we will say: well, it's an open-source thing; it might work, it probably does work, but we don't claim that it works, because something might be missing. And then we, as a group, in the process of writing the blog post, really make sure that it does work.
D
D
B
Yeah, I think we need kind of a discussion about the requirements and what you want to support, because that word is kind of really big if you're thinking cloud deployment, cloud installation.
D
D
I just really would like this to be an explicit thing, where we don't just say, here is a zip, and people then say, oh great, I am using an Azure website, I'm installing this; and then they spend two days trying to get it to work and say, well, OpenTelemetry sucks, because we couldn't get it to work.
D
So I think it's totally okay if we exclude it, but we should make it explicit. Yeah, that sounds like a fair point to me.
B
All right, I think we went over the time. I can continue, Eric, if you want to talk more, but I don't know if all the other folks can continue.
G
Yeah, let's go ahead and call it. I've got another meeting; luckily it's a one-on-one, but yeah, I've got to head out.
D
Then we can look at the list offline and leave comments.