From YouTube: 2020-07-15 meeting
D
Go ahead first, I'm not on the agenda here. I created a GA burndown project; it's just a GitHub project to track where we stand in terms of GA readiness. I don't know who has access to add issues to projects. I think only approvers do, so feel free to add issues to this. If you're not an approver, just go ahead and comment on an issue and I'll add it, but any issues added to the project are automatically added here.
D
No, okay. I think it's relatively straightforward. One thing I did notice when I created this: I made this Approved column, and if a PR gets enough approvals, then it's automatically moved over here. And I noticed our project settings right now technically only require one review, which is why this is over here, and I was wondering what people think about changing the project settings to match the contribution guidelines, which currently state that you need four approvals.
D
I could set it at three. Honestly, it doesn't... you know, we're pretty good about waiting for that anyways. The only reason I even noticed is because this automation moved this one over even though it only has one. I think I'm fine leaving the setting at three but still having the contribution guidelines say four.
D
I'm not gonna show the screen? Yeah, I'll show you. Let's see, the GA burndown. So the automation is very simple. Any issue added to the project is automatically put into this column; that's technically an automation. Any PR added to the project is automatically put into this one, and then once a PR gets the required number of approvals, it's automatically moved to this column. So the advantage of that is just that maintainers would be able to quickly see which PRs are ready to be merged.
D
So then, once they're merged, they're automatically moved to Done, and the tickets should close and automatically be moved as well. We'll see how well that works, but the automation options are simple, to say the least; there's not a lot of configuration available, unfortunately.
D
Is that better? Yeah, okay. So, these two reviews: this one is just an environment variable configuration, so it should be relatively straightforward, and I'd like to get that merged. This one's obviously a little bit bigger. It seems close to being ready to be merged, but it looks like, Bart, you had some open comments and concerns on this, and there's been a lot of discussion. I didn't really know what the state of those discussions was, and I was hoping you could let me know sort of what your thought process is here, like how...
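The environment-variable configuration PR itself isn't shown in the transcript. As a rough illustration of this style of configuration, a minimal sketch of reading settings from environment variables with defaults; the variable names here are hypothetical, not necessarily the ones in the PR:

```typescript
// Environment map type, so this works without Node-specific typings.
type Env = Record<string, string | undefined>;

interface Config {
  logLevel: string;
  samplingProbability: number;
}

// Read configuration from an environment map, falling back to defaults
// when a variable is unset.
function configFromEnv(env: Env): Config {
  return {
    logLevel: env["OTEL_LOG_LEVEL"] ?? "info",
    samplingProbability: Number(env["OTEL_SAMPLING_PROBABILITY"] ?? "1"),
  };
}
```

In a Node process this would typically be called as `configFromEnv(process.env)`.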
B
I described the current state, but I can just repeat it. Presently the spec only talks about a MeterProvider, nothing about a push controller, so we won't have a MeterProvider after this PR. It doesn't really matter, but it will be a bit weird if someone just looked at this one: you know, according to the spec we need to have a MeterProvider, and we don't have it, so...
D
So I think this is what he was trying to do: sort of merge these components together, because, well, this is a separate component, and it's a little odd that it's set up this way. I don't know if anyone has any better ideas for how, without necessarily merging these components together, they could still be separated but not have to rely on this very side-effect-y behavior.
D
The other PR I wanted to bring up is the baggage OpenTracing shim. Someone who's not an approver, not a regular contributor, created this PR a long time ago, back in March, but it's a relatively simple PR and it only depends on the API, so I think it should be relatively easy to get merged. I would appreciate it if people could review this, just so we have something, because this is required by the spec and it's something that we should have for GA.
F
Sure, so this is just a prototype I made ready so you can have a look and see how big the change gets, like how much needs to be changed and what needs to be done. I've tested this PR with the existing plugins right now, and it's working. None of the plugins have to be changed, and the only thing, I mean, after this PR...
D
That's true. So obviously there's still work to be done on this, but I think the overall design is good. It works for me; it seems like not a huge change, which I do like. I agree that "plugin manager" is a better name than "enabler", because you can also stop them, so it does more than just enabling plugins. Other than that, you know, I think this works for me generally. I don't know if anyone else has had time to look at this, but I think you're moving in the right direction.
D
Obviously, if we do that, then that sort of removes basically everything you've done in this PR, right, because that would be modifying the plugins to use a global. I think this would have been a good idea a long time ago, but now we have so many plugins that require it to be passed in that I think it's too late for this.
D
No, you know, this says it shouldn't work before Node 11, or 10 I guess, but our tests are passing on Node 8, so I haven't really looked into why that is. I do want to make sure that the tests are actually running, because we've had issues in the past where a package was completely skipped by the tests. So if you could please make sure that this is not an actual issue, or maybe find some way to explain why this is happening, I would appreciate that. Or are you working on new stuff, I mean...
F
Also, I have one more question, about the GitHub Action for checking the linter. I think the link that I sent... does that seem valid to you guys? It says that because the pull_request event is being sent from the contributor's fork, and the fork doesn't have the workflow, I think that's why it's not being run in the PR, yeah.
D
I think that's probably correct. It is running in your branch, though, right? Right, right, yeah, I think that's correct. That's a security thing, I think, because if it ran in the branch then I could, for instance, open a bunch of PRs against a project with actions that they have not approved, which would run and fill up their Actions queue. You technically only have a certain amount that can run, and things like that, on the free tier, so I think it's to prevent that from happening.
H
Caroline, yeah. So I wanted to touch base on the rename from plugin to instrumentation. It seems, based on the conversation linked in that issue, that we plan on changing the wording to "instrumentation" for new plugins, and then eventually, possibly, update existing modules, though not immediately. So I wanted to confirm with the group that for a new plugin I should be naming it "instrumentation", and that this is still where we stand, and see if there were any new opinions on that.
D
Yes, that is my understanding. I talked to the other maintainers about this at the maintainers' meeting two weeks ago, and as a group we decided, essentially, that this is probably going to be painful for some SIGs that have been using different naming schemes, but that in the end it's important that we name things consistently. So "instrumentation" is the way to go moving forward.
H
All right, that makes sense. Also, just to update on status: I'm finishing up the plugins for Koa and should have a PR out soon, and I was looking into other web frameworks to also build instrumentation for, looking into hapi. So if anyone has any thoughts on that, I guess, or feedback on that as a choice... there is an ongoing issue there, and there's been a little bit of conversation, but I just wanted to bring it up in case anyone had any thoughts.
E
I just wanted to bring it up because it's a PR that's been there for a while. In general I think it's in really good shape, but I think there was a lingering question about resource auto-detection, and then, I think, yeah, a comment about having a global shutdown. I feel like those are the only two issues, so...
D
So I'll add that to this, but I agree with you. I think this has been sitting around for a while. It has had quite a few comments, but I think it's ready for reviews, and we can possibly get it merged, and, you know, if changes need to be made incrementally, that's easy to do. I think the broad strokes are generally there. Yeah, I'm, I'm...
E
I did have a comment on detecting resources, in that if you do choose to auto-detect, you kind of have to auto-detect, then wait for that to resolve, and then start the SDK. So it might be easy enough just to add an auto-detect-resources flag to the SDK config that will do that during start for you, I guess.
D
Yeah, no, I mean, that would also be fine. You mean, if, like, SDK.start was asynchronous and did any waiting required? Yes, yeah, that would be fine with me. Also, I think it may already... actually, oh, it's not. So: make this method async, then make this method, I guess, synchronous, and add a flag that defaults to true for auto-detecting resources in the constructor and the configuration here. Was that what your...
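The idea being discussed here can be sketched roughly as follows. This is a hypothetical illustration, not the actual OpenTelemetry JS API: the names `NodeSDK`, `detectResources`, and `autoDetectResources` are assumptions for the sake of the example.

```typescript
interface Resource {
  attributes: Record<string, string>;
}

// Stand-in for an async resource detector (e.g. querying cloud metadata).
async function detectResources(): Promise<Resource> {
  return { attributes: { "service.instance.id": "abc-123" } };
}

interface SDKConfig {
  resource?: Resource;
  autoDetectResources?: boolean; // defaults to true in this sketch
}

class NodeSDK {
  resource: Resource;
  private autoDetect: boolean;

  constructor(config: SDKConfig = {}) {
    this.resource = config.resource ?? { attributes: {} };
    this.autoDetect = config.autoDetectResources ?? true;
  }

  // Async so callers can `await sdk.start()` and know detection finished
  // before the rest of startup runs.
  async start(): Promise<void> {
    if (this.autoDetect) {
      const detected = await detectResources();
      // Merge detected attributes, letting user-supplied ones win.
      this.resource = {
        attributes: { ...detected.attributes, ...this.resource.attributes },
      };
    }
    // ...continue normal startup (register provider, exporters, etc.)
  }
}
```

Usage would then be `const sdk = new NodeSDK(); await sdk.start();`, with `autoDetectResources: false` opting out of the detection step.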
B
A quick question: wouldn't it be possible, instead of blocking like this, to wait, for example, with exporting the spans until the resources have been detected? So you are not blocking the SDK; at the moment when you want to export the spans or metrics, you wait until you get the resources. That would...
D
It would just have whatever the first... because it would be the same reference everywhere. They're not copied; they're copied by reference, so it wouldn't necessarily matter. The exporter would just, the first time, maybe even on the first span, await the resource, I assume with some reasonable timeout, and then, I guess, store the resource in the exporter at that point.
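The alternative described here, an exporter that lazily awaits the shared resource on its first export, with a timeout, could look something like this. All names and shapes are illustrative assumptions, not the real SDK types:

```typescript
interface Resource {
  attributes: Record<string, string>;
}

// In this sketch every span holds the same pending resource reference.
interface Span {
  name: string;
  resource: Promise<Resource>;
}

// Resolve `p`, or fall back to `fallback` after `ms` milliseconds.
function withTimeout<T>(p: Promise<T>, ms: number, fallback: T): Promise<T> {
  return new Promise((resolve) => {
    const timer = setTimeout(() => resolve(fallback), ms);
    p.then((v) => {
      clearTimeout(timer);
      resolve(v);
    });
  });
}

class LazyResourceExporter {
  // Cached once resolved; public here only so the sketch is easy to inspect.
  resource?: Resource;

  async export(spans: Span[]): Promise<void> {
    if (!this.resource && spans.length > 0) {
      // All spans share the same resource reference, so awaiting the
      // first one is enough; time out with an empty resource.
      this.resource = await withTimeout(spans[0].resource, 5000, { attributes: {} });
    }
    // ...serialize spans together with this.resource and send them
  }
}
```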
E
Yeah, and I think, as we start using resources... they were pretty experimental, and we ported over largely what was in OpenCensus; that's kind of a starting point. So I could see, as we start to use these, needing to smooth out some rough edges, I guess. If it becomes clear that there are some better designs that could be had, I think we should capture that and make an issue for some possible improvements there.
D
The auto-detection code has merged. It looks generally in line with what we already have, but since you implemented our auto-detection, I just wanted to make sure that you were aware that this code has merged, and, you know, if there's no action required, feel free to just close this issue with no action required; that would be totally fine. I just wanted to make sure you were aware of it.
A
So I did that, so yeah, if any of the maintainers want to give it a look, or anyone, you know, not just maintainers, anyone who has feedback, it would certainly be appreciated. It's a little rough; I don't have a ton of them. Okay, one more thing: I marked it so this doesn't get merged. Thank you, I appreciate that. Yeah, I was just trying to make it more clear. I was thinking about the skull-and-crossbones emoji, but yeah. So it's out there.
A
It's a good question, totally fair. So Datadog's trace agent, which you can think of as, I guess, an equivalent to the Collector: you know, it's a daemon that listens for traces and flushes them to the intake endpoint, and that intake endpoint requires complete traces. The batch span processor just flushes spans in a batch, but those can be spans of incomplete traces, spans of any trace, you know, just random spans.
A
What I also had was the probability sampler. Again, there are some, excuse me, Datadog-specific things where we generate data: Datadog generates some metrics at the collector level, the trace agent level, and at the moment, with probability sampling, spans that are not sampled are also not recorded. That makes it difficult, and you don't have access, at the exporter level, to the probability rate at which the spans that you do sample were sampled.
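One way to make the rate visible at export time, floated here only as an illustration, is for the sampler to attach its probability to the sampling result so it ends up on exported spans. The attribute key and result shape below are assumptions for this sketch, not the spec'd sampler interface:

```typescript
interface SamplingResult {
  sampled: boolean;
  // Attributes to attach to the span if it is recorded.
  attributes: Record<string, number>;
}

class ProbabilitySampler {
  constructor(private probability: number) {}

  shouldSample(): SamplingResult {
    return {
      sampled: Math.random() < this.probability,
      // Exported spans then carry the rate they were sampled at, so an
      // exporter can scale counts back up without extra plumbing.
      attributes: { "sampling.probability": this.probability },
    };
  }
}
```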
A
I mean, this all sounds totally reasonable. I definitely don't want to add more complexity to the core sampling logic just to support, you know, my use case, but I do think understanding, at export time, the sampling decision that was made at the span level could have a variety of uses; it feels like valuable information. But yeah, if you want to link to the spec issue, I can comment, yeah.