From YouTube: 2020-09-21 meeting
B: We have some co-workers that live in Fort Collins, and they were telling me: the agricultural college near them takes a measurement of watts per square meter for light emittance, for light flux. It was seven watts per square meter recently, as opposed to a normal 750 to 900. So.
G: Oh yeah, I just had a quick question about this. I think most repos are set up to have this, and I think it's important to have this, actually, if you have code changes. On Ruby we've run into a few situations where people are contributing out of forks, or organizations are contributing out of forks.
G: Where, like, you cannot click this button; the button is not available to update a branch. And I think, I don't know, we had like five or six documentation PRs, and it kind of made things hard to merge, because repeatedly, after you merge each one, you would have to ping the next person to say: hey, can you please update your branch? And the main thing I was wondering is if anybody has found an elegant solution to this, or has experienced this, and how others are handling it.
H: No, I think it's different; it's basically in the settings. You have to mark that you require the review and that branches be up to date, yeah.
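The setting H describes maps to GitHub's branch-protection REST API ("require branches to be up to date" is `required_status_checks.strict`). A minimal sketch only: the repo path, review count, and status-check contexts are placeholders, and the live `gh api` call is left commented out because it needs admin credentials.

```shell
# Branch-protection payload for "Require branches to be up to date before
# merging" (strict status checks). Repo path and values are illustrative.
payload='{
  "required_status_checks": { "strict": true, "contexts": [] },
  "enforce_admins": true,
  "required_pull_request_reviews": { "required_approving_review_count": 1 },
  "restrictions": null
}'
echo "$payload"
# With admin rights and the gh CLI, this would apply it (hypothetical repo):
# gh api -X PUT "repos/open-telemetry/opentelemetry-ruby/branches/main/protection" --input - <<< "$payload"
```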
E: We've got that enabled, for instance, in the Go repo, but as Matt mentioned, we occasionally get contributions, I think mostly from the AWS interns, who are working out of another organization, and they don't have the ability to allow maintainers of the target repo to update the source branch when they create a PR.
I: I confirm that this doesn't work for organization repos; you can only allow edits from maintainers from your personal repos. Even if you were an admin of the fork, of the organization fork, it still won't work, and that's why we disabled it in the spec, for example. But since we don't handle code there, rather just markdown, it works reasonably well. I could imagine, however, that for code repos it makes sense to have the setting, but I don't know any elegant solution for code repos there, either.
G: Yeah, I can show it to you: we've had it twice. One of them has been with this AWS organization repo, and there was another organization that was contributing that had the same problem. So I guess the common thread is that it's forks that belong to an organization, rather than forks that belong to a user, that have this problem.
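When the "Update branch" button is unavailable, the contributor can do the equivalent manually: fetch the upstream default branch and merge (or rebase) it into the PR branch. A self-contained sketch, using two throwaway local repositories to stand in for the upstream repo and the fork; names and commit messages are illustrative.

```shell
# Simulate an upstream repo and a fork whose PR branch has gone stale,
# then refresh the branch by hand (the manual "Update branch" button).
set -e
work="$(mktemp -d)"
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com \
       GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com

git init -q "$work/upstream"
git -C "$work/upstream" commit -q --allow-empty -m "initial"

# The "fork" clone; here origin plays the role of the upstream remote.
git clone -q "$work/upstream" "$work/fork"
git -C "$work/fork" checkout -q -b my-feature
git -C "$work/fork" commit -q --allow-empty -m "feature work"

# Meanwhile upstream moves ahead (someone else's PR got merged):
git -C "$work/upstream" commit -q --allow-empty -m "someone else's PR"

# The contributor brings their branch up to date themselves:
git -C "$work/fork" pull -q --no-edit --no-rebase origin HEAD
git -C "$work/fork" log --oneline
```

In a real fork the remote would typically be added as `upstream` (`git remote add upstream …; git fetch upstream; git merge upstream/main`); rebasing instead of merging works the same way.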
J: Yeah, just about the setting of requiring PRs to be up to date: we did end up disabling that in the Java instrumentation repo, primarily because our builds take a long time, and so, like you said, you have to keep doing that dance of: okay, now update this PR, run this build; okay, now merge this one; now update this PR, wait for this build to run, and then merge this one. We do have a nightly build, which has a couple of times caught issues that would have been caught by forcing the PR to be up to date before merging, but that's worked out at least as a reasonable compromise.
G: Yeah, it sounds like we're somewhat struggling against some GitHub decisions and functionality that really can't be worked around if people are working out of organization forks. So the only other option is to deal with it, or disable this, but maybe, if you wanted to, keep some safeguards around that.
G: Maybe, but yeah, I think that's fine. I guess this answers my questions: everybody's having the same experience. If anything does change here, or if anybody has any great solutions, maybe just bring them up in the future, but I think I know what I need to know about this, at least.
K: I can offer a perspective from repos outside of OpenTelemetry. This has come up, and I've seen it on other projects, where it's a bit too strict, or it takes too long to update all the open PRs to get each branch up to date with what it's going to merge into. So the emphasis was put on the post-PR-merge check: as long as there is a CI system set up to verify that any code that gets merged also triggers another run of all the tests, and that's dependable and quick, and can catch anything like "oh yeah, that was merged from a stale branch, and now we detect it once it gets down to master," that was the justification for disabling this very strict requirement on PRs being up to date.
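The safety net described here, a full test run on every merge to the default branch plus J's nightly build, can be sketched as a CI config. A hypothetical GitHub Actions fragment, not the actual setup of any OpenTelemetry repo; the workflow name, branch, schedule, and test command are all placeholders.

```yaml
# Hypothetical post-merge + nightly workflow: catches breakage introduced
# by merging a stale branch, even with strict up-to-date checks disabled.
name: post-merge-tests
on:
  push:
    branches: [main]        # every merge to the default branch
  schedule:
    - cron: "0 5 * * *"     # plus a nightly run
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: ./run-all-tests.sh   # placeholder for the repo's real test entry point
```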
L: I know, I know, and this is for the specification; that's where.
G: So this is happening on opentelemetry-ruby, but the problematic fork that is being contributed from is an AWS fork that the interns are using.
K: All right, I put this one in before I give the updates on what issues we've been working on in the past week, what's been done and whatnot, because this is related to next steps: trying to visualize what the milestones are and how many weeks we have left until KubeCon North America. I know that was a date that was thrown around before; historically, yes, it seemed like a good date. I'll share, so we can see some stuff.
K: It seemed like a reasonable date within the year 2020 to shoot for in order to achieve GA. So I just used this Confluence tool and got a screenshot to lay out the blocks of weeks remaining up to KubeCon North America.
K: So that's what each one of these segments is. We're up at this point right here, and we're in the process of trying to nail down the spec trace issues for the spec freeze on the trace side, and then we're going to follow on with working on the metrics issues. But I'm not sure how many weeks of work we have left, whether it's longer, or whether we have other things left over, in order to find out how many weeks of elbow room we have to get to GA before KubeCon North America.
K: So I didn't fill out every single item that we had on our GA readiness spreadsheet; I know some of them can be worked in parallel. But I think this is just a question of what happens if we're slipping: how much slack do we have in terms of our GA timeline if we need to take an extra week to work on trace stuff or metrics stuff?
A: This just means we do a trace spec freeze at the end of this week, right? That's what it means.
K: That's what I put as an estimate, based on doing the triage, talking with people, and looking through how much stuff is left on the to-do list. We have 10 items in the P1 issues; that's the milestone, that's the requirement for freezing.
K: After the spec freeze? Yes. That means, actually, so there was an update to the labels; this was actually brought up by Riley, the wording for the labels, because of what we've been working on for the past few weeks. P2s and P3s are not blockers for freezing the trace spec; they're either "nice to have before GA" or "required for GA", that's what they mean. So there's a distinction between P1s, P2s, and P3s: P1s we really have to get in for the freeze; P2s and P3s can be sacrificed, but they're kind of hard calls.
A: Okay, if we, sorry, sorry Andrew, if we freeze the spec, supposedly no, let's say substantial, significant changes are allowed to the spec after that, right? That's, presumably, so the implementations are certain that the spec is what they need to implement, can go ahead and start implementing it, and we no longer make it a moving target for them, right?
A: Once we declare the spec freeze, that means that anything that is P2 or P3 is basically going to be an editorial change, right, a non-substantial change. It's not about being desirable, nice to have, or required; it's about whether it's substantial or not substantial, right. That's the criterion on which we allow open issues regarding the trace spec after we freeze the spec.
A: Yes, I'm not sure that was the criterion we used when we were marking things as P1, or less, when we were doing the triaging, yeah. We had to revisit that. Yes, we may need to revisit that. We may need to revisit all P2s and P3s to make sure there is nothing that actually affects the specification in a way that would result in a substantially different implementation, for example, right.
A: No, I'm not saying we never... I'm saying: if you say that the spec is frozen, I, as a maintainer of an SDK, have a target of three weeks after that, right, to make the release, or is it four weeks, four weeks after that. Do I need to keep coming back and rereading the spec for all of the changes that you're adding, to make sure that I'm catching up in doing the implementation?
A: Yeah, it's post-GA; that's what I'm saying, that's a natural consequence of that decision, in my opinion. We could not do that, I guess that's a possibility, but I think we are complicating the lives of maintainers by not actually saying: the spec is frozen, guys; if you read it once now, that's all you need to implement. We can't tell them that, certainly, if we don't declare that P2s and P3s are only editorial changes.
N: Yeah, I think we should aim to do that, exactly, yes. That's why we need to revisit the list of the P2 and P3 items, just to make sure that those items would not be making life too complicated for maintainers. Maybe we can plan something, yeah. Nikita?
N: If I understood correctly, the items that are P1 will stay P1, and then, from the items that are P2 and P3, some of them will be marked, yes, as desired for GA if possible, if they are minor ones. So none of the items is going to become a big one; they will be, you know, nice to have if it's a small change, and dropped if not possible. So we are not.
N: I'm not worried; that's my feeling, that's my feeling! Well, let's revisit if anything changes, but yeah, exactly, I was going to say that that has been my feeling all this time. That's why we were trying, you know, to add more weeks for some of the changes, because we knew that there could be small but breaking changes, hopefully not.
D: Tomorrow, yeah. So just a message for maintainers: if you have a P2 or P3 issue that you think needs to be a P1, please, I don't know if you have the ability to mark it as such, but please tag Andrew, myself, Carlos, and Bogdan on it, and say: I want to make this a P1.
D
So
we
know
we're
gonna
put
a
pretty
critical
eye
on
these
if
there
are
any,
because
we
want
to
shift
it
to
ga
sooner,
but
do
that
in
the
next
two
days,
so
that
then
we
can
review
everything
on
friday.
K: That's been thrown in; I've seen in the past weeks that some of them, very few, have been promoted to P1, which is why some numbers have changed, and I can go over these numbers if you'd like. Yep, this next step, okay. So, spec burndown: we have movement. To-dos dropped by one, in progress we have a little bit less, and done has gone up by seven, so several things have been moving on that front. For the P1s for the spec freeze, it's easier to see this list: 10 to-dos, or 10 of these; most of them are trace, as you can see with this label. As I did last week, I'm just grabbing all the P1s that are not spec metrics, because spec metrics will be the next milestone afterwards. We're concentrating on trace, which includes context and now baggage.
K: That's the other FYI I need to let people know about: we changed "correlation context" to "baggage". It's the same thing; all the labels still apply, it's just a different name.
N: I can take one of them. I can take the one covering both local and remote span parents.
K: Thank you. So that has names from all these people on all these issues; thank you, and I think that addresses the numbers. So this Friday we're shooting to complete these 10 items; that, as I see it, is our current milestone.
A: On the implementation timeline, I see we allow, we give, how many, four weeks after the spec freeze for the trace implementation, is it? Yes, three, four weeks, yes. So which SIGs feel confident that they will be able to implement what they need to implement during the four weeks after the freeze?
Q: So, just to answer the question directly: I think Go is definitely going to be in there. I don't know, I have to look at the requirements once the spec solidifies. I know that, especially last week, I was noticing a lot of changes come through, but I don't imagine it being unreasonable; I just don't know the specifics right now. I have to look a little bit more.
K: I'm sure the estimate needs to be updated again once we have stuff closer to the freezing point, but more specifically, I'm looking at these, as this is fantastic for helping to visualize it. Of course it doesn't have estimates, like how big this is, you know, one week or one day or what, but at least it itemizes what needs to be done in each SIG. For that, context propagation is the other one that would help if it were filled out, to tell whether what we need to do in four weeks is reasonable or not, or where we can ask for help.
K
The
end
of
here
I
believe
it's
not
just
the
trace
table,
it's
not
just
this
table
it,
but
it's
also
context
propagation.
L: What I think the languages should be aware of is that, in order for them to be declared GA, they have to have everything in the matrix, okay? Now, the way they split the work and such, maybe that's up to the languages. I mean, we let the language maintainers decide a lot of things; why do we try to control this?
C
Well,
because
the
exact,
what
have
we
to
implement
in
order
to
be
able
to
say
we
are
compliant
with
that
first
milestone
or
or
do
or
do
we
have
to
do
that
or
only
or
we
only
have
to
say
we
are
compliant
with
ga
and
we
are
not
interested
in
in
milestones.
L: That part, that part, I mean, that's probably tentative, in my opinion. In my opinion it's very hard; it depends language by language. But what I would say is more like: for OTel GA, in order for a language to be declared... I cannot imagine us being able to say OTel is entirely GA. It's going to be language by language, and everyone would have to do their own roadmap, based on whatever their current status is.
A: The only piece that we're missing here is what Nikita raised: for the intermediary milestone, which ones are necessary for the language maintainers to say that they are done with the trace portion of the implementation? For the final milestone, the GA milestone, everything that is listed in that matrix is supposed to be implemented, and obviously some exceptions are probably possible there, right.
R: Tristan, go ahead. It's just that I need CircleCI disabled for the two Erlang repos; I don't have the admin permissions to do so, and I hope someone on here has them.
L
Well,
I
have
it,
can
you
ping
me
on
guitar
gimmicks,
for
I
do
what
you
exactly
want
for.
R
Him
and
the
second
one
for
me
is
we
had
a
meeting
this
morning
to
just
discuss
the
context
and
baggage
and
processors
and
context
for
parent
of
a
span
and
in
that
document
there's
a
list
of
some
code
examples
that
would
be
great
to
have
for
all
the
different
languages.
R: But yeah, I'm bringing it up here so maintainers can hopefully get buy-in from people.
D: All right, next we have the late item: intern guidance in different SIGs.
S: Yeah, I don't know if this is the right place to bring it up, but I was wondering if there's any discussion around... so recently we had an influx of, like, AWS interns for the Python SIG, and the last couple of months we had a bunch of Google interns, and we kind of didn't really know how to provide them guidance or anything. So I was wondering:
S
Is
it
like
how
much
like
responsibility
should
we
take
in
order
to,
like
you
know,
guide
them
for
a
success,
especially
because
they're
not
like
a
bunch
of
them,
aren't
like
you
know,
working
in
the
same
company
as
like,
we
are,
you
know
we're
not
directly
like
their
managers
or
anything.
So
is
there
any
guidance
for
this,
or
should
we
care.
L
My
way
of
dealing
with
this
was,
I
know,
every
intern
has
assigned
a
mentor
from
the
company
that
they
hired
them
and
it's
in
the
interest
of
the
company
that
the
mentor
helps
them.
So
I
always
ask
the
mentor
to
be
involved
in
these
and
deal
with
initial,
let's
say
initial
reviews
and
stuff
like
that.
So
is
this
answering
your
questions
so
trying
more
to
involve
with
their
mentors
and
trying
to
to
use.
J
So
yeah
is
that
reasonable
to
ask
vendors?
Who
are
you,
know
sponsoring
and
have
interns
to
have
a
mentor
who
does
initial
review
of
all
of
the
their
pr's
yeah.
S: I think, ideally, we would like that, but realistically, with so many interns and only a couple of mentors, I think, to move things forward and keep our velocity going, it's not really possible for that to happen: for them to look at every code review, or to follow along exactly with what the interns are doing.
S
Like
sometimes
like
they're,
part
of
like
not
even
part
of
open,
telemetry,
sometimes
they're
part
of
a
different
sig.
You
know
like
this
is
the
case
with
the
google
interns
last
time
like
their
mentors,
like
they
were
kind
of
just
placed
on
the
random
people's
teams
and
some
of
them
weren't
even
working
on
open
telemetry.
So
what
kind
of
value
can
they
add.
P: Let me make my suggestion; this happened for the C++ SIG as well. I suggest that you establish a formal connection with the intern: there must be an intern, and a mentor from the intern's company, and they need to review the PR before you, as the maintainer, start to review, and you have to make a clear commitment.
P
For
example,
if
we
just
throw
20
interns
to
open
telemetry
python,
you
have
to
pick
from
the
sig
who's
going
to
be
the
mentor
for
this
intern
and
what's
the
work
stream
which
project
the
internet
is
going
to
work
out
in
this
way
you
can
allocate
proper
time
instead
of
saying
like
for
whatever
number
of
interns
I'll
just
review
all
the
code,
it's
impossible.
So
you
don't
want
to
over
commit
on
that
right,
yeah
yeah!
I
was
planning
to
do
that
with
yeah.
J
It
makes
sense
I,
like
the
I
mean
I
like
just
putting
up
that
straight
rule,
that
you
know
that
you
won't
review
pr's
from
the
interns
until
their
mentor
has
approved
it
in
github,
and
then
that
that
will
then
push
it
on
to
you
and
just
ignore
them.
Otherwise,
and-
and
that's
that's
on
the
company
then,
like
pogton,
said
of
of
providing
if
they
want
to
provide
a
good
experience
for
their
interns,
they're
gonna,
they
have
to
provide
some
mentorship
and
some
time.
L
Correct
so
trust
completely
agree
and
also,
even
though
the
mentor
may
not
be
familiar
with
open
telemetry
and
yes,
there
may
be
some
mental
open,
telemetry
issues
that
they
don't
know,
I'm
pretty
confident
that
the
mentor
has
at
least
the
knowledge
of
the
language
that
they
are
working
on.
So
at
least
they
will
pick
the
pro
the
language
problems,
the
the
coding
problems,
the
the
the
coding
issues
and
then
you
as
a
maintainer.
Q: I kind of wanted to jump in here on this one, because I think a lot of really great things are being said, but they're really focused on one of the primary duties of a maintainer, and that is ensuring the quality of the code that's coming through, and the velocity that's coming through. But you have to also keep in mind the other side of it, and that is building the community that we are all a part of, right, especially with interns.
Q
These
are
people
who
are
brand
new
to
the
coding
space,
a
lot
of
the
time,
as
well
as
just
brand
new
to
the
open
source
community
a
lot
of
the
times,
and
I
do
want
to
make
sure
that,
like
it's
pointed
out
that
you
should
try
to,
I
think,
as
best
as
you
possibly
can
provide
a
welcoming
community.
That
is
a
positive
experience
for
them
and
I
also
understand
like.
Q
I
definitely
think
that
riley
points
out
a
really
good
parenthetical
example
of
just
the
the
c
plus
plus
sign,
because
they
have
they
were
very
much
overwhelmed
and
I
don't
know
the
full
depth
of
it.
And
that
was
kind
of
a
really
bad
situation.
I
think
I
think
there
was
a
probably
I
don't
know
from
in
terms
of
perspective,
but
I
imagine
there's
probably
a
lot
of
frustration
on
both
sides.
There
you
get
a
large
influx
of
prs
and
you
get
a
large
influx
of
review
requests
and
there's
a
quagmire
that
gets
formed.
Q
So
I
think
that
there's
been
some
really
great
suggestions
here
around
ways
to
prioritize
things,
making
sure
that
we're
really
clear
on
our
expectations
from
the
mentors
making
sure
that
we're
really
clear
on
our
expectations
for
what
should
be
included
in
a
pr
and
in
any
sort
of
community
contribution.
Q
But
I
do
want
to
temper
that
with
the
understanding
that,
like
we
really
want
to
try
to
build
a
welcoming
community,
and
so
I
I
don't-
I
don't
necessarily
think
people
that
are
involved
in
the
conversation
so
far.
Aren't
understanding
of
this,
but
just
want
to
make
it
distinct
and
make
sure
that
we
have
a
agreed
upon
understanding
of
that.
J
Yeah,
I
think
it's
about-
I
mean
it's
about
time
and,
for
example,
you
know
if
I'm
going
to
prioritize
spending,
you
know
building
the
a
community
member
who
is
coming
from
out
in
the
wild
who
is
using
this.
Who
is,
I
mean
they're
more
likely
to
do
that
to
to
continue
on
in
the
project
versus
the
interns.
You
know
we
have
them
on
for
three
months:
they
go
back
to
school
and
then
they
go
back
to
aws
and
aws
assigns
them
to
a
completely
random
area.
J
So
we
don't
tend
to
get
a
lot
of
payoff.
We
did.
We
hope
there
was
one
intern
that
not
that
it
never
happens.
We
did
actually
have
one
intern
in
the
java
site,
who
has
at
least
expressed
interest
in
staying
involved
so
or
you're
you're.
To
your
point,.
A: I completely agree with you, Tyler, about being a welcoming community, but you probably have to consider who you are spending most of your time on, right? If some other person is more promising to become a more active contributor, you probably... We all have limited time, right; we have to take that into account.
Q
Yeah
I
get
that
and
I
I
thoroughly
get
that
limited
time
absolutely,
but
I
just
just
want
to
put
the
make
sure
the
scope
isn't
isn't
too
limited
here,
because
you
have
to
keep
in
mind,
like
maybe
you're,
not
getting
an
immediate
return
on
your
investment,
because
these
interns
are
going
to
come
back
and
work
on
open
telemetry.
You
have
to
kind
of
keep
in
mind
that
these
are
also
people
that
are
just
entering
the
fields
and
who
knows
a
year
or
two
from
now.
Q
If
you
know
their
position
changes,
but
they
want
to
actually
as
a
goal
or
career
girl
move
back
into
an
open
source
space
because
they
had
a
positive
interaction,
working
theory
community
and
they
they
were
able
to.
You
know,
find
a
place
that
was
welcoming
and
find
a
place
that
helped
them
and
they
want
to
provide
back
to
that
community.
So
I
yeah
I
totally
get
that
like
you
know,
you
have
to
weigh
that,
but
just
I
don't
want
to
make
sure
the
full
scope
is
understood.
I
guess
is
the
thing
yeah.
J: I can start the discussion until he's here, yeah.
I: Of course, they won't just do it for every repo without any maintainer being aware of it.
D: There's a second section, though: a seamless move for existing repos. For existing repos, renaming the default branch causes challenges; by the end of the year they'll make it seamless for existing repositories to rename their default branch. When they do this, they will retarget all open PRs and draft releases, and move your protection policies, and more, automatically.
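Until that seamless support lands, the rename is a manual sequence. A sketch on a throwaway local repository; the push and retarget steps are shown as comments, since they need a real remote with admin access, and branch names are the usual master/main pair.

```shell
# Manual default-branch rename, the operation GitHub's announcement
# is about to automate. Demonstrated on a throwaway local repo.
set -e
work="$(mktemp -d)"
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com \
       GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com

git init -q "$work/repo"
cd "$work/repo"
git checkout -q -b master            # pretend this is the historical default
git commit -q --allow-empty -m "initial"

git branch -m master main            # rename the branch locally

# On a real remote you would then push the new branch, move the default
# branch setting, protections, and open PRs, and finally delete the old one:
#   git push -u origin main
#   git push origin --delete master
git branch --show-current
```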
J: So the question is about instrumentation-specific attributes. For example, we capture some span attributes for Elasticsearch that are specific to Elasticsearch, in addition to, you know, the database semantic conventions. And so the question is, I mean, we're kind of wondering if any other languages have run into this, and should we, you know, do those deserve being added? Anurag had opened a spec issue related to this as well: where to put instrumentation-specific semantic conventions.
A: In this case, I think that's... this is different, right? This is us, OpenTelemetry, adding something about a specific technology, which is probably a vendor-specific thing, but it's us making that suggestion. So I don't think it's the right thing for us to invade that namespace, the company's namespace, and offer attributes from there as if they were offering them, like if it were com.elastic.something, whatever the attribute name is. So it seems like we need a different recommendation for this particular case; it looks different to me.
J
I
linked
the
the
spec
issue
that
anurag
had
opened
also
because
he
was
from
well
from
two
points
he
was
talking
about
it.
One
is
from
aws
sdk
libraries
sort
of
have
their
own
semantic
conventions,
which
that
might
be
a
little
bit
more
like
a
rpc
or
fast,
or
you
know,
like
a
category
of
semantic
conventions
versus
like
a
one-off,
like
elastic
search,
but
it's
sort
of
similar
just
should
we
put.
J
Does
anybody
have
initial
thoughts
on
if
that
should
be
even
specked
or
if
we
should
just
maintain?
You
know
some
one-off
span,
if
it's
okay,
to
maintain
some
one-off
span,
attributes
in
our
instrumentation.
A
I
think
there
is
definite
value
on
having
this
specified
somewhere,
because,
if
you're,
using
different
language
as
the
case
with
the
same
technology,
so
you're
instrumenting
for
that
technology,
but
in
different
code
bases,
it's
highly
desirable
that
they
are
consistently
instrumental
right.
You
use
the
same
attribute
names
so
that
later
you
can
do
correlation
or
whatever
so
there's
definite
value
in
having
this
somewhere
defined
somewhere.
I: Yeah, I already introduced something like that with the database semantic conventions, for example. So that one is db. plus the name of the DBMS, and then something; I don't remember entirely what I did there, but I have db.mongodb with a collection name, or something like that, and the same for JDBC and such. And I also think that it makes a lot of sense to specify that, because, from a vendor perspective, if you want to do any analysis depending on any semantics, then you need to have that aligned across the different instrumentations.
J: Cool, we'll open the issue. Thanks. Sure.