From YouTube: 2021-05-06 meeting
B
Good, I was once again surprised at the time. I always forget that this meeting is coming up, so I had to race back and get ready. Wait, you're East Coast, right? Yeah, in Pittsburgh. Yeah, that's... sorry, near the coast, but yeah.
C
That's right, it just depends on the week. Actually, that's a good question. I know I've got a conflict with one of these meetings; I hope it's not next week. Otherwise I gotta...
C
Well, cool. It looks like it's just after time, but we can wait just a little bit longer. Let me see if I can figure out how to look at the participants: five people on, yeah. So if you're new to the meeting or just need a reminder, please go ahead and open up the agenda doc and add your name to the attendees.
C
If you have anything you want to talk about, any PRs or issues you have an interest in, or want clarification or a question on some topics, please be sure to add them to the agenda, and we can get started in just a little bit here. I'm trying to think who else we'd wait for. Evan would be cool to have on, but we can also make this happen without him.
C
Actually, where would he be? Yeah, actually, maybe we can start. He's got a PR, or he's got an issue open, so I wanted to double-check, but I think we could probably get into this regardless. I will start sharing my screen.
C
I hope this is the right one. Okay, cool, cool. So yeah, thanks everyone for joining. I don't think the attendees list has changed since I gave my last spiel, so we'll just jump right into it, starting off by going over the project boards again. Great progress over the past week: we got five issues done. It slowed down a little bit, but I think that might be because we're cutting into the meat of the harder problems at this point, and we still have some progress going forward.
C
I'm super excited about it, and it's definitely inspiring. So, excited by that, we can jump into the project board itself that's tracking this. I think it's good practice to start off by going over what's in progress and getting some status updates. I don't know if... oh, there he is, okay, Gustavo's on as well. So I think we have the people to talk about this. Robert?
C
You were first up. Want to talk a little bit about the status of this ticket and where you're at working on this one?
A
So the status is that I started looking at it. Anyway, I wanted to first make all the changes in the main opentelemetry-go repository, and I just made some small changes to make sure I get feedback that I'm following the correct path. Tomorrow I want to make a little PR just clarifying that the functional options are acceptable, to reduce some noise when implementing, because right now it's not obvious. But still, I will follow up in parallel with making the changes.
C
Thanks for clarifying this. I think this is a pretty good pattern going forward. Anthony and I both responded to this. One of the questions here, for people that are just looking at this, is that this interface pattern can be extended really easily by having some sort of closure that you can return, and it can be a very generic closure.
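The closure-based pattern being discussed is the standard Go functional-options idiom. A minimal sketch follows; the type and option names here are hypothetical, chosen only to illustrate the shape, not the actual opentelemetry-go API.

```go
package main

import "fmt"

// config holds the settings an option closure can mutate.
type config struct {
	endpoint string
	insecure bool
}

// Option is the generic closure type: any option is simply a
// function that edits the config, so new options can be added
// without changing the constructor's signature.
type Option func(*config)

// WithEndpoint and WithInsecure return closures over their arguments.
func WithEndpoint(e string) Option { return func(c *config) { c.endpoint = e } }
func WithInsecure() Option         { return func(c *config) { c.insecure = true } }

// NewExporter applies each option in order over the defaults.
func NewExporter(opts ...Option) config {
	c := config{endpoint: "localhost:4317"}
	for _, o := range opts {
		o(&c)
	}
	return c
}

func main() {
	c := NewExporter(WithEndpoint("collector:4317"), WithInsecure())
	fmt.Println(c.endpoint, c.insecure)
}
```

Because every option is just a `func(*config)`, callers can compose them freely and the package can add new `With...` helpers later without any breaking change.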
C
Okay, if you have an opinion on that... I'm both summarizing and speaking for Anthony, so any errors are my errors, not his. But yeah, we were both okay with that, and it seems like it makes a lot of sense. So if you disagree, please put your thoughts in this little comment here.
C
Cool, moving on. Gustavo, this is actually something I wanted to talk about anyway, so yeah, why don't we dig into this? Maybe just give a little status update, so I don't say the wrong thing.
E
Sure. So what I'm trying to do here is split the OTLP exporter so we can make the traces part stable and have different stability for each exporter. This is just what I have planned so far. I have also tried to integrate an idea that Alex brought up in some internal dailies at Lightstep: that we should probably expose OTLP clients, because there are a lot of projects that could use that. For example, the collector has an implementation of one, and we have another implementation.
E
So we would end up with some clients, and I tried to create an exporter that just accepts a client, which could be gRPC or HTTP or even stdout, or anything like that, if you want to implement this client interface. And that's basically that.
E
Yes, yes, probably. I just tried to create this folder to be easier to understand, but I would need to change that. I was actually trying to think if it's possible, in the same package as the client, to create the exporter, so you don't need an otlpmetrics new-exporter call plus a gRPC metrics client; you only type the one package name, dot... yeah.
F
You kind of get into a weird situation there, though, where you've got otlp twice in the folder path, right? Once at exporters/otlp and then grpc and otlpmetrics. Because if we just have metrics or traces there, that's likely going to end up needing to be import-renamed where it's used, to distinguish it from the API or SDK metrics or traces packages as well, which are already sources of somewhat of confusion and consternation occasionally.
C
You're talking about the... yeah, this is kind of what I'm thinking too. I'm trying to think through this. We currently have, I guess... oh, we still have a trace folder and a metrics folder at this top exporters level, and then this would add, given this, OTLP.
C
Yeah, but jokes aside, I see the point. Yeah, I kind of wonder about the same thing.
C
Yeah, possibly. The joke goes much deeper, because we have a trace and a metrics at the top level, because that's the API; we have them in the SDK; we used to have them in the export part of the SDK as well, but we removed that.
C
That was to get around the SDK and the API not having their own, because they by definition need to be separate. But that's kind of... yeah, I don't know. This is a little bit different, in that it is kind of its own thing. One of the other things, Gustavo, that comes to mind is this HTTP and gRPC split here: each has its own new client. Is there any way...
C
...you could reverse that, where the client would be like a metrics client, and it would accept the transport, similar to how we have a driver right now?
C
I've been thinking a lot about this. I feel bad I haven't exactly written anything except a Slack message, but it's just a tough one; there's a lot going on here and it's tough to disentangle. Another thing that also comes to mind is what you were saying earlier. I didn't know that Alex had mentioned this, but the universality of wanting to transport or transform something into OTLP is also kind of an interesting problem space.
C
I was looking at the thing we talked about last time, where the stdout exporter might want to do that with OTLP, but there are just import problems at that point, because you're bringing in a whole bunch of things. So yeah, I don't know. I'm thinking about the package structure here a little bit hard, trying to make sure that we make the right choice.
C
This seems fine, but maybe to ask a question: how does this get extended with logs? Do we just add folders, you know, here and here, essentially?
C
Yeah, Josh made a similar comment down here: this is a lot of modules. But I don't know; I think that's the nature of the beast. As far as I can tell, I've thought through it a few times, but I don't think we can get away with fewer modules outside of doing this import thing. But then there's this whole dependency-injection thing, which is... yeah, I'm not even sure that really reduces the number of modules.
C
Yeah, yeah, just the number of packages; it just structures them differently, I guess, would be the thing.
F
I probably should have written about this somewhere, but I wonder if it would make sense, instead of doing this protocol-major, to do it signal-major, so that we've got metrics/grpc, metrics/http, traces/grpc, traces/http. Then when we add another signal, we add another folder hierarchy at that layer.
C
Okay. We've spent a fair amount of time on this. I want to make sure that Gustavo has a path forward, and since he was relying on me to give him that path forward, I'm gonna crowdsource it. So, the people that are involved in this conversation: could you make sure that you put some ideas into this ticket? I'm still thinking about it. I'll...
C
...try to make sure I comment as well, but I want to try to unblock Gustavo and make some progress on this, because this is definitely something that's blocking us from getting to an RC. So we need to resolve this, and, you know, I think we want to be careful not to let perfection get in the way of us completing the RC.
C
But I think this is something we want to spend a little bit of time putting some thought into. We've done some other package reorganizations which the perpetrator didn't think through properly, and I can say that because I was the perpetrator. But yeah, I think spending a little bit of time is a good idea.
F
Yeah, and to be clear, I wouldn't object to going forward with this. There may be some bikeshedding to do, as you say, to ensure that we're confident we've got something right, but I think this is something that will work, and we're not just tweaking around the edges.
C
Yeah. Okay, so if you want to add some comments to this, please do so by the end of today; I think that sounds like a reasonable timeline. Pacific time, so maybe in like three hours. And then, Gustavo, if you don't hear anything back, let's just assume we're gonna go ahead with this plan and this looks good.
C
Right, cool, cool. Where are we at? The next one is something that was opened. It's a bug where somebody would pass links to the End method, and the SDK implementation of the span drops these, because it's not something you can actually set based on the specification. Going back to perpetrators refactoring things: this is kind of a byproduct of that, and it's something I did when we reworked the span lifecycle.
C
I think we actually pass span options now to this End method, and that accepts a link. I need to double-check, because I need to go read the specification again. The idea is that we may want to try to engineer away this source of user confusion. But if the API allows these options to be passed generically, then in the future the SDK implementation may actually change to allow setting links, or setting... what else can you set, status or name? But that wouldn't make any sense.
C
I don't know; there are some bad ideas there, but I just want to double-check that that's the way it is, and if it isn't, then maybe we do want to split up the start and end options again. I'm just double-checking the work; I don't know if this is actually going to require any action, but I'll be sure to update it, and probably draw the consternation of people if I decide to revert and go back to separate start and end options. Cool. Then, Anthony, you have this here.
F
Yep. So I think we've made some pretty decent progress on that, actually. I put it on the agenda because I wanted to discuss this. If you can, let me share my screen. I'll go ahead and show the godoc that I've got being generated, and I want to get feedback on how people feel about it. I think this is the one I want to share.
F
You
should
be
seeing
some
godoc
here
right,
yep
yeah,
so
so
this
then,
is
generated
from
the
semantic
convention
yaml
using
their
templating
system
and
that's
in
the
the
build
tools.
So
we've
got
each
group
of
conventions
with
a
brief
description.
This
comes
out
of
the
ammo
each
one
of
these
constants
has
brief
description
type,
whether
it's
required
some
examples,
or
some
of
these
have
notes.
Yeah
like
we
can
see
this
one
as
a
note,
and
so
we've
been
able
to
generate
reasonable
looking
names.
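The generated file being described would take a shape roughly like the following. This is an illustrative sketch only: the local `Key` type stands in for the real `attribute.Key`, and the specific constants and doc comments are examples, not the actual generated output.

```go
package main

import "fmt"

// Key mimics attribute.Key: a semantic-convention attribute name.
type Key string

// The generator emits one constant per attribute in the YAML,
// carrying the brief description, type, requirement level, and
// examples into the doc comment.
const (
	// OSTypeKey is the operating system type.
	// Type: Enum
	// Required: No
	// Examples: 'windows', 'linux'
	OSTypeKey = Key("os.type")

	// ProcessPIDKey is the process identifier (PID).
	// Type: int
	ProcessPIDKey = Key("process.pid")
)

func main() {
	fmt.Println(OSTypeKey, ProcessPIDKey)
}
```

The point of generating from the YAML is that the Go constants can never drift from the spec: regenerating after a spec change updates names, comments, and values together.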
F
I think these are all broken up by the groups of conventions that there are. The one thing that's been... so, after generating all of these, I tried running the tests in the API and SDK packages. It turned out there were about twenty-some instances of things that were different, and all but one was where we had created an HTTP semantic convention that doesn't actually exist, a key that doesn't exist in the spec.
F
All of the other ones were different only by capitalization. Let me see if I can find an example. Like this: I think "OS" would normally be capitalized, whereas this doesn't really do that properly. There's probably a PID... yeah, like here; I think "PID" would normally be... it's an initialism.
F
So I think that's the last thing to really fix here. Let me scroll down a bit farther and find the enumeration values. Once we get through all of the constants, we've also got variables that are generated for enumeration values, with their brief descriptions as well. These all appear to be either strings or ints, and I think there's actually only one set that is integers. I did manage to get these to build correctly with the appropriate type. So I just wanted to get feedback:
F
does this look reasonable? This is one of the ones that ended up changing: I think we called this NetTransportTCP and NetTransportUDP, and there's an IP in there. We need to figure out the initialism handling, but I think this is fairly close to something that will actually not create a huge disruption, since we seem to have been reasonably good about generating our constant names from the values, or from the keys.
B
Anthony, do you have a plan for how to override the default treatment of some of these initialisms, like the GCP one that's in front of us, or...?
F
I really don't. It looks like golint has a list of common initialisms that could potentially be used in a post-processing pass, and then we can maybe add some more. I want to spend a little bit of time this afternoon looking at that, but I'm also not sure it's really a huge problem. I mean, some of the things I've read about Go code reviews, you know, the Go team says machine-generated code is held to a lower standard than human-written code, right?
F
So
maybe
it
isn't
as
big
of
a
problem.
If
we
can't
find
a
clean
solution
quickly,
I
think
I
would
prefer
to
get
something
stable
and
ships
that
we
know
will
work
going
forward
then
and
spend
a
lot
of
time
trying
to
polish
around
the
edges.
B
Right, though if we do publish this, then the improvement here would be a breaking change, unless we retained the old names as well. If we came up with a clever way to name these more in line with how they ideally would be, to publish those, yeah, we'd have to break people.
F
But
yeah
so
whatever
we
end
up
with
at
1.0
we're
going
to
stick
with
if
we
find
a
quick
and
easy
way
to
deal
with
initialisms
then
we'll
deal
with
that.
If
we
don't,
we
won't,
but
whatever
initialisms
we
deal
with
at
that
point
also
will
be
fixed
unless
we're
adding
new
ones
that
don't
already
exist.
F
There is the semantic convention... let me see if I can find the... yeah, here it'll go. So if you look at the semantic convention syntax, it does allow... I think there was an option for a deprecated field to be put in here, though I don't actually see that here.
F
I
remember
seeing
it
in
in
the
template
processor,
so
there's
a
way
to
indicate
that
a
semantic
invention
has
been
deprecated,
and
we
could
note
that
here
and
I
think
that
they
come
with
a
pointer
to
the
the
one
that
should
be
used
instead.
But
that
again
is
information.
That's
driven
by
the
the
emil,
that's
in
the
in
the
spec
refill.
F
The
stability
is
yeah,
it's
a
question
right,
so
I
think
if
we
look
at
the
actual
spec
for
semantic
conventions,
it
calls
it
out
as
experimental.
I
don't
know
honestly
what
the
the
actual
status
of
this
is.
If
have
you
seen
these
change
significantly
at
all
lately
or
are
we
just
trigger
shy
on
calling
these
stable
because
they
might
change.
C
There have been, I think, like HTTP and... maybe it's in the resource section. I think there were some that were renamed, and in that process they removed some and replaced them. I think FaaS is also one that was pretty active a month or two ago.
C
Yeah, I think there's actually even talk right now of maybe changing some of the telemetry SDK ones, and the instrumentation version is something that people are wondering about as well. So I think there are rumors of potentially changing some of these still.
C
Yeah, I agree. Either that, or we have to provide some comprehensive backwards-compatibility-and-deprecation strategy, right? Because otherwise we're on the hook for maintaining this stability. I agree.
C
I feel like some of these might have reached stable, but I could be wrong. I thought that there was a mixed status on some of these, but I don't know.
F
I think these are all experimental, and I'm not sure that the status is even contained in the YAML. Like, let's go look at what I used. Is it...
C
Yeah, I think this is something we need to bring up with the specification, because I agree: this is something that we are relying on in building instrumentation... well, instrumentation and the SDK, right? This is something that's required.
C
Instrumentation
name
in
your
sdk,
like
that's
for
your
plummetry
version
right
like.
F
Yeah
yeah
telemetry
version
we've
got
some
exporters
that
add
some
resource
attributes
as
well.
Like,
I
think
the
zip
code
exporter
actually
creates
a
span
for
the
stuff
that
it
exports
or
some
some
resource
information
that
it
exports.
There
are
a
few
places
that
we
use
the
the
semantic
conventions
within
the
sdk
itself.
F
Event,
name
right,
that's
one!
That's
followed,
yep
yeah
and
the
exception
of
that
name
and
keys
are.
C
Are
there
yep,
yep,
yeah,
yeah,
okay,
yeah,
I
mean
those
definitely
need
to
be
stable.
If
we're
gonna
ship
this,
at
least
the
name.
I
guess
like
the
value
change
that
doesn't
sound
right,
but
the
the
attribute
names.
F
And
really
even
the
type
is
only
necessary
for
enumerated
types
right
right.
F
That's
the
only
place
where
we
actually
create
values
that
have
a
specific
type.
All
of
the
rest
of
these
are
simply
keys
that
you
can
put
any
type
into,
and
we
suggest
you
use
this
type.
C
Yeah, one thing... I don't want to derail the conversation, but it was really cool: I've seen a lot of instrumentation built around these keys, so it might be useful in the future to build templatized functions as well. So, like the FaaS coldstart key here: since it has to be a boolean, there'd be a function, I don't know, FaaSColdstart, and it would accept a boolean value and then produce whatever you're expecting.
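The typed helper being proposed might look something like this. The `KeyValue` type here is a local stand-in for the real `attribute.KeyValue`, and `FaaSColdstart` is the hypothetical generated function named in the discussion, not an existing API.

```go
package main

import "fmt"

// KeyValue mimics attribute.KeyValue: a key paired with a value.
type KeyValue struct {
	Key   string
	Value interface{}
}

// FaaSColdstart sketches the proposed templatized helper: the
// semantic convention says this key must be a boolean, so the
// function signature locks the type at compile time.
func FaaSColdstart(v bool) KeyValue {
	return KeyValue{Key: "faas.coldstart", Value: v}
}

func main() {
	kv := FaaSColdstart(true)
	fmt.Println(kv.Key, kv.Value)
}
```

Generating one such function per typed key would give instrumentation authors compile-time type checking on attribute values, instead of the current convention of untyped keys plus documentation.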
C
So we can lock down that type system. But that could be an extension; that is definitely not a 1.0 thing, just ideas.
F
I think getting these constants, and the enumerated values that are specified in the semantic conventions, is what we have to nail down for 1.0; everything else we can build on top of.
C
I agree. So, okay, do you want to take that as an action item? This looks good, by the way. You know, it'd be nice if we could put in a little bit of some overrides, but I think we could.
C
I
I'd
like
to
get
the
initialism
and
acronyms
fixed
if
you're
gonna
take
another
pass.
I
totally
support
that
and
then
I
think
yeah.
I
think
we
can
just
kind
of
like
keep
improving
this
as
long
as
we
remain
for
compatible
in
our
naming
scheme
for
the
exported
types
here.
So
I
do
want
to
get
those
right,
but
I
understand
it's
also
a
really
hard
process
that
that
might
not
be
possible.
C
Awesome, cool, great job. Thanks for tackling that. You're definitely more patient than me with dealing with YAML.

F
The YAML was fine; it was the Python I had to touch.
F
I made some changes to the Python YAML parser that I ended up not needing, because I was misunderstanding how it was providing the data to the template, and there was more there that I had missed, which is good. So now I can back out all those changes that I made. Nice, okay.
C
Also
you're
now
an
expert,
so
I
don't
know
if
anybody's
ever
told
you
don't
get
good
at
something
you
don't
want
to
do,
but
well
cool.
I
see
evan's
on
the
call
as
well.
I
wanted
to
ask
you
about
this
issue
here,
if
we're
close
to
closing
it
or
if
you're
still
waiting
on
some
reviews
or
some
things.
C
I think so. It's in the specification, and I think the majority of people who are going to be reviewing all of the future instrumentation are on the call, but it never hurts, I guess, to document it. I don't know.
C
Yeah, and all of the times that that happened, I think it was just people copy-pasting from some other instrumentation. So if we've cleaned all that up, then I don't think we're gonna have that happening again. But yeah, I'm okay with closing this, and we can always add the documentation in the future, too. That sounds good. Okay, any objection to me closing this?
C
That's a byproduct of the PM bug that bit me. But we're really close; I think we're actually going to make some timelines, so I'm really excited by seeing this, yeah. So, coming back to the agenda: I think we've already talked about this, and maybe it's pertinent; we talked a little bit about that.
F
Yeah, so the question was asked internally at AWS, because we want to have ADOT for Go, the AWS distribution for the OpenTelemetry Go SDKs, able to be released at the same time as the 1.0 API and SDK we have here. But part of that will be ensuring that the X-Ray ID generator and propagator, and the ECS, EKS, and EC2 resource detectors that we have in the contrib repo, are also 1.0 packages.
F
So I know we've discussed earlier not wanting to label the contrib packages 1.0 right at the same time as we do the API and SDK. But I want to talk about maybe those types: the resource detectors, ID generators (of which I think there's actually only one), and propagators, which have interfaces that are defined by the SDK and thus really can't change.
F
So I want to talk about whether we think we can make those 1.0 at the same time, or very shortly after, and then leave instrumentation for later, because we know we're going to want to review the interfaces that those provide. They're providing their own interfaces, may still need some love, and haven't been touched in a while.
C
Yeah, that's a great question. I think that, from what you just described, it sounds reasonable to me.
C
One thing that comes up kind of systematically across open source is this idea of ownership, though, and I think I'd like to have a solution for that in our project for these sorts of things. Because right now, if we release a stable version of the X-Ray propagator, it's literally a best guess for me. Anthony, you have a lot better understanding of that. But that kind of makes it seem like maybe we need to start partitioning ownership of the contrib...
C
Repo
is
kind
of
my
thought
on
this
matter.
Thinking
a
little
bit
about
this,
so
having
something
like
this
owner's
dock
here
somewhere
anyways,
I
don't
know
where
we
put
it.
C
So there's, like, a generalized ownership that remains, but I'd like to have things like this, like an AWS-specific folder here, have additional code owners, just so that there's more responsibility, more people we can reach out to, and people that have more context around these sorts of things providing that extra level of ownership. And they can also, I think, with that, retain higher...
C
...you know, more permissions around these sorts of things, so maybe your approval also means something more than if it were in a different directory, is my idea. I think we need to have some policy in place for how this actually would work, and then maybe run it by the GC. But I would like to have something like that in place before we start releasing a lot of things in this repository.
F
I agree, and it's an issue that we've had with the collector contrib as well. There's a lot of work that goes in there, far more than in Go contrib. There are more reviewers and approvers there, but there's still more work than there are people, and it's occasionally been a challenge to get contributions reviewed, approved in a timely manner, and then merged in. They haven't been able to find a solution there.
F
But
I
think
that
what
you're
proposing
is
at
least
worth
exploring
and
trying
out
where
there
are
teams
that
have
a
vested
interest
in
the
contributions
and
will
will
be
around
for
the
long
haul.
The
aws
ones
are
actually
kind
of-
maybe
not
a
great
example
for
that
in
that
regard,
right
now,
one
because
I'm
probably
the
going
to
be
the
most
stable
participants
on
the
aws
site
on
go
and
I'm
already
in
maintainers.
F
I
think
the
the
team-
that's
that's
doing.
A
lot
of
the
open,
telemetry
stuff
is
kind
of
shuffling
around
right
now
and
so
there's
gonna
be
some
variation
there.
So
I
I
don't
know
if
provided
putting
individual
persons,
you
know
named
people
on
it
or
having
some
sort
of
team
or
group
concept
that
we
can
move
people
in
and
out
of,
is
the
the
right
approach,
but
it's
worth
noodling
on
some
ideas
there.
I
think.
C
Yeah, okay, I think it's worth noodling on some ideas. You know, I know Evan's been working with some of this instrumentation more than I have, and I'd kind of want his opinion on some of that. I think there are just people who are more expert on this instrumentation because they work with it every day, and there's a lot of it.
G
Yeah, I don't want to admit my dirty little secret, which is that I haven't actually been doing much Go work in quite a while, having been pushed to work in a Java world, which is my great misfortune, sorry. So in terms of actually writing things which use these, I haven't really done that myself, but a lot of them seem to follow the same sort of pattern, right, with instrumentations that are mostly middleware handling these sorts of incoming requests.
C
Yeah, I think this is a great one here as well: GCP. I'm double-checking the people on the call, but I don't think anybody on the call is actually from GCP, or Google, and we used to have a very strong commitment from Google to the project.
F
Yeah, I think there we know who we can reach out to. David Ashpole and Punya are still involved in OTel, even if they're not as closely involved in Go as they were before, so we can still find them. But yeah, it would be nice if we had some way for organizations that were contributing things they had an interest in to say: okay, we're going to make sure that we support this as well.
C
Yeah
yeah
think
that
that's
kind
of
a
the
I
would
like
that
kind
of
commitment
before
we
went
one-o
without
anything.
I
guess
it's
kind
of
my
feeling
on
the
matter
and
it
makes.
C
...sense, given, you know, your position at AWS, that we could do that for the AWS stuff, because I don't imagine that'd be too much of a conflict. But, you know, going 1.0 on this Datadog exporter...
C
...I really wouldn't feel good about it if somebody was asking me to do that, because they didn't even contribute it initially; I think it was Josh who contributed this initially. So with this kind of stuff, I really don't feel comfortable going 1.0 at any point without some sort of commitment from the actual owner of the backend.
C
So that's a long-winded way of saying I think we need to put a little bit of thought into the ownership, if it's going to be a dedicated thing. Another thing to maybe point out is that a lot of this stuff in the GitHub folder here of the instrumentation is just open source projects, right? So maybe there isn't a real reason we need additional oversight on those, because we can all, you know, reach out to those communities, or look into those communities' code itself to learn more about them, and maybe we have the ability to have somebody from one of those projects take some ownership of some of this work. But I think this isn't as...
F
Yeah, I agree. I think it's the ones that are more specific to organizations, like Datadog or Google or AWS. Even like the Cortex exporter that's there: the Cortex metrics exporter was something that AWS had constructed and contributed, and, you know, I think we will still have an interest in ensuring that it's there and usable, because of our interest in AMP.
C
So I think maybe the way I would frame it is: if we want to take something in this repo to 1.0, it needs a sponsor. I guess I don't know what that sponsorship looks like; I don't even know what the responsibilities would be. But I think we should come up with that before we do 1.0, I guess, is what my ask is.
C
Okay, cool. I'm just gonna put a note...
C
...here. And I think that could apply to the open source stuff as well. I feel like I could sponsor a lot of the open source packages in there, just because I feel comfortable that I could figure it out eventually, right? So I think that's a pretty fair way. We can just define whatever that sponsor is and their responsibilities; I think we should go there.
F
Yeah, kind of like Evan said: you know, the Gin and Gorilla HTTP middleware packages that are open source frameworks are fairly straightforward and simple, easy to understand. We can take on that responsibility; at least, I would feel comfortable saying that as maintainers we're going to take that on.
C
I agree, I agree. Cool. Sorry, looks like we're taking up a lot of time today. Project ideas for summer interns: Anthony, want to jump in on this one?
F
Yep. So we don't have to talk about it much here, but I just want to let everybody know that AWS is going to be having yet more interns starting in the next month or so. So if we have ideas for projects that would be good to have an intern work on, now's the time to do some brainstorming, and over the next couple weeks we can talk about them.
C
Yeah,
I
think
that
you
said
over
in
the
next
month
right.
F
I
don't
see
how
elite
on
the
call,
so
I
don't
know
the
exact
timing,
but
I
think
it's
sometime
in
may
or
june.
We
will
have
more
starting.
We've
had
two
actually
just
start
this
week,
karen
and
calvin,
you
can
say
hi
and
introduce
yourselves
if
you
like
they're
here,
you
may
remember:
kelvin
made
some
contributions
last
year
and
he's
returned
to
work
with
us
over
the
summer.
C
Cool
hello,
both
of
you,
I
don't
know
yeah
if
you
wanted
to
say
hi,
we'd
love,
to
get
an
introduction.
H
Yeah, so I'm Karen, and last term Kelvin and I were part of the same intern cohort. I was working on the C++ repo, developing the logging pipeline for it. I'm not sure what I'm going to be working on this term yet, but I'm looking forward to it as well.
H
It was confusing because there was no spec at the time, so my partner and I just kind of winged the first half of it, and then the second half we were like, okay, now we've got this.
C
Yeah, there's both the signal of logging and just, like, the debug and user-information logging thing that we need to implement. So those are some of the ideas I'm thinking of, just to be aware. But yeah, we've also found that the better we can structure the work we want done into small pieces...
C
Yeah, and I think on that, one of the things I was thinking, Anthony, was that we should probably, after the 1.0 at least, just go through all the remaining issues and cull the ones that don't make any sense. I'm sure we're going to find a lot that are just, "this could be really quick," and we can throw those on the list as well. Sure, yeah.
G
All right, a quick question on interns: do we have anybody from Google on the call? I guess not.
F
They won't have anywhere near as many as they did last year, and hopefully they're going to have some more coordination for them. Last year, I think what happened was, because of COVID, they ended up having a whole bunch of interns without people to really guide them or solid projects to work on, and it sounds like that's how a lot of them ended up on OpenTelemetry, so they're going to try to avoid having that happen again.
C
Yeah, that's a good question. I haven't heard anything coming into Go, but I think Anthony just gave a really good recap of what I have heard. Jumping into another issue here that's been open for a little while, tied to a PR: it's updating our versions of Go, dropping support for 1.14 and adding 1.16 to the CI system.
C
There's a lot of support here, except for one pesky person asking a question about why we'd want to. Well, the idea here is that part of providing the most utility to the users we're trying to offer this project to is supporting the largest number of versions of any instrumentation, as well as any version of the Go runtime, or anything that would increase the number of users. So what do people think about us moving forward without keeping that backwards-compatible support?
C
It's not something I'm completely opposed to, I just want to make that clear, but I'm also hesitant on this one, and I was wondering what everyone else's thoughts on this are, especially this part. So yeah, I'd like to hear them.
C
Yeah, I can take a stab at it. From my understanding, I think it's just hygiene: we wanted to test the latest. I don't think anyone really wanted to test two; initially it was, we want to test the latest as well, and I don't know if there's much value beyond that. I had kind of the same understanding, because Go provides a non-breaking API; that's kind of their whole deal. But then Anthony, yes.
F
The handling of implicit conversions of integers to runes and strings is a thing that can really bite you, and we should account for that.
C
Yeah, I take that back, you're right that that bit us, so I should have remembered that one. That being said, Anthony then pointed out: okay, so the initial PR was just adding 1.16, and the question is, when does this become untenable?
C
Is it at 1.16, or at 1.19 or something like that, when all of a sudden we're testing 100 different things with the compatibility checks we need to run here? So yeah, I think that was the reasoning, Steve, as to why we want to get rid of 1.14, if I'm not mistaken.
F
We don't have a stated policy there; I think we were implicitly going with Go's policy that the latest two versions are supported. And I think it's important for us to ensure that at least the latest two versions are supported, because we want to ensure that users can always upgrade, whether that's upgrading the API and SDK versions or upgrading their Go version. We want to make sure users don't get into a situation where they try to upgrade and can't because we're holding them back.
F
I would also like for us to establish a minimum supported version, like Tyler said, so that if users really want to stay back and are able to, we're not forcing them forward either, right? We're not forcing them to stay back and we're not forcing them to go ahead. And I think it's probably fine to have three sets of checks that we end up doing in the CI: a minimum supported version, and then the two latest releases, if there's no overlap there, which currently there isn't.
F
Until eventually we get to a point where Go has added some feature we feel is important to be able to use, such that we want to move up the minimum supported version, at which point we kind of reset the counter on that. But that would be something we would want to think about carefully, for all the reasons I described about not wanting to force people to upgrade their version.
F
Yeah, I know in the past we've had some things where we've had to do build-tag gating and have parallel implementations, but I think all of those were things that depended on stuff introduced in 1.11 or 1.14, and we're past that. I'm not aware of anything that's been introduced now that we would have to do that sort of build-tag gating for, but that's also an option we have available to us to try to maintain compatibility with an older minimum supported version.
F
So, Liz isn't here, but I think her point is that Go supports the last two versions, so we should support the last two versions; is there really any reason to support something older?
C
I don't know if that's a really good argument, because Go can do that since it's their language, but this is providing an API for instrumentation, right? So if some sort of business has taken a dependency on you, or some sort of project has taken a dependency on you, and they don't want to track
C
the latest version of Go, like, that's fine, because the stability of, you know, 1.14 is guaranteed not to change now on the Go side. But if we say, that's cool, but we're not going to support 1.14 anymore, then all of a sudden that instrumentation package can't use OpenTelemetry to instrument its code anymore; it has to upgrade Go.
G
Yeah, I guess my question would be: across organizations, are people regularly upgrading their Go to keep in line with, you know, the Go release cadence? Well, in my experience, no. A lot of code just gets left alone; it runs and gets built off an old compiler until somebody goes in and manually changes it. So I do see the reason to keep it running on older versions, unless there's some real reason we have to change it, like the ones Anthony mentioned here.
B
So these aren't backward-incompatible, but if we write code that tries to take advantage of them, and somebody's using an old version of Go but tries to pull in a new version of our library, they basically have to be able to build our code, as opposed to us building and distributing object code or something that would run fine in their runtime.
G
You can re-implement it using the older standard library, which we did have a case of before, I think, where there was a new way to do something in one line, but with the older code you could do it with a longer sequence of things.
C
Yeah, so the thing that comes to mind is that the os package actually exported this error, and you could tell if something was closed or whether it was an actual error, or you could do this really obscure workaround. And we had to do the obscure workaround, because that error was only exported in 1.16, right? But the thing on the other side that I want to point out is Ben Johnson's clock package, right? We depend on that, I think.
C
It's just in the metrics pipeline, for timing, to make sure that our timing is correctly in order. We can't upgrade that, because it has a minimum dependency of 1.15; they started to take a hard dependency on some things that were introduced in 1.15, right? So we're currently stuck, not able to upgrade that dependency, because we have to support 1.14.
C
So this is kind of one of those things where the more we do that, the more we paint ourselves into a corner. But it's also that we're supporting things where we don't put that same burden on our users.
F
We can update that clock package that now depends on 1.15, at least, and then at the point that we have a 1.0 release we can re-evaluate: all right, do we want to fix our minimum supported version at the lowest version that was supported at 1.0 and move on from there? I don't think we need to make this decision right now, at least, but it's good to have the discussion.
C
Sorry, this is going by fast, but that's kind of my position on it. I think that we have a large user base. I was actually really impressed: I was looking at some of the dependencies on our API currently, and it's more than I thought. So, since I think we have a large user base right now, I wanted to think about it hard, but I'm okay with what you just said, if we're just like, well, it's technically still not 1.0, and we're bumping the minimum version.
C
I agree. Okay, cool, I will add that to the list. Sorry, I realize that's probably the last thing we're going to be able to talk about, so that's good. This needs eyes: the OTLP upgrade. It's a dependency-update thing, but I fixed it, so I can't really approve it; somebody else, please go take a look at it. And then Robert has a PR here that also needs some eyes. If you have some cycles, myself included, please go take a look at it, and we'll try to get this merged.
C
I don't want to take up too much more of people's time, but Robert, maybe if you have a minute or two, could you just say something about this?
A
Yes, so maybe I can just say that initially it was a one-liner change, but Aaron spotted that it felt wrong, and he was very right. Basically, as a result, I started digging deeper and deeper. The quick fix initially took me like one hour, but when I started going deeper it took me probably 20 hours or something like that, because I spotted a lot of possible deadlocks.
C
Okay, so it looks complex.
A
That was the argument. And the thing is that, from one perspective, it's low priority, because it's, you know, an edge case. But on the other side, it changes so much that the more people contribute here, and the more PRs there are adding functionality, the harder it will be to bring it back.
C
I agree. Okay, cool, thanks, everyone, for joining. I guess we are still under 50 seconds over, so I'm gonna count that as a win. Again, thanks, everyone, for joining; thanks to all the interns that are joining as well, excited to work with you all. Thanks again to everyone contributing. Please, you know, hit us up in Slack if you have any more questions, and I will see you all next week.