From YouTube: 2021-08-11 meeting
Description
No description was provided for this meeting.
A
And we have the first thing: the demo from Eddie about the Go multimod releaser. Are you on the call, Eddie?

C
Yeah, I am. Thank you so much; good morning, everyone. I was just looking at the notes that are getting put under mine. So I'm going to be doing a really quick demo of the Go multimod releaser tool that we've been working on for the OTel Go repo.

I've gotten a lot of help from Anthony and Tyler on that, and I think it will have a lot to do with the next proposal after mine, which is the OTel core versus OTel collector-contrib releases. So hopefully this can add a little bit of context, and a few more tools that might be used in that transition or proposal.

Just as a high-level overview: there's a pretty new versioning document in the collector repo, which Anthony created, that describes basically how all the different modules and components of the collector and contrib are going to be versioned. As you know, it's going to use semver 2.0, and currently I think the main module within the collector is just the top-level go.opentelemetry.io/collector module. Basically we wanted to create a tool that could help in this situation, where we have a single repository with multiple modules, and it was kind of hard for the OTel Go repo to version a set of stable modules separately from the ones that are unstable.

As you can see here, the experimental modules should basically have a version starting with v0, and anything that's stable should have v1 or above. But the problem we were finding is that if we version the entire repository together, then whenever we want to produce a stable release we would have to figure out some place to put the experimental modules so they are not included in that release.

Back to the collector: if we scroll down to where it talks about the contrib repo, the recommendation, or I guess the spec, is that we're going to be using a single Go module for each receiver, processor, exporter, and basically any component in collector-contrib. The important thing there is what happens whenever these experimental modules are ready to be promoted to stable. So, for example, if I look at some of these different components in collector-contrib, there's going to be, let's say, a subset of them that are ready to become stable.

So this tool that we created should be able to interface with the current release process, and also help release certain sets of modules that are stable differently from those that are not stable.
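For readers following along, here is a rough sketch of the idea being described: grouping a repository's modules into named sets that are tagged at independent versions, with a check that a set labeled stable actually carries a v1+ version. The set names, module paths and versions below are illustrative assumptions, not the real configuration of the tool.

```go
package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

// moduleSet groups modules that are always tagged together at one version.
// Names, paths and versions here are illustrative only.
type moduleSet struct {
	Name    string
	Version string   // e.g. "v1.0.0" for stable, "v0.31.0" for experimental
	Modules []string // module paths that share this version
}

func main() {
	sets := []moduleSet{
		{
			Name:    "stable",
			Version: "v1.0.0",
			Modules: []string{"go.opentelemetry.io/collector/model"},
		},
		{
			Name:    "experimental",
			Version: "v0.31.0",
			Modules: []string{"go.opentelemetry.io/collector/exporter/exampleexporter"},
		},
	}

	for _, s := range sets {
		if !semver.IsValid(s.Version) {
			fmt.Printf("set %q has an invalid version %q\n", s.Name, s.Version)
			continue
		}
		// A "stable" set must be v1 or above; experimental sets stay at v0.
		if s.Name == "stable" && semver.Major(s.Version) == "v0" {
			fmt.Printf("set %q is marked stable but still at %s\n", s.Name, s.Version)
		}
		for _, m := range s.Modules {
			// The release step would tag each module as <module-path>/<version>.
			fmt.Printf("would tag %s/%s\n", m, s.Version)
		}
	}
}
```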
C
Does that make sense so far? Yeah, cool, okay. So just a couple more points. Like it says, all experimental modules start with zero; of course, all stable contrib modules of the same version will use the same entire version number, and I guess the contrib modules will be kept up to date with the project releases, so new releases will be made for all of them, and so on. So hopefully this tool is going to help with that.

As you can see with the core repo, recently we had the 0.31 release, and on the contrib side we also had the corresponding 0.31 release.

I think one of the things we were discussing is: if we have core basically being just an API, with only the really core components, it might get to a stable version and stay there for a long time. So it might not need lots of version number changes, and on the collector-contrib side we might need to update the dependencies on core whenever those releases happen.

I know someone pointed out that this could take quite a while sometimes. I'm not 100% sure what the current process is, but I can also give a quick demo on how to use the tool with the repository now, if that would help.

E
Should we be thinking about this mostly in terms of the world before the collector becomes stable, or after, like after core becomes stable?

C
That's a good question. It should be able to handle both cases, but it was primarily written for the case where part of the repo is stable and part of it is not. So it's probably: once a stable release is produced for the collector, the collector can trim some components out to collector-contrib, and it should help separate some of those components.

E
Will core just have... so right now, I know core has a single version, but it contains a mix of stable and unstable things. Once we remove...

F
I think at that point, though, contrib will have a mix of stable and unstable things, and absolutely, in my view, this is most useful for contrib, for the collector. For the Go SDK, core is going to be in a kind of transition state for a while as we work on new signals, and those are all separate modules; but for the collector, I think the value here will be for the contrib, or components, repository, where some...

E
Got it, and I have a follow-up question to that. Once core is slimmed down and has a 1.0 release, why would we want to bump the core dependency in every contrib module after that?

What do you mean? What I mean is that in contrib, at that point, it seems to me legitimate that different contrib things should have dependencies on different versions of core, as long as they are all v1-plus, because once I assemble them, MVS will give me a good enough version of core. So it seems to me like owners of individual contrib modules should be in charge of upgrading their core dependency, rather than having some machinery that goes through and upgrades them all.

A
No, no, I think he wants the component to not be touched by anyone.

E
Is that what you are asking? Yes, I'm saying that after 1.0, if my component takes a dependency on, let's say, 1.2, then it will automatically work with 1-dot-anything greater than two, right? And me bumping that doesn't help anyone; in fact, me saying 1.2 advertises a useful fact: that 1.2 is good enough.
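As a hedged illustration of the minimal version selection (MVS) point being made here (module paths and versions are hypothetical): if one contrib component requires core v1.2.0 and another requires v1.4.0, a collector build that pulls in both ends up with v1.4.0, because Go keeps the highest of the declared minimums.

```go
package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

func main() {
	// Hypothetical minimum core versions declared by two contrib components.
	required := []string{"v1.2.0", "v1.4.0"}

	// MVS keeps the highest declared minimum: that is the version the
	// assembled collector build would use for the shared core dependency.
	selected := required[0]
	for _, v := range required[1:] {
		if semver.Compare(v, selected) > 0 {
			selected = v
		}
	}
	fmt.Println("core version selected for the build:", selected) // v1.4.0
}
```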
E
So automatically bumping me to newer and newer versions is, to me, not adding value, and it is taking away a tiny amount of value. The reason why I'm pushing on this, not just to make a technicality, is that over time we want people to be able to keep collector components outside of the contrib repo.

A
I don't know, but what I'm trying to say is, for example: think about the problem we had with Thrift. We don't depend on Thrift, but transitively we depended on Thrift, and it was a mess to upgrade.

E
So, Bogdan, I think transitive dependencies are a real problem, but I don't think that this solves that problem, because if I have two contrib components that have transitive dependencies that conflict, and I put them together into a build of the collector, that's a problem regardless of core, right? So trying to address it by bumping the core version is at best a partial solution.

F
I think that's also a very minor capability of this tool. The primary feature that we're looking for here is the ability to maintain sets of modules at a consistent version, and to ensure that the validity requirements hold, like stable modules not depending on unstable modules; it's for enforcing our versioning policies. Updating the version of the core module that contrib modules depend on is going to happen via Dependabot even if we do nothing with this tool, and that'll happen in third-party repositories too.

B
Yeah, I think Punya has a point that people should not be required to bump the versions, and I think we can separate the discussion here: whether we bump core, or we make these module sets. I see value in the module sets. I reviewed that and I was confused at first, but then I saw it's a nice addition. I'm just not sure that it is a good practice for us to be doing something that our other consumers might not be doing.

You know, so I'd rather prefer Dependabot to be managing the core version numbers, if that's what we expect our other consumers to do, than having that managed by this special tool.

F
Yeah, and I'm 100% on board with that, and I totally get Punya's point as well about there being value in communicating: here's the minimum version that I will function with; I depend on a feature that was added in this version; I can't use anything less, but anything more is great. There is some value in that.

I think for the Go SDK, updating the version of the core repo that the contrib repo uses is hugely helpful, because we didn't have the infrastructure that exists in the collector-contrib repo for updating the version everywhere, and we also now have multiple distinct versions of some modules in core, where the traces are stable at a 1.0 RC version and metrics are at, you know, zero point something. So those are not something that's easily done mechanically, and we didn't want to depend on Dependabot to come in and give us 300 PRs that we have to merge independently, which is also a real pain: it takes a long time to run the test suite between each Dependabot PR that you merge, whereas this can do it in one shot.

G
From a distribution standpoint, Punya: when you are doing distributions, and I look at the collector as a distribution because it has a confederation of components, it's one thing to keep core consistent, and we can totally use the same model that the Go SDK is using for the version numbers. But in a distribution, even if you had, say, 1.2 for a particular component, 1.3 for another one, 1.1 for another, then you have to actually provide compatibility tests and matrices to document that, so that there is a clear understanding by the user who's using that distribution of what the dependencies are and what the version numbers of the components associated with it are.

E
Yeah, sorry, I totally buy this for the Go SDK, and I don't mean to take up the SIG's time on what seems to be a digression. So thanks again for the pointers, and if this is the only point of confusion, we should proceed.

G
Yeah, I mean, again, I think it's a good discussion, but we can totally work through the details of the version numbers and how the collector handles that for components and contrib.

C
Yeah, and that's it, yeah. That was a good point too, but I think that might be one of the minor parts of the tool, like Anthony said. There could be an option with the multimod tool, a subcommand saying "sync my module's dependencies on a different repo", but that's just one part of it.

The main point is, as he said, it can do verification that no stable modules depend on any unstable components, or something like that, and then also automate the creation of commits, pull requests and tagging, so that it actually pushes out a release of several modules at the same time. I think that's the main takeaway.
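To make the verification idea concrete, here is a minimal, hypothetical sketch (not the actual tool) of how a check like "no stable module may depend on an unstable module from the same repository" could be implemented in Go: walk the repo, parse each go.mod, and flag any v1+ module whose in-repo requirements are still at v0. The repo prefix, versions and module names are assumptions for illustration.

```go
package main

import (
	"fmt"
	"io/fs"
	"os"
	"path/filepath"
	"strings"

	"golang.org/x/mod/modfile"
	"golang.org/x/mod/semver"
)

// repoPrefix limits the check to modules that live in this repository.
// The prefix and the version map below are illustrative assumptions.
const repoPrefix = "go.opentelemetry.io/collector"

// moduleVersions would normally come from the release configuration;
// it is hard-coded here to keep the sketch self-contained.
var moduleVersions = map[string]string{
	repoPrefix + "/model":                    "v1.0.0",
	repoPrefix + "/exporter/exampleexporter": "v0.31.0",
}

func main() {
	err := filepath.WalkDir(".", func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() || d.Name() != "go.mod" {
			return err
		}
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		f, err := modfile.Parse(path, data, nil)
		if err != nil {
			return err
		}
		self := f.Module.Mod.Path
		// Only stable (v1 or above) modules are constrained.
		if moduleVersions[self] == "" || semver.Major(moduleVersions[self]) == "v0" {
			return nil
		}
		for _, r := range f.Require {
			if !strings.HasPrefix(r.Mod.Path, repoPrefix) {
				continue // only check in-repo dependencies
			}
			if semver.Major(r.Mod.Version) == "v0" {
				fmt.Printf("stable module %s depends on unstable %s %s\n",
					self, r.Mod.Path, r.Mod.Version)
			}
		}
		return nil
	})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```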
A
Yeah, I think this is very useful, and I think the Go SDK is probably a very immediate candidate for this. I don't know if they are already using it, but they have this problem, and we should start adopting this in contrib, at least based on our plan of moving almost everything there. I think that's going to be the most painful thing for us to support, so adopting this...

Perfect. Now, next topic; I think it's Juraci.

Things like this, that's the only thing where... okay, I have a different opinion, but besides that everything looks good. Okay. Also, we need to take a more incremental approach: we do have to start, as we already planned, with all these components to be moved, and then start removing the cmd part and paths like that. So first we need to finish the move of components.

B
Right. So I just shared a link here to a GitHub repo under my namespace, under my account, just to mention that we can have some of this work happening in parallel. Of course, we would need to do small incremental steps, and I think Punya has a couple of PRs ready to move things around from core to contrib, but we don't actually require that to start building the releases, right?

So this link here shows a repository with two of those releases. One of them is the collector as we know it, the core today, and the second one is the load balancer, which is a very minimal distribution with only the OTLP receiver, the OTLP exporter and the load-balancing exporter.

And if you go to the releases tab for this repository, you see that this is already generating a lot of artifacts, like RPMs and binaries and APKs and debs and container images, for both releases, so for the load balancer and for the core.

The load balancer is like seven megabytes, so it's really small, right? So it's... Juraci, are you sharing? Not sharing my screen; perhaps someone else can, because I'm on the browser right now. Let me see, oh...

I can explain, but it's basically a collection of shell scripts and goreleaser. goreleaser is a very nice tool to release Go binaries, and the shell script is mostly to tie the goreleaser configuration together. So we list the releases in some place, and CI will check that we have the list up to date and generate a goreleaser file, and goreleaser will take care of generating all the binaries for all the architectures, all the container images, and so on.

So if you go to the root, yeah, there is a .goreleaser.yml. This is the goreleaser configuration, and it is auto-generated: that script that you just opened generates this one here, and this one here is configured to run for GOARCH 386, amd64, arm, arm64, s390x, for two arm architectures, and so on and so forth. Nice. I'm ignoring Windows for a couple of reasons; I think we have to figure out Windows later, it doesn't work actually, I've got a couple of issues.

I think... oh yeah, so Windows ARM, I don't even know if that's a thing, so I just excluded it. And I also excluded the MSI packaging that we have for the core, because I couldn't... I think it requires a Windows executor, yeah.
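As a rough illustration of the "script generates the goreleaser configuration" idea (this is not the actual script in the linked repo; the distribution names, paths and architecture lists are assumptions), a small Go program could render a goreleaser-style build section per distribution from a template, and a CI step could diff the output against the committed file to check the list is up to date:

```go
package main

import (
	"os"
	"text/template"
)

// dist describes one distribution to release; the names here are illustrative.
type dist struct {
	Name   string
	Main   string // path to the distribution's main package
	Arches []string
}

// A trimmed, goreleaser-like build section; a real configuration has many
// more fields (archives, nfpms, dockers, and so on).
const tmpl = `builds:
{{- range . }}
  - id: {{ .Name }}
    main: {{ .Main }}
    goos: [linux]
    goarch: [{{ range $i, $a := .Arches }}{{ if $i }}, {{ end }}{{ $a }}{{ end }}]
{{- end }}
`

func main() {
	dists := []dist{
		{Name: "otelcol", Main: "./distributions/otelcol", Arches: []string{"386", "amd64", "arm64"}},
		{Name: "otelcol-loadbalancer", Main: "./distributions/loadbalancer", Arches: []string{"amd64", "arm64"}},
	}
	t := template.Must(template.New("goreleaser").Parse(tmpl))
	// Write the generated configuration to stdout.
	if err := t.Execute(os.Stdout, dists); err != nil {
		panic(err)
	}
}
```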
A
This sounds great, okay. So what I think we need, because you know this best: I think we need a roadmap, somebody proposing a proper roadmap of the things that need to happen, and in what order, for us to get to this.

As I said, we need a roadmap, because I don't want to jump into moving everything. But yeah, probably in a month or something we will be there, hopefully, right?

B
What would be nice to have, if we are on board at least with the general direction, is to get your plus-ones on the issue that was linked a couple of weeks ago, I think; but I can add the link here again. That way I get a repository to move the code that you've just seen, that I've just shared, to a place within the OpenTelemetry organization.

Yeah, I mean, the builder is actually being used by other folks already, and it is something that I know we're going to use on the Jaeger repository, so it can be like a monorepo thing. It's just that if we are splitting for different audiences, then I think it just belongs in its own repository; but I mean, I don't know.

I think that's what I would do first, and if we think it's too much work releasing stuff, then we can join them later on.

Yeah, I mean, I feel your pain; it's just that you don't have to be the one looking at that repo. So I personally prefer smaller repos that I can selectively watch or unwatch.

You know, as an end user, I watch releases for some projects, like, I don't know, cert-manager and so on and so forth. I don't know, but that's me, that's how it works for me.

A
I had one bit of additional feedback, if that's okay. I had reviewed it, and it's good and we should do this; this is great. I was slightly concerned... or, I don't know, this might be too early in the process and something that just gets discussed as the roadmap gets fleshed out, but there were also some naming convention changes, like moving contrib to something called, I think, components. And besides the bikeshedding over what it should be named, which I don't care about, it just introduces a long tail of ecosystem or product updates.

I
So, like, I don't know, maybe there's some doc page on the Alibaba Cloud website or something that needs to get updated to point to it; it's a pretty significant naming convention change. So we should just be conscious of that stuff and have eyes wide open to it going in, and try to maybe determine all of those early, so that we're not expecting people to, I don't know, have to keep up to date, and there are dead links sitting in their docs or something like that.

So, besides the naming conventions themselves: yeah, I have some weird obsession with the term contrib over components, but I don't actually care, just because Rack has a thing called rack-contrib. But we should think about whether we're introducing a ton of, and try to limit the number of, third-party doc changes, because at this point it's significant and it's worth thinking about from a product standpoint anyway.

A
That's why I said we need a roadmap, because that's where we're going to argue with Juraci. I think overall I like the whole idea, but these name pickings and stuff, I think we should have separate fights for each of them.

Yep, make a decision; and I don't think Juraci would be very unhappy if one of the naming changes is not accepted. That's why I said the overall proposal is the way to go; some of this, like where this repo lives or stuff like that, is something that we can discuss later.

G
Yeah, I think we're all on the same page, but again, I totally agree that we should minimize impact wherever we can, yeah.
A
So, okay, if that's the case: everyone, do we agree to have a separate repo for the releases? I think it's maybe better to have a separate repo for the releases for one reason, and the reason is this: this repo will have its own frequency of releases, like every few weeks or something, compared with the tools or other things that we have. Also, we'll have a huge amount of artifacts that we will produce, that we want people to download and stuff; so maybe start as a separate repo.

It's fine for the moment, but I will also ask Juraci on Slack about the collector builder, and we have a bunch of other build tools.

G
We only have really one build tool there, right? So again, I would suggest: let's map that out in the roadmap document, so that the tooling is also... Oh, you think, because of the three reasons we have?

Yeah, fair point, but let's capture that in the doc and then go through the pros and cons and figure out what the final consolidation is. Yeah.

A
Let's move to the next topic; I think everyone agrees. Personally, I just want to see the roadmap of what the first things are that need to happen, because I'm pretty sure that we cannot just replace everything overnight. So we need a roadmap for a couple of weeks.

Perfect. The next one is on integration testing.
J
Yeah, hi, I'm Aaron from Google. We have our Google Cloud exporter in the collector-contrib repo right now, and we'd like to start doing some integration testing against the real GCP APIs, so Cloud Monitoring and Cloud Trace.

I noticed there's not much precedent for something like this in the contrib repo right now. I found tests for Docker-based backends, or instrumentations that use Docker images; there were some mocked APIs in testbed; and I think AWS has something called ADOT that does integration testing against their real API. So I just wanted to have a discussion about what's the best way to do it and what other people are doing. Right now we're sort of leaning towards moving most of the exporter into our own repo,

so we can do this testing upstream, and then pulling just the factory methods into the collector-contrib repo. But I'm open to other possibilities, like maybe testing it in the actual collector-contrib repo as well.

A
Testing in collector-contrib will be very hard, because of the pricing and who's going to pay the bill for talking to Google. So I would prefer not to; indeed, Amazon, and even Splunk, we do some integration testing and stuff, but we have our own distribution that we use for that in our case. So I think your option of having those is good. Also, after Juraci's proposal, you can host the entire exporter in your own repo and just have a rule in the build to include it into all the distributions that we have, or in some of the distributions that we produce.

J
Yeah, yeah, that makes sense; I wasn't sure. So the plan was, I think, Bogdan: we have a Google Cloud Platform operations Go repo, and we're planning to move it in there and then just pull it into the collector-contrib repo behind a thin wrapper.

E
In this, we would explicitly commit to doing this ourselves, right? Like, you are doing it right now, and you're doing us a favor by doing it, and I think if we were to move out, we would take responsibility for the upgrades in a timely fashion; like, we would say that within 24 hours of a core release going out, or something like that, we commit to upgrading our thing.

G
But I think, again, I would like to add that there is a short-term strategy, which is in line with what Aaron is proposing, where downstream testing and integration is done for vendor and cloud-provider stacks. On the other hand, as a project, strategically we're very much looking at the Kubernetes model of certification and testing in the long run.

I think that the integration tests, even the ones that we have built with the AWS distro, are something that we'd like to add back into the project. Obviously, as Bogdan said, there are infrastructure costs, there are other dependencies, and there's the question of what a whole certification program looks like; but that's the long-term view. So just keep in mind that this might be something that changes in the long run; in the short run...

E
Sure. So this seems to imply that if someone wants to run potentially expensive integration tests, the choices are: run the tests on something before it gets integrated, run them in the contrib repo, or run them downstream, right? Those are the three choices, leaving aside whether it's AWS or Google or anyone. Yeah, right. Choice number three exists only if you have a distribution, and right now we're saying the canonical way to do it is in a distribution. That means you have to have your own distribution to run integration tests.

A
But in general, Punya, to answer your question: you may test your exporter, you may do integration tests, but if you really want to do integration tests, you probably need the full distribution, from receiver to exporter, not just the exporter being integrated. Does what I'm saying make sense? You probably want to inject data into the collector via the receiver that you tell users to use, and then out through the exporter.

E
I guess in principle you are correct. We are maybe optimistically saying that there is a separation of concerns: the thing being tested is the OTLP translation to the GCM API, and we can test that; and then there's a separate thing, which the community is testing and supporting, which is translating from X receiver to pdata.
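A common pattern for the kind of potentially expensive integration test being discussed, whether it lives upstream or in a vendor repo, is to gate the test on an environment variable so it only runs in a CI job that has credentials and a billing account. The test name, environment variable and behavior sketched below are hypothetical, not an existing test in contrib:

```go
package exporter_test

import (
	"os"
	"testing"
)

// TestExportToCloudMonitoring is a sketch of an opt-in integration test.
// It is skipped unless RUN_GCP_INTEGRATION is set, so ordinary CI runs
// (and `go test ./...` on a laptop) never talk to the real API.
func TestExportToCloudMonitoring(t *testing.T) {
	if os.Getenv("RUN_GCP_INTEGRATION") == "" {
		t.Skip("set RUN_GCP_INTEGRATION=1 to run against the real Cloud Monitoring API")
	}

	// In a real test this would build the exporter from its factory,
	// push a small batch of metrics, and read them back via the
	// Cloud Monitoring API to verify the OTLP-to-GCM translation.
	t.Log("would export a test batch to Cloud Monitoring here")
}
```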
B
In terms of costs, the CNCF just provides machines, and I think every project has a number of minutes that it can request from them to run on AWS and GCP. So if it is about costs, I'm quite sure that the CNCF can step in here.

A
Now, one thing is about cost. Then, secondly, because GCP is a cloud and we may use their cloud and stuff, I don't think we should prioritize their vendor side for this, because the question will become: why not do that for Datadog, for Dynatrace, for Splunk, for everyone? I'm treating Google, even though it's one company, as two: one when it comes to testing on GCP, because a lot of people are using GCP, versus testing their Stackdriver products, which is a bit of a different offering.

I
We also have to be fair: some vendors get a free ride to some extent, because AWS's integration testing is inclusive of certain contrib vendors. So there's that as well, which is worth noting; theory and practice, and this stuff's complicated. But yeah, I'm not going to pretend I haven't, in the past, pointed to an AWS-run integration test when talking to a customer and been like, see, we handle this many traces per second, or whatever. You should definitely...

B
But I guess we have two main requests here, right? The first one is to have something that in the Java world is called a TCK, a test compatibility kit, where vendors can run against their own endpoints to verify that they are able to ingest data that the OTel clients can generate and pass through the OTel collector, and then publish those results as proof that they are compliant. And the second thing is indeed an integration test as part of the contrib or, you know, downstream releases.

J
All right. So I'm not super familiar with the proposal that Juraci was just talking about. So you're saying that until then, it might be best to just pull in the module from collector-contrib, do our testing downstream, and then, after that's done, we can do our testing upstream, directly in our own component, right? Correct.

Okay, okay, cool. Well, there's a GitHub discussion that I put in the SIG notes, so if anybody has any other thoughts, please leave them there. Thanks. Thank you.

G
Yeah, and Aaron, if there's anything we can help with, let us know. I mean, again, we've done a lot of work with the distro on integration testing, so we can certainly help.

A
Last topic before Morgan and Alolita: we do have the filter PR. I think, unfortunately, Tigran is not here; he commented on that. Did you...?

Yeah, I don't know all the details; let me look at the PR and what it does. I just saw Tigran commented there, and now I see that he actually asked me to do something. Yeah, I will come back on that. I don't know if we talked about the filtering part.

We definitely talked about the mutation, or mutator, processor being per signal, but I don't know if the filtering capability was discussed to be per signal, or one for all the signals. I have no idea; I need to look at the documents that AWS has about the proposal.

Your proposal about the consolidation of the mutating processors, for the actions of mutating spans and metrics and stuff: does that include something about the filtering processor, or is that standalone functionality, separate?

M
Yeah, so you're saying filtering the data, right? Yeah, so if I want to remove a metric from the pipeline, from the streams. Yeah, it does. So the only change right now, the proposed change to the filter processor, is just to extend it to handle logs. That's it. We will keep the filter processor, but exactly.

A
On that issue... sorry, on the PR: it's 3798 on the collector.

I sent you a link here, so if you can comment there, that would be great. Sure, sure. Thank you, thanks so much. The next topic is Alolita and Morgan; you have the microphone.
D
Yes, I signed up Alolita for this one, unexpectedly. I was wondering if you could, as you so skillfully walked me through it last week, walk through the few remaining work items that we have.

G
Yeah, absolutely. So, as all of you know, we've been tracking the collector and working on different parts with Bogdan and Tigran on getting tracing to stable, and the idea was to actually get a stable release for the collector core based on the items that are in the two phases of the tracing GA backlog. So let me pull this up; most of you have probably seen it. I can just give you a link also.

All right, so this is the first link; I'll just share it in chat. I can also share my screen, and I think we are almost done with this first phase. Let me share.

Can you see my screen? Yeah? Okay. So, as you can see, we are pretty much down to this last PR, where we are completing a couple of items; actually, one item that Bogdan had brought up, which is the marshalling interface, and I think there is a draft PR for this. But Bogdan, what exactly is there? For the final part, is it Emanuel who's working on that?

A
Emanuel dropped the ball. I mean, I talked... this is a separate proposal, but Emanuel had that PR 21 days ago, and yes... no, no, I...

No, but yeah, this is the only item, so after this I think we are good on declaring pdata stable for tracing, and not for metrics; I think we still need PRs for metrics, but for tracing, yeah.

G
For tracing specifically, Bogdan. So this is the first milestone, and then we have the second milestone, which...

Right, those are just moves. I mean, testing and moves, but they need to be done.

Yep, yep. But again, let's figure out... so for the testing there's already a PR for this; again, I think the community did not respond any further. There was a PR to rename the logging exporter to debug, because it's not a logging exporter, it's actually a debugger, and I think...

A
Yeah, maybe we should have a way to not make this that breaking; let's... okay, I think this is a very small one. I think the biggest part of this milestone, or the biggest problem in this milestone, is these moves, of which, okay, we have a lot.

B
But I think we did talk about that before, in that Punya actually had a script or something like that that would generate the PRs on the contrib side, and after that we would remove them from this side here. I think there was an experiment with the health check processor or extension, or something like that; we need to remember about that.

I can certainly help; it's just that I think Punya has more information than I have right now, so it would be nice for him to get information from me, and I can ask him on Slack, because there was a question about the history of the components, right? He had a script where we would be able to open a PR with the history, and I think that's the merge problem that you mentioned. Okay.

G
So that should take care of all these items which are pending a move, and then there are two bugs that you had added, Bogdan, which Anthony's already looking at.

A
Yeah, that's great. Also, I don't know exactly where semantic conventions should be, but I will open a different issue, just for discussion, about where that should be, where the code should live.

G
Yeah, but I mean, if you're targeting a release for next week, then the decision should be sooner.

F
In that regard, we've also just recently moved the semantic convention module, adding a version to its import path name. So we have an understanding of how much effort is there, which was like one PR in contrib of a few thousand lines of swaps; so I think we have a good understanding of the effort there and that it's doable.
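For context, the change being referred to is the style where the semantic conventions package carries the spec version in its import path, so moving to a newer set of conventions is an import path swap rather than a behavioral change. A small sketch of what using such a versioned path looks like; the exact path and attribute used here are just an example of the pattern, not a statement about what the collector repos import:

```go
package main

import (
	"fmt"

	// The spec version is part of the import path; bumping conventions
	// means swapping this path (e.g. to a later vX.Y.Z package).
	semconv "go.opentelemetry.io/otel/semconv/v1.4.0"
)

func main() {
	// Build an attribute using a semantic-convention key from the versioned package.
	kv := semconv.HTTPMethodKey.String("GET")
	fmt.Println(kv.Key, "=", kv.Value.AsString())
}
```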
A
That's even simpler, Anthony, because now it's just a sed; before it was more work, because some of the constants moved and some of the names changed. Now it's going to be just an import path. But, by the way, you should look at the two bugs that I filed there: while I was doing the contrib upgrade, I figured out that there are a couple of problems.

F
Yeah. So this one, I think, is probably just: we can add a replacement, or we can update the name of the convention, which would have to go through the spec, if one of them is inconsistent for one reason or another. And the other one, about the exception event name, I think that should probably go to the spec and get added.

A
It is, it is in the semantic conventions: it says that the event name for an exception should be "exception".

G
So, Anthony, you'll file a PR on the spec also then, right?

F
Yeah, yeah, I can take care of that. That'll probably get in 1.6, though, so we can add a constant to 1.5 just for convenience and then get it properly incorporated in 1.6 if we need to.

G
I mean, even if we do an RC, say, on Monday, Bogdan, then we can actually do a release two weeks from then, if all goes well.

A
Yeah, if all goes well. But again, what's the expectation? Because we may achieve what you want more simply than pushing a 1.0. I'm trying to make everyone happy: my initial thought was to not put a 1.0, but to put the word "stable" on a bunch of packages, of modules.

G
And that was the expectation, Bogdan, that we also have. We were not looking at 1.0, okay, but, you know, a stable marker on the core components, as well as the dependencies in contrib.

A
Okay, so let's then decide on which packages we put the stable marker, because even after this move we still have a huge amount of other packages in core, and I don't know if we want all of them to be marked as stable.

What I'm trying to say is: okay, let's move everything and see what is left, and then I will make the final call. There are too many packages; it's very hard to track all of these things. Maybe you have a better understanding of all of these, but for me there are way too many things. Until I see what is left, I cannot make a final call. Does that make sense? Yep.

G
Yep, definitely. I mean, again, I think we did map out each and every module in core, and based on that we had these moves planned, right? So everything else: obviously, what stays in core is this list, and if everything passes here, obviously, you know...

A
But you want to move to contrib, Juraci, for your components, correct? For the other...

B
I'd prefer so, yeah. But I mean, if you're tagging individual components there, and if it's clear to people that some of those components are moving even though the repo is 1.0, then it's fine. To avoid confusion, I would move things and tag only the final state of the repo, yeah; but if you're in a hurry to get a 1.0 out of the door, then sure.

A
I think what I'm trying to say is: the model is stable; from there the consumer will be stable, the component will be stable, and we can start by naming these things one by one. Because, as I said, the translator right now, as it is, depends on the whole Jaeger repo, because we take that model from Jaeger, and I would not keep that dependency in core 1.0. If our intention was to have a slim version, it brings way too many dependencies to us. So there are decisions like this that...

G
Yep, that sounds good. I mean, again, I think the idea was to mark as many components of core stable as possible, so that there is a core release available with these stable components, and then we would iterate on the rest of the components to drive them to a stable state, yeah.

A
But again, I think there is a misunderstanding of what will be declared stable. Let's start with having a marker in the doc.go of every package that we consider stable, and take it from there. At least that will clarify my vision, your vision, others' vision about what will be stable in the next few weeks, couple of weeks.
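A minimal sketch of what such a per-package stability marker in a doc.go could look like; the package name and exact wording are illustrative, and the project may settle on a different convention:

```go
// Package model contains the stable data model used by the collector.
//
// Stability: stable. This package follows the project's versioning policy
// for stable modules; breaking changes are not expected within the same
// major version.
package model
```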
B
Any summary of them as well? Because having to go and parse this information in every doc.go file is going to be a pain.

G
Yeah, but Morgan, again, that's progress; I think we still need to continue, though.

D
Yep, I won't be here next week, but we should pick it up then, yeah.