From YouTube: 2022-05-10 meeting
A
Well, we just kind of wait, Ariel, with that common gem release. That seems to be adamant about not really being released. You retried it just now and it failed.

A
Yeah, the release PR. And then we've fudged the numbers, so I'll edit the actual release PR so that it doesn't just skip a version.

D
I will cross my fingers, as I believe everything we've done up to this point should have worked, only to find something else getting in the way. Great.

A
For reference, for everyone else: me and Ariel have been trying to release the gem for about, what, like 36 hours now.

F
It just sounds like a nightmare. I don't have an answer, but you have my sympathy. It doesn't look fun, and you never completely understand exactly why the process fails. I'm very happy that it looks like you understand why.

D
What was the old saying from that movie Galaxy Quest?
B
Basically, there are a lot of receivers in the OTel collector repo, and they generate quite a bit of data, and none of that data is officially specced. So I think there's kind of a drive to...

B
Get some buy-in, I guess, for semantic conventions for the things that are already being produced in the collector today. As soon as you open a PR trying to formalize what is actually being collected by the collector today, a lot of people have opinions, and the opinion on this is that...

B
Recording metrics specific to nginx might be a little too specific, and perhaps there should be a more general kind of namespace and naming around these that, at bare minimum, could apply to proxies in general, and then all kinds of proxies could use those conventions. So I think that's the direction this is headed. There are also some discussions about, like...

B
Could you even fall back to more general stuff, like OTel system network connections? I feel like if you get that general, it's very hard for a user to figure out what metric to even go looking for to monitor a proxy or other things. So I think there's probably a right level of generality for these things, and these discussions will probably be going on in the spec SIG for the foreseeable future.
F
Yeah, I think the discussion about the level of generality is appropriate, because there are some things that are really not nginx-specific and we can reuse them. But to your point, yeah, how do you even know what to look for? I think that's already a problem today, so I don't have an answer for it, but I agree that if you make it too specific, it's going to make that discoverability problem a lot harder.
B
Yeah, and I see these discussions come up. What I would really like to see is some documentation for two cases. One: I would like to monitor my proxy, and here are the things that proxies produce that you can use to monitor them. And the flip side of that: I am the author of a proxy and I would like to bake OTel in, so what data should I produce? With clear guides for most of the things that you want to monitor.

B
That would be, I guess, a good level of detail, and I think it would be a lot easier if there were kind of namespaces per type of thing that you want to monitor, rather than the grab bags you end up with if you just get too general on everything.
B
All right, this next one is interesting. I'm not totally sure where this is going, but it seems like, basically, we had "must" and "should" more or less as our levels of requirement for attributes in semantic conventions, and this is suggesting that we have four levels of recommended, or four levels of required.

B
I mean, I think the goal behind all this has always been to have as much uniformity as possible between instrumentations, but I think the "should" case is, at least for this author, somewhat ambiguous. Under what circumstances should this happen? That seems to be the driving principle behind this.
F
I will, because I do have thoughts. Real quick, I wanted to ask: is the discussion related at all to the sort of MAY/MUST/SHOULD words that are found in RFCs in general? My understanding was that we sort of did it that way, and I'm wondering if they talked about that at all, or if somehow our requirements just can't be met with that standard RFC language that everyone knows about.

B
I don't think it's necessarily going against the RFC language. It looks like it's still using the RFC language to define these various levels.
B
Yeah, I think there are a number of things that can't necessarily be modeled by traces, although spans do have an event system on them. But I think it raises a bunch of questions about what happens if you're trying to record something outside of a trace.

B
If that happens to be where you find yourself... there's just a variety of things that you would like to model that events are just a better fit for. So I think we should all read through this. There is the log signal, and I think there will be some questions about whether an event is a log or not, and how much those overlap.
F
Yeah, that's good. I'm sure our favorite Honeycomb representative might have thoughts on this topic. I think I was talking to Robert Lauren just the other day about how I kind of wish that OpenTelemetry had an event API, because that's sort of the glorious future we all really wanted: you drive everything from these lower-level event primitives. It's kind of cool to see; I didn't know that was in discussion.
A
I don't think it's gonna eat our lunch. I know that was, well, hopefully, a little bit of a joke. But we're doing some stuff here internally at Shopify, like continuous profiling, and I feel like it's just kind of a different use case than what we're doing. Maybe the cool part would be an evolution of our instrumentation to rely more on eBPF, doing the same things in a different, potentially more efficient way. Maybe.

D
Yeah, somebody submitted an eBPF auto-instrumentation implementation for Go, and that's very interesting.

A
Yeah, there's a repo on GitHub. I think it's just called rbspy, and that looks really promising. I believe it's using eBPF.
F
I think it got donated to the CNCF, but there's a Pixie observability thing, and I think that's all built on eBPF as well. It was kind of magical. So I don't think it's going to eat our lunch really quickly; I think it will be subsumed into OpenTelemetry one way or the other. It's less of an eating of our lunch and more of a "how do we create a new meal together with these additional ingredients?" sort of thing.

B
On to issues more related to the Ruby SIG and repo. Looks like we have a couple of items already on the agenda.

B
Ariel, did you want to give a quick rundown of the stuff here?
D
Yeah, I could use maintainer review for the open PRs in that list. One of those in particular is very sensitive and needs great care: the registry gem one. It'll make it so that the SDK depends on the registry gem and the instrumentation base starts to rely on the registry gem, so everything's probably going to have to be upgraded or released at the same time. That could use a careful set of eyes, or, you know, any objections or concerns.

A
Again, I want us to both go over it. The problem part hasn't jumped out at me yet. We have to make sure it's not a breaking change, right? And it is, and it isn't, and there might be ways to do it that aren't. Anyway, I need to think about the different potential failure modes, in terms of how this could break someone's deployment.

A
You know, I have scanned it a few times, asking "will this break it?", and I was like, no, that might be okay. I need to give it the proper attention it deserves, like you said.
F
I was just going to ask: do we define "breaking change" to include "you must change the name of the gem"? Like, if the actual API and the actual usage of it doesn't change, but you now need to include a different gem, do we consider that a breaking change, or is that okay? I feel like it's a gray area, personally.

A
It is a gray area, but basically, if you had something that worked before and it no longer works, that feels like a breaking change. I'm just trying to take it in the spirit of it, instead of maybe the nitpicky, literal sense of it. It's just: did I do a version bump, and did it break? If the answer is yes, that might have introduced a breaking change.
F
Does that call into question whether or not we have to consult the actual OpenTelemetry stability guidelines at this point? Do they have to guide our answer? Or do we think that, because this is so heavily related to instrumentation, and none of those are stable, we don't actually have to?

A
Yeah, that's where I'm not sure. I know that I could probably form some opinions, but I'm maybe relying too much on Francis's super opinionated opinions on things like this, because usually they're good. So I'm trying to rope him into it a bit, but he's been a bit busy lately.
B
One question that I had: there was an inherited hook on instrumentation base that would register the instrumentation at that point in time. Does that still work in this situation? Does it conditionally register itself depending on whether it can find the registry or not?

D
The base gem depends on the registry gem now. The approach I took was: I duplicated the registry implementation in its own gem. So here are the use-case modes I'm thinking of. If you have an older version of the SDK but you upgrade base, upgrading base will pull in its dependency on registry, and that will not have an impact on the SDK.
D
The instrumentation base gem is going to look at the registry object instance that is being defined, and as each individual instrumentation adds itself, it'll call... The registry is delegating to the base class, so the base class will call the specific registry object instance, which is essentially, for all intents and purposes, like a global variable right now.
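The delegation being described can be sketched in plain Ruby. Every name below is hypothetical (the real gems use different modules and classes); it only illustrates how an `inherited` hook on a base class can funnel each instrumentation, as it is loaded, into a single registry instance that behaves like a global variable:

```ruby
# Hypothetical sketch -- not the actual opentelemetry-registry API.
module OpenTelemetryDemo
  class Registry
    def initialize
      @instrumentations = []
    end

    attr_reader :instrumentations

    def register(klass)
      @instrumentations << klass
    end
  end

  # For all intents and purposes a global variable, as noted above.
  REGISTRY = Registry.new

  class Base
    # Ruby invokes this hook whenever a subclass is defined, so every
    # instrumentation registers itself simply by inheriting from Base.
    def self.inherited(subclass)
      super
      REGISTRY.register(subclass)
    end
  end
end

# Defining an instrumentation is enough to get it registered:
class FaradayInstrumentation < OpenTelemetryDemo::Base; end

p OpenTelemetryDemo::REGISTRY.instrumentations # => [FaradayInstrumentation]
```

Whichever gem owns that `REGISTRY` instance receives every instrumentation, which is why splitting the registry into its own gem forces the SDK and base to agree on a single copy.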
E
I'm trying to keep that in my head, but I was just thinking, and most likely there's an obvious reason why this is not going to work: what if we just renamed the registry class in the new gem, to something like RegistryClassFinal, RegisteredClassFinalFinalPDF? Would that get us out of this situation?

D
So if we renamed it in the new gem but did not rename it in... wait, so when we... so then...

E
From base and from the SDK, instead of using the registry class, we update. We change the code to pull in the new gem, but then, instead of using the registry class, we use, you know, the new, slightly renamed registry. Does that get us out of the pickle? Because you don't have to worry about anything overwriting anything. And then, yeah, I don't know, there's probably an obvious reason why this won't work.
D
So I don't know. I don't want to suck the air out of this meeting just for this PR, because I do think that it requires great care and thinking, and maybe thinking about this synchronously might not be...
D
Nope. And the others are just, like, I got thumbs up from some folks; if I can get a maintainer's eyes on there, that'd be great. And this PR specifically, we talked about at the last get-together, I think. Essentially, instead of keeping the Appraisal files in version control, we would regenerate them as part of the CI flow, and that way we don't have to constantly be manually updating things, because in CI right now...

D
I think it's regenerating them anyway, and it's generating them if they weren't stored, you know, if somebody didn't check them in. So keeping the generated Appraisal files isn't getting us anything right now. And that was that.
A
Cool, I can get that in right away. After we get that release going, then we'll merge this, so we don't add more to the churn. The only thing I would think of, because we talked about this a little bit a while back: we should try to pay attention to CI times, to see if it actually has any meaningful impact there. I suspect no, but it's worth seeing.

D
All right, and then that brings us to... more of a PR, like...
D
This one is specifically trying to address an unstable, non-deterministic test. It changes the way that the test runs: instead of running in multi-threaded mode, it runs in single-threaded mode, running in the main thread, but still using the facilities of concurrent-ruby under the hood.
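The single-threaded-versus-multi-threaded distinction can be illustrated without any gems. These toy executors are not the concurrent-ruby or ActiveJob API: an "inline" executor runs the block in the calling thread, so results are visible to assertions immediately, while a threaded executor needs an explicit join before anything can be safely asserted:

```ruby
# Toy executors (hypothetical names) showing why running work on the
# main thread makes a test deterministic.
class InlineExecutor
  # Runs the work immediately, in the calling (main) thread.
  def post
    yield
  end
end

class ThreadedExecutor
  # Defers the work to a background thread, like an async adapter would.
  def initialize
    @threads = []
  end

  def post(&block)
    @threads << Thread.new(&block)
  end

  def shutdown
    @threads.each(&:join)
  end
end

inline_results = []
InlineExecutor.new.post { inline_results << :done }
# Visible immediately -- no sleeping, polling, or joining required:
p inline_results # => [:done]

threaded = ThreadedExecutor.new
threaded_results = []
threaded.post { threaded_results << :done }
# Without this join, asserting on threaded_results would be a race:
threaded.shutdown
p threaded_results # => [:done]
```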
D
It's just doing everything in the main thread. And then there's one test case that, instead of using the async adapter, uses the inline adapter, which has fewer features than the async adapter. And so this kind of leads into another...
D
The next line item on the actual agenda. And I hate to gloss over this, but it's kind of a "hey, when y'all get a chance, look at it": the recent feeling of instability in our test suite and the release process. It's just been very, very painful to submit a change.

D
So it made things a little bit fragile. Right now we're fighting to try to get the common gem out, because you can't really use some of the features in some of the instrumentations. Tristan brought this up: the common utilities are missing the function to truncate the attributes.
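As a rough illustration of the kind of truncation utility being referenced, one might cap string attribute values like this. The module, method name, and limit here are all made up, not the actual common-gem API:

```ruby
# Illustrative only -- not the real opentelemetry-common interface.
module CommonUtils
  ATTR_LIMIT = 32 # hypothetical maximum attribute length

  def self.truncate_attr_value(value, limit: ATTR_LIMIT)
    # Non-strings and short strings pass through untouched.
    return value unless value.is_a?(String) && value.size > limit

    # Leave room for a three-character marker so the result fits the limit.
    "#{value[0, limit - 3]}..."
  end
end

p CommonUtils.truncate_attr_value("short")         # => "short"
p CommonUtils.truncate_attr_value("x" * 40).length # => 32
```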
F
I'm curious: how is that related to the flakiness and instability of our tests? Not that we shouldn't talk about it, but I feel like there's an implied connection that I didn't completely understand. Is it just that the tests and the release process are flaky, and we have released things incorrectly, causing the breakage? Is that maybe what you're saying?

D
I would say that we have several different problems. One problem is that the release process is not validating that its dependencies are working and that, once it's released, it doesn't cause anything to break, whether that's the common gem or the SDK or whatever. The second problem is that, because we have an unknown state, the test suite itself is non-deterministic.
D
We run into problems with trying to release things, and sometimes things break. Me personally, I felt like I kind of missed things, because I'm trying to make seven different parts of the release process work right: this non-deterministic build, this thing missing a tag, this changelog not being right. It's just so many moving parts.

F
Got it. To your point, well, to several of your points: the PR specifically for Active Job, I think that just needs a maintainer stamp as well, since I was the one who inflicted that on the world in the first place. I agree with the solution. It's important that we test the different adapters, but inverting it the way you did is more stable and the right approach: we don't have to run the majority of the test cases through the complicated, fancy paths; we can do different things.
F
I think that'll help a lot. There are a couple of other flaky ones; I know the Que adapter has an issue open, et cetera. On the general stuff about the release process being a little difficult and our test suite being flaky and long-running and just annoying: Sam and I have both taken stabs at fixing that and haven't come up with great answers.

F
I know at the end of the last meeting I said I would work on it. The root of the problem, from my perspective, is that there's a lot of complexity in the number of things we are building and releasing and the number of platforms we're testing against, just as a matter of simple math.
F
If you take just Linux, the four versions of Ruby we're testing, and the 52 gems that we're currently releasing from the monorepo, I think it's several hundred builds. It's just a lot, and GitHub Actions makes that difficult.

F
And on top of that, there's installing multiple Ruby versions on the Actions runners. There are different ways you can chunk it up, but some of them are more or less possible. We also do this very automatic, magical detection of what gems should be tested: there's Ruby code that goes out and looks through the file system to find them, and you can replicate that in various ways.
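That file-system detection might look something like the following. This is a self-contained sketch against a throwaway directory, with made-up paths, not the actual repo tooling:

```ruby
require "tmpdir"
require "fileutils"

# Build a throwaway repo layout with a few gemspecs in it
# (the directory names here are illustrative, not the real monorepo).
discovered = Dir.mktmpdir do |root|
  %w[instrumentation/faraday instrumentation/rack sdk].each do |dir|
    FileUtils.mkdir_p(File.join(root, dir))
    File.write(File.join(root, dir, "#{File.basename(dir)}.gemspec"), "")
  end

  # The "magic": walk the tree for gemspecs to decide what to test.
  Dir.glob(File.join(root, "**", "*.gemspec"))
     .map { |path| File.basename(path, ".gemspec") }
     .sort
end

p discovered # => ["faraday", "rack", "sdk"]
```

The pain point described above is that this dynamic discovery has no direct equivalent inside a static GitHub Actions matrix, hence the proposal to spell the job list out in YAML instead.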
F
But it's just kind of gross, so I hit a dead end with it, because I didn't have a great answer to all the complexity. I think Sam may have hit a dead end as well, for similar reasons. I was talking with Ariel this morning again, and I think the answer is just that we ditch some of the magic. We write lots of YAML to actually specify all the jobs we want and pick the right boxes, and then that opens up ways of chunking it up that are hard to express dynamically.

F
Today I started another branch where I'm trying to fix this again, and I think my current approach is going to be to chunk it up by the gem that we're releasing. So there are 52 different matrix builds, and in each matrix build there's a different Ruby. We'll have a lot of top-level builds, and that should at least mean that if, say, just the Active Job adapter is the one that fails...
F
...you don't have to rerun the entire test suite again to make it pass. We can make those failures bubble up, make them visible, make it easier to rerun the ones that are flaky and failing, and then we can start having nicer things, like annotations on where failures were found, et cetera. And then we can figure out what to do with our release process next.

F
It's possible; it's just kind of hairy, and there aren't a lot of great answers. The current direction is just to accept a YAML explosion as the necessary cost to fix it. If there are any big disagreements on YAML explosions, please say so, because that sort of constrains which way we fix it, or don't fix it, if that makes sense.
D
Sure. One thing I'd like to do is offer up a request for a change in behavior: if we run into a test that is non-deterministic, we open an issue for it and take a crack at trying to fix it. At least, that's what I'm trying to do, and I'd like to try to socialize that behavior. And if you didn't write the instrumentation, reach out to the person that introduced the test. That was successful so far.

D
I can't remember, isn't it Chris, Chris Holmes? Yes, Chris came back and was like, "yeah, I've got a fix for this." So it's socializing keeping these things stable. And I think, combined with all these weird things that happened this week, you know, the Rails release with Delayed Job causing a problem, and...
F
I think it's made us feel worse about it than maybe it actually truly is, because other things broke at the same time. But there is still a problem. To your point about socializing the right behavior: I have personally not investigated flaky tests before, because I had a general feeling that the test suite just is kind of janky anyway, and it fails for reasons that are not always obvious and not always code-related. I think that was incorrect, but I allowed myself to believe it because the test suite was so difficult to parse. So hopefully, changing the way the builds are run will make it a little bit harder to lie to myself in the future.

F
If that makes sense. Anyway, I generally support the idea that we should actually be fixing these tests, and not relying on this stuff.
D
I hope my comments are being received well, because they're coming from a good place. I'm not at all intending to disparage any of the hard work people put into everything here; I'm trying to be supportive of the work. So I'm gonna be quiet.

F
I agree, though. I received your comments well, personally. I did not take offense at the fact that you were fixing one of my flaky tests; I was actually thrilled and delighted. I think it's good for the health of the project, and I'm happy that you're bringing this stuff up.

F
Despite our best efforts, it seems like OpenTelemetry Ruby is being widely used and appreciated, and that's good. So we have a responsibility to do it right, and I think that you are helping us out there a lot. So I personally think your comments and your ideas are wonderful.
B
Thumbs up. I think we want a test suite that is reliable and that fails only when we have problems. We all write flaky tests, because they passed when we ran them, and that was good enough. So nobody should feel bad for having introduced a flaky test. But have a policy of creating an issue for it, and then whoever has the time and energy to fix it, go ahead and fix it.

B
Nobody should feel bad about any part of that process. The only people who aren't going to be committing flaky tests are the people who aren't contributing. So I think that's the way things will generally be working.
E
Yes, first of all: Ariel, you're MVP, you're killing it. Thank you for fixing all the broken stuff. Andrew, I was just going to say I do not at all mind a YAML explosion. I waded into the Toys stuff, you know, to implement the CLI stuff, and it was hard to parse, and I've spent a lot of time in and around YAML.

E
So I often would rather have it copied and pasted from one scenario to the next, rather than dynamically generated, just because it's easier to make one-off exceptions. So I would definitely welcome that change. Okay.
H
I also propose that, if one of the benefits of Toys is being able to run segregated tests for a particular gem ("I only want to run this one gem's tests," or whatever have you), and the GitHub Actions workflow YAML sufficiently segregates all that stuff, there's the act utility that will run GitHub Actions on your laptop. So you can use act to run just that one thing, and it'll parse the YAML appropriately, if it's got to.
F
Actually, it's slightly more viable than shipping your laptop, but only slightly. Anyway, cool. Yeah, I will try to deliver something in the next day or so about it. I got really close a few days ago, and I just couldn't convince myself that what I had built was actually better, but I did do something different. So, I don't know, let's put the answers out there; we'll get there.

F
That's okay. Like I said, I like listening to your microphone; it's nice. I think I have the next item on the agenda, which is: should we update the proto? I know that Rob is working on metrics, and I had this horrible harebrained idea that maybe I would pick up my logs work again. As part of that, I looked into it and realized that the proto definitions have stabilized quite a bit, at least in theory.
F
They should be backwards compatible, although I don't know how often that actually works in practice. So we could update it, and there are some changes that I think would apply to Rob specifically. I wanted to see what people thought.
H
Our particular issue is that our ingestion doesn't look for field 1000, because of the not-backwards-compatible breakage; that's a whole thing. If you send us the new metric type, where there's only one type of metric, not a double and an int version of each of the histograms, gauges, and whatnot, we're good. And if, at the time that we move to the version of the proto that changes the name of instrumentation library to scope, we also update the code to use "scope" instead of the name "instrumentation library," it will stay on, like, field four in that bit of the protobuf.
H
There are old fields; there are layers of backwards compatibility. At a code-level API, you could choose not to update the code to use the new name for the field, but that would mean that you're publishing instrumentation library spans on, like, field 1000, which old collectors don't know about, right?

H
Yes, your code will compile, but sending telemetry through old receivers won't work, because they won't know to look for spans on field 1000. So when you update to the proto that takes instrumentation library and renames it to scope, also update the code to call it scope, and then the telemetry you emit will go to old receivers.
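The field-number point can be illustrated with a toy model. This is not the real OTLP schema, just a hash keyed the way protobuf wire messages are keyed: by field number rather than by field name:

```ruby
# Toy model of protobuf wire compatibility (hypothetical numbers).
OLD_FIELD = 4    # where old receivers expect span payloads
NEW_FIELD = 1000 # a new number that old receivers never read

old_receiver_read = ->(message) { message[OLD_FIELD] }

# Renaming the field (instrumentation_library -> scope) keeps the same
# number, so old receivers still find the payload:
renamed_message = { OLD_FIELD => ["span-a"] }
p old_receiver_read.call(renamed_message) # => ["span-a"]

# Publishing on a brand-new field number is invisible to them:
moved_message = { NEW_FIELD => ["span-a"] }
p old_receiver_read.call(moved_message) # => nil
```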
F
Okay, I might take a stab at that. I was able to regenerate the Ruby protobuf definitions, and it didn't actually look terrible to do the upgrade, other than the renaming of instrumentation library and stuff, so I think I could work on that. Yeah, it's hack week at Shopify, and of course I have tons of meetings today, but what I settled on as a project was housekeeping around the repo.
F
So I might try that. Related to that, I've been going through the spec and seeing what we have to do to keep up to date. We're not actually that far behind; I think we're compatible with something like 1.4 of the tracing spec, so not as bad as you would think. I also opened the PR for schema URL support, if you want to take a look at that. In that PR, I did not update any of our instrumentation to use the schema URL, because that would maybe cause potential issues. Also, it seemed really difficult, and I just didn't feel like doing it. But it adds support to the Resource class and emits a schema URL, if present, in the OTLP messages.
F
There are a couple of design trade-offs in there. Notably, the spec people seem to have thrown up their hands and said, "this question is unresolved; it is implementation-dependent." So I picked a way, and anyone who is interested in how resources should be merged with incompatible schema URLs should take a look. Basically, I also threw up my hands and said: forget the schema URL, merge it like a hash, and see what happens.
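The "merge it like a hash" choice might look roughly like this. The class and API below are illustrative stand-ins only, not the real OpenTelemetry Resource implementation:

```ruby
# Illustrative Resource stand-in (hypothetical names throughout).
class DemoResource
  attr_reader :attributes, :schema_url

  def initialize(attributes, schema_url: nil)
    @attributes = attributes
    @schema_url = schema_url
  end

  # "Merge it like a hash": the other resource's keys win on conflict,
  # and the unresolved schema-URL question is simply sidestepped.
  def merge(other)
    DemoResource.new(attributes.merge(other.attributes),
                     schema_url: schema_url || other.schema_url)
  end
end

a = DemoResource.new({ "service.name" => "checkout" },
                     schema_url: "https://example.com/1.4")
b = DemoResource.new({ "service.name" => "cart", "host.name" => "web-1" })

merged = a.merge(b)
p merged.attributes # => {"service.name"=>"cart", "host.name"=>"web-1"}
```

Other implementations make different choices here (some refuse to merge resources whose schema URLs conflict), which is exactly the trade-off being flagged for review.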
F
That may not be the right choice. Other repos do different things; there doesn't seem to be consensus that I know of. So if people want to take a look at that and give me feedback, I'd appreciate it.

F
I'm sorry I didn't add that to the agenda. I was updating the agenda on my phone, with the dog, and there was too much copying and pasting, and the dog was trying to make friends with people, and I just said I'll just talk about it instead of, you know, the agenda. But yeah.
H
Happy to even open that one, if there's not already an open issue tracking that rename. I'm happy to go add that note there, too. I was writing it somewhere so it was written down and not just talked about, and I'm happy to go write those words somewhere else too. I'll go looking for it.

B
We'd have an issue; we could maybe even have it assigned. But if it doesn't happen, we're all here; we know where the stuff is.
D
Rob, as a person who's intimately familiar with that, do you have any capacity that you might be able to help contribute to getting that out as well? Or maybe to give... or we're closing...

H
I'll get you some emails. Anyway, I'll see what I can do, but in this meeting I can't commit to that. I'd like to, but their work queue is stacked.
H
I can, if you send through... we've got copious notes about which version of the collector jacks us up, which versions of what. Yeah, we could certainly set something up so that if an updated OTel Ruby library emits to us and we receive spans, we're good. So I can absolutely do some integration tests with that.
A
I've been pushing through some of the work on splitting out some of the OTLP stuff. There's a new gem; it hasn't been published yet. I want to sneakily deploy it here internally, in one of our staging or production-like environments...

A
...to make sure it's good. But the OTLP exporter is going to get deprecated; we're going to push people to use the new thing, which has the encoding work extracted. You can see there's a PR, the one you had highlighted; that was some follow-on from that work, a byproduct of it. I wanted to see what it looks like when you actually split these things apart and how it would interact with another protocol.
A
So I left a "changes requested" message on this PR. Basically, it just says that I'm not going to publish this gem until someone actually works on it, because I would be afraid that someone would see "oh, there's OTLP gRPC, I would love to use that," and then deploy it without actually seeing that it has no retry mechanism and emits no metrics.

A
It literally just assumes it works, says success, and then moves on. In the very trivial case it does work, like it can send, but it doesn't handle failures or anything like that. So this is very much scoped to the level of time I can contribute to it, but I want it to be there so someone can pick it up.
A
At this point, it literally just needs someone to battle-test it and add some of the environment support to it; someone who, ideally, is actually familiar with gRPC, because I'm not. So if anybody encounters someone in the wild who would like this, this is a perfect place to pick up and contribute to this repo. We just don't have a use case for it internally, so I can't really justify spending time on it.
F
I think the spec SIG decided, actually partially based on our feedback, that the default required OTLP implementation for libraries should be OTLP over HTTP, right?

A
Yep. The language says it can be implementation-specific, but we have to support at least one, and we went with OTLP over HTTP. But we can support this one; we just need someone to pick it up.
H
Yeah, our experience from the vendor side is that OTLP over HTTP is sort of the easiest to deal with in the widest variety of environments. Not everybody's corporate outgoing proxy can handle gRPC, and it does all sorts of weird translation shenanigans. HTTP also load-balances better, because not all load balancers know how to deal with the sticky connections that HTTP/2 creates. So it's our recommendation, too, to start with OTLP over HTTP...

H
...if the language implementation supports it. When I get some time, I'm game to try to battle-test this, since we've been having to deal with OTLP gRPC issues across the languages.
H
Do we do runtime interpretation of proto files, or are we vendoring in the generated protobuf things for gRPC to know what its wire protocol is?

A
Yeah, we're kind of vendoring it in. So, like Andrew was saying, if he wanted to potentially update them, he would regenerate them and commit them. There's a gem specifically for it, the OTLP exporter common gem; that's going to be the home for this, so that we have one place, and you don't have to update it in both the gRPC and the HTTP exporters, and eventually it'll move with the proto.
A
Again, I invite people to give it a ceremonial green check mark. I will be slightly big-headed if someone's like, "oh, do you want to change this code?" I'll be like, "no," because...

A
Yeah, it'll be more like, "that's a great idea, feel free to do it." I just want to get this boilerplate placeholder in place, so that people who do care...
A
Right, until someone has taken ownership. At that point, I'm happy to support reviewing, going over it, and seeing if there's parity with the HTTP one where it makes sense. I'm not totally washing my hands of this; I just don't want to publish it and hurt a bunch of people unknowingly.

H
That's totally your prediction of how people will react to its presence. I think it's a reasonable prediction.
A
I have some stuff I can follow up on next meeting, but some of the extraction work is going to lead the way to support for metric signals. All of this is very much with two goals in mind: making sure that there's a place for metric stuff in our exporter world, and also so we can actually get the OTLP exporter to a stable 1.0.

A
That was what really kicked all this off. So it's not published yet; again, I'll battle-test it for us internally before I put it out on the world. Then, once we're happy with it, we're gonna update the actual SDK to look for this new version and fall back to the other one.