From YouTube: 2020-12-22 meeting
B
Yeah, caught up with the gutter yet? Let's see. Okay.

A
What kind of cats are they?

C
They're Abyssinians. I don't know, they're supposed to be kind of like the original Egyptian cat, yeah. This cat in my picture is Cosmo. He was, yeah, he was my first Abby. He lived to be 18 years old, but he passed away this summer. So these guys are the next generation.

A
I know someone who has those hairless cats, and I always... I'm not much for cats, just because I never grew up with them. I always thought they were kind of weird looking, but when you're around them in person they're not that weird, surprisingly. They're kind of fuzzy and they act like nimble dogs. It's kind of cool.

A
Just before we get started: we will need it in the latter half, but I want to make sure we carve out a little bit of time just to go over the Kafka PR. There's some comments that I've left unaddressed, but I think we should probably just talk over it in person, and then I can leave comments after the fact for posterity.

C
Yeah, that sounds good. I think the spec SIG should be pretty quick, and stop me if it's not, but yeah. There was just kind of a very long discussion around versioning and stability, with no real resolution. Mainly: it's pretty clear when something has changed or been broken in, like, the API or the SDK, but it becomes a little bit more nebulous...

C
...when it comes to instrumentation. I think there was a lot of discussion about whether adding an attribute is something that is actually okay, or whether it's going to break backends. Coming from the tracing perspective, I think that stuff should totally be fine, but I think the thing going through some people's minds is metrics, where adding a label actually kind of changes the metric, technically. I think it maybe depends on your backend, but, you know.

C
In most cases, I believe a metric should be identified by the set of labels on it. So by adding a label, you have now kind of created a new time series, and depending on what sort of filtering rules or alerting rules you have, that might not work out super well with your backend.
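The point being made about labels changing a metric's identity can be sketched in plain Ruby. This is a toy model, not any particular metrics SDK: a backend that keys each time series by metric name plus its full label set, so recording with one extra label lands in a brand-new series.

```ruby
# Toy model: a backend that keys each time series by (name, labels).
# Not any real metrics SDK; just illustrating why adding a label
# creates a new time series.
store = Hash.new(0)

record = ->(name, labels, value) { store[[name, labels.sort.to_h]] += value }

record.call('http.requests', { method: 'GET' }, 1)
record.call('http.requests', { method: 'GET' }, 1)

# Same metric name, but one extra label: a new series. The old series
# keeps its own count, and any alert pinned to it sees a "drop".
record.call('http.requests', { method: 'GET', region: 'us' }, 1)

store.size # two distinct series for the same metric name
```

Any alerting or filtering rule written against the old label set keeps matching only the old series, which is exactly the compatibility concern raised here.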
C
Does
that
not
match
your
understanding,
francis?
I
know
it
seems
like
you.
B
I
mean
I'm
just
thinking
about
how
we
do
metrics
we're
using
statsd
or
bug
statistic
actually,
and
I
think
adding
labels
generally
doesn't
break
anything.
Removing
labels
certainly
will
break
things
but
yeah.
If
you
add
a
label-
and
it's
not
actually
used
anywhere,
then
it's
effectively
going
to
be
ignored
when
people
are
like
the
label
itself
will
be
ignored.
The
metric
value
will
still
be
valid
with
all
the
other
labels
that
it
has.
B
B
That
could
be
problematic
as
a
performance
concern,
but
I'd
be
surprised
if
it
was
problematic
in
a
you
know,
as
a
correctness
thing
or
a
compatibility
thing.
C
Yeah,
I
think,
ultimately,
the
the
the
resolution
that
we
did
come
to
is
that
there
needs
to
be
some
kind
of
rules
for
what
can
change,
and
it
doesn't
necessarily
have
to
mean
that
I
don't
know.
I
think
there
was
some
kind
of
discussion
where,
like
they
don't
have
to
fit
a
vendor's
schemes
today,
but
whatever
the
rules
are,
they
need
to
be
reasonable
and
people
may
need
to
adapt
to
the
rules
depending
on
on
how
they
shake
out
but
yeah.
I
think
I
think
that
that
was
like
the
core.
C
The
core
thing
is
just
like:
how
mainly
do
we
communicate
changes
to
the
telemetry
data
kind
of
coming
out
of
the
of
the
sdks,
and
it's
more
for
the
benefit
of
like
the
tracing
back
end,
to
make
sure
that
things
that
it
relies
on
will
not
kind
of
disappear.
C
You
know
mainly
disappear
and
break
things,
because
I
think
a
lot
of
people
have
like
dashboards
and
alerts
on
on
some
of
this
data,
and
you
don't
want
to
just
kind
of
have
to
remake
those
or
babysit
those
every
time
you
kind
of
upgrade.
It's
like
you
need
to
have
a
way
to
know
you
know
going
in
to
to
an
upgrade
that
things
could.
C
Change
so
yeah.
I
think
I
think
there
will
be
some
more
discussions
around
that,
but
that
took
up
a
lot
of
the
meeting
and
then
the
thing
that
took
pretty
much
the
rest
of
the
meeting
was
this
discussion
about
how
to
make
service
name
mandatory
or
not
in
the
resource,
and
I
think.
C
In
terms
of
how
they're
trying
to
ensure
that
there's
a
fallback
for
resource
or
for
service
name
in
the
resource
like,
I
really
think
I
think
the
way
it
works
in
ruby
and
the
way
it
should
probably
work
everywhere
is
that
you
just
should
have
this
like
default
resource.
That
has
like
some
some
required
attributes
with
some
default
values
and
then,
whatever
the
user
wants,
you
just
merge
on
top
of
that
and
if
they
provided
something
it
will
wipe
out
whatever
default
you
had
set.
C
But
that
would
be
the
way
to
achieve
this,
but
I
think
I
think
the
problem
is
that
different
people
have
approaches
from
different
angles
and
I
think
we're
kind
of
approaching
it
from
the
angle
of
like
you
kind
of
have
this
default
resource.
Probably
I
would
have
to
look
back
at
the
code,
but
when
you
initialize
a
tracer
provider-
and
I
think
we
merge
everything
on
top
of
that-
I
think
there
was
some
desire
for
some
of
the
people
involved.
C
C
I'm
less
sure
that
that
is
really
like
a
valid
use
case,
but
there
was
a
lot
of
talk
about
this.
I
think.
C
I'm
not
sure
what
the
end
result
is.
There
is
going
to
be
a
write-up
of
options
I
think
on
1294,
but
I
think,
as
they
were
kind
of
discussing
the
end,
they
were
saying
that
kind
of
the
main
reason
for
all
of
this
is
due
to
this
12
37,
that
the
jaeger
exporter
requires
a
service
name,
and
they
just
wanted
to
make
sure
that
it
would
come
from
a
that.
It
would
be
available
in
the
resource
more
or
less.
B
A
Just
going
to
check
that
in
the
configurator
we
start
with
like
a
default
resource
that
is
an
empty.
It
contains,
like
the
telemetry
sdk
resource
name,
language
inversion.
So
it's
like
open
cloud
tree,
ruby
and
the
version
of
the
sdk,
and
so
it
creates
that,
like
in
the
configurator
that
starts
with
that
by
default,
and
then
you
can
merge
anything
you
want.
On
top
of
that.
C
A
Potentially
like,
if
it
comes
out
that
it
is
required
that
one
is
set
by
default
like
it,
doesn't
seem
unreasonable
that
we
just
create
in
that
resource
and
that
default
when
we
set
a
service
name,
just
something
generic
like
unknown,
and
then
we
still
keep
our
exposing
like
the
service
name,
the
configurator,
and
that
would
just
overwrite
it
so
by
default,
you're
unknown,
and
we
provide
an
easy
way
to
supply
it.
A
proper
name.
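The fallback scheme being described can be sketched with plain hashes. The attribute names follow the OpenTelemetry semantic conventions, but the merge behavior here is a simplification of what the configurator does, not the SDK's actual code, and the version string is a made-up placeholder.

```ruby
# Simplified model of the configurator's default-resource idea:
# start from defaults (telemetry SDK info plus a generic service.name),
# then merge user-supplied attributes on top, so user values win.
DEFAULT_RESOURCE = {
  'telemetry.sdk.name'     => 'opentelemetry',
  'telemetry.sdk.language' => 'ruby',
  'telemetry.sdk.version'  => '0.0.0', # placeholder, not a real version
  'service.name'           => 'unknown'
}.freeze

def effective_resource(user_attributes = {})
  DEFAULT_RESOURCE.merge(user_attributes)
end

effective_resource                               # service.name stays "unknown"
effective_resource('service.name' => 'checkout') # user value wipes out the default
```

Because `Hash#merge` lets the argument win on key collisions, a user-supplied `service.name` overwrites the generic fallback, which is the "merge on top and wipe out the default" behavior described above.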
B
There's a whole bunch... like, there's a paragraph near the top of "Document Conventions" that is incredibly wishy-washy, right? It's just like: "Certain attribute groups in this document have a Required column. For these groups, if any attribute from the particular group is present in the resource, then all attributes marked as Required must also be present. However, it's valid if the entire attribute group is omitted, even if some are marked as required." So...

A
Always things like that. It was making me think: I don't know if either of you have ever watched much Simpsons in your life, but there's an episode where Bart wants to get a pocket knife, and they give him a manual that says "don't do what Donny Don't does", and it just shows the kid doing horrible things, and him sitting there looking so confused.

C
Yeah. So I feel like there should be a default resource for the telemetry SDK, to be honest, that just identifies what the thing reporting is. I feel like every backend wants this. I don't know of any backend that doesn't want to know where the data is coming from.

C
And then, if you want to change that, you can change it, but at a minimum, out of the box, it's going to report what it is. But yeah, I don't know. I guess, as I say that, one of the things that people are probably going to want is minimizing the amount of resource data that you need to send with everything.

C
I don't know; see if what we're currently doing is a valid and reasonable use case. I'd have to check, but I think JavaScript is doing the same thing. And I think, for the most part, just thinking about Lightstep, it would be a problem for us if the SDKs did not report what language and version they were. I think we also kind of rely on that.

B
Yeah, the interesting thing is, Go provides a bunch of default detectors, but the resource detection mechanism has options to, say, turn off the defaults. So I think, conceivably, you can force it to give you an empty resource, which is a little weird.

C
On one hand, in theory, I understand the desire to be able to have an empty thing, just because it's useful in theory if you wanted nothing, or just wanted to build something up from scratch. But in practice, I do think there's a minimum set of data that everybody's probably actually going to want, and if we can agree on that and just have a default resource that has it, that would really simplify everything here.

C
Because I just feel like everything with resources becomes a huge can of worms when you start to factor in that you can have a bunch of detectors, they can run asynchronously, they can fail, and it has a lot of implications for trying to resolve those at export time, and for what happens in the multitude of edge cases that can show up in that whole process. So, I know. Yeah.

C
Yeah, I'm with you there. I might try to spelunk the history on that file and see if there was ever some stronger language saying that that telemetry resource was required. And if so, I might just bring that up and say that I think that's a really good idea, and what if we also made service name something that is required and has a fallback, so that without user intervention you will have this bare-minimum resource that you always have.

C
Yeah, and I feel like, for the... yeah, I will not complete that thought. I was going to say: well, okay, completing it: if you had some really weird use cases where you wanted to remove some of these labels, I feel like you could achieve that somehow in a span processor or something along those lines. The end.
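That last idea, stripping unwanted attributes in a span processor rather than in the SDK defaults, might look roughly like this. The processor below is a toy with stand-in span and exporter objects; the real OpenTelemetry Ruby processor interface (on_start/on_finish and friends) differs in detail, and the class and attribute names are invented for illustration.

```ruby
# Toy span processor that strips a denylist of attributes before
# handing the span to the next stage. Only a sketch of where such
# filtering could live, not the actual SDK interface.
class AttributeStrippingProcessor
  def initialize(next_processor, denylist)
    @next = next_processor
    @denylist = denylist
  end

  def on_finish(span)
    @denylist.each { |key| span.attributes.delete(key) }
    @next.on_finish(span)
  end
end

# Minimal stand-ins for a finished span and a terminal exporter.
Span = Struct.new(:name, :attributes)
collected = []
exporter = Object.new
exporter.define_singleton_method(:on_finish) { |span| collected << span }

processor = AttributeStrippingProcessor.new(exporter, ['telemetry.sdk.version'])
processor.on_finish(
  Span.new('job', { 'telemetry.sdk.version' => '0.0.0', 'service.name' => 'worker' })
)
```

The upside of this shape is that the SDK can keep a sensible default resource while a user with genuinely weird requirements filters it at export time instead.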
C
And
I
ducked
out
for
the
last
10
minutes,
so
it
was
the
issue
triage.
C
I
don't
totally
know
what
happened,
but
I
think
that
probably
not
much.
C
C
A
There's
two
points
I
guess
to
like
bring
up,
and
also
one
of
your
comments
are
yeah.
One
of
your
comments
I
left
unanswered
because
I
kind
of
thought
about
it
a
bit
and
it
was
regarding
the
the
use
or
lack
of
use
of
active
support.
Notifications
like
hooking
on
the
start
and
end.
A
I
hated
that
I
didn't
document
it
because
I
don't
remember
exactly
what
the
reason
was
for
now,
but
in
the
producer
and
consumer
or
specifically
in
the
consumer,
I
was
running
into
an
issue
with
the
active
support
notification
so
doing
the
patch
seemed
more
reliable,
more
predictable,
but
just
to
like
touch
on
the
point
of
like
all
the
active
support
notifications
that
we
have
right
now.
C
Yeah,
so
is:
are
you
saying
that
we're
actually
generating
more
spans
than
possibly
necessary?
According
to
the
messaging
spec.
A
Yeah,
so
these
are
the
things
like
any
of
the
things
like
in
the
events
folder.
I
have
like
sync
group
leave
group
join
group
heartbeat,
don't
believe
those
are
actually
specified
in
the
spec.
A
That's
was
more
or
less
just
a
copy
of
what
datadog
had,
because
that
was
like.
That
was
my
starting
point,
as
I
tried
to
copy
data
dog
as
closely
as
possible,
but
like
for
my
like,
I
don't
have
a
strong
opinion
on
these
things.
Like
I
don't
know
what
value
they
provide
to
someone
who's.
Instrumenting
kafka,
like
I,
don't
know
how
much
someone
cares
about
it,
but,
more
importantly,
like
I'm
coming
from
the
angle
that
I'd
like
to
kind
of
unblock
this
pr
a
little
bit.
C
A
That
could
be
a
separate
pr
that
can
have
its
own
conversation
around
these
spans
or
whether
or
not
they
should
even
be
spams,
or
maybe
they
should
be.
Events
as
francis
has
suggested
that,
like
they
could
be
events.
C
Yeah,
if
all
right
so
a
couple
things
if
these
are
blocking
this
pr,
we
can
definitely
get
rid
of
them.
If,
if
they're
not
so
much
like
a
problem,
then
I'm
okay
either
way,
I
don't
have
strong
opinions
as
to
whether
they
should
stay
or
not.
I
don't
know
if
if
eric
would,
but
I
think
you
know
from
from
what
I
know
you
know
the
produce
and
consumes
are
kind
of
like
the
things
that
most
people
are
going
to
be
worried
about,
and
I
feel
like.
B
In
my
experience,
producing
consumes
are
the
things
that
are
important.
The
all
these
events
like
in
our
instrumentation
shopify
we've.
We
have
not
instrumented
any
of
these
events.
B
A
bunch
of
these
events
are
just
events
they're
not
like
they're,
just
a
notification,
there's
no
block
associated
with
it,
so
it
really
is
just
a
time
stamp.
It's
not
a
duration,
so
we
kind
of
need
to
go
through
them
one
by
one
and
say
you
know,
is
this
something
that
makes
sense
as
a
span
or
does
it
make
sense
as
an
event?
B
It's
not
really
clear
what
they
what
they
represent
or
what
the
parent
span
would
be,
or
what
the
span
would
be
if
they're
they're
treated
as
an
event,
so
I
feel
like
we
need
to
do
a
lot
more
detailed
digging
before
we
commit
to
something
here,
which
is
why
I
was
suggesting.
Maybe
we
can
just
yank
them
out
of
this
pr
and
have
a
follow-up
pr
where
we
address
them,
possibly
with
input.
B
Well,
sorry,
certainly
with
input
from
eric
about
how
datadog
uses
these
things,
whether
they
require
them
to
be
spans,
whether
they
can
handle
them
as
events,
whether
you
know
they
have
customers
that
are
depending
on
them,
that
sort
of
thing.
C
B
Not
not
all
of
them,
though,
like
some
are
blocks,
but
some
are
not
so.
C
C
However,
the
instrumentation
falls
out
that
there
would
be
some
sort
of
current
span
that
was
hopefully
kafka
like
or
something
generated
by
the
kafka
instrumentation
to
add
an
event
to,
and
you
know
have
it
not
be
like
the
you
know,
your
sidekicks
band,
but
I
don't
know,
maybe
you
do
want
it
to
be
there.
I
don't
know,
I
guess,
there's
a
lot
more.
B
To
think
about
in
terms
of
yeah,
certainly
on
the
consume
side,
there's
another
kind
of
bigger
question
about.
Some
of
these
events
correspond
to
things
that
happened
during
the
consumer
loop,
but
outside
of
the
context
of
processing
a
message
and.
B
We
like,
in
that
case,
you
generally
wouldn't
have
a
span,
but
maybe
we
want
to
have
a
span
that
is
traced
separately
from
the
processing
of
a
message.
B
It's
a
it's,
a
weird
kind
of
thing
that
has
come
up
for
us
at
shopify
in
a
different
context,
so
for
rescue
job
workers,
the
team
that
is
responsible
for
the
background
job
infrastructure
wants
to
be
able
to
use
tracing
to
do
performance
analysis
of
the
job
worker
loop,
but
those
traces
should
not
be
associated
with
the
spans
that
are
created
for
job
processing
itself,
like
for
the
for
an
actual
job
being
processed.
A
Yeah,
it's
like
this.
This
idea
that
your
friends
had
a
long
conversation
conversation
about
there's
like
tracing
the
work.
That's
done
from
like
the
application,
you
code,
you
write
and
then
there's
like
the
framework
kind
of
busy
work.
That
happens
in
the
background
that
you
may
not
necessarily
want
to
clutter
up
your
your
your
back
end
like
you
might
like,
or
your
ui
like
you
might
not
care
about,
like
the
10
million
worker
loops
fans,
but
you
make
like
a
team,
might
care
about
it.
C
No,
I
I
totally
get
your
comments
I
know
like
with
rescue
in.
In
particular,
it's
like
you
end
up
with
just
like
a
bajillion
like
redis
bl
pop
spans
or
something
which
is
like
the
yeah.
It's
kind
of
rescue
popping
a
job
off
the
queue.
You
really
don't
care
about
this,
and,
in
fact
this
is
something
that
you
probably
would
want
to
try
to
disable
like
if
we
yeah
suppress
fans,
I
feel
like
that's
a
good
candidate
for
suppressing
the
rescue
instrumentation,
when,
if
we
write
it.
B
Yeah,
that's
true
the
the
way
we've
dealt
with
that
is
that
there
we
have
fairly
aggressive
head
sampling.
I
mean
we
want
it.
It
happened
so
frequently
that
if
anything
interesting
is
happening
there,
then
it's
going
to
surface
even
with
heavy
sampling,
but
that's
an
interesting
case
where
we
actually
need
different
sample
rates
for
request
spans
versus
this
kind
of
framework
span.
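A per-span-kind sample rate like the one described here could be sketched as a name-based head sampler. This is toy logic with made-up span names and rates; real OpenTelemetry samplers decide on more context than just the name, so treat this as a shape, not an implementation.

```ruby
# Toy head sampler: framework busy-work spans (e.g. a worker loop)
# get an aggressive rate, everything else keeps a higher default.
# Span names and rates are invented for illustration.
class NameBasedSampler
  def initialize(rates, default_rate:)
    @rates = rates
    @default_rate = default_rate
  end

  def sample?(span_name, rng: Random.new)
    rng.rand < @rates.fetch(span_name, @default_rate)
  end
end

sampler = NameBasedSampler.new(
  { 'resque.worker_loop' => 0.001 }, # keep roughly 0.1% of loop spans
  default_rate: 0.5                  # keep roughly half of everything else
)

sampler.sample?('resque.worker_loop')
sampler.sample?('http.request')
```

The tradeoff Francis mentions still applies: anything genuinely interesting in a high-frequency loop will eventually surface even at a very low rate, because the loop runs so often.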
B
But I think it's actually interesting for our team to do performance analysis of the export pipeline using tracing. It leads to confusion, though, for people who turn on tracing for their brand-new service for the first time: they're not really getting any traffic, but they're generating a whole ton of spans that will say, you know, "otlp exporter".

C
Yeah, yeah. I know, working for vendors, we really like to turn that stuff off because it confuses customers. But as somebody who's maintaining the observability pipeline, that is actually kind of useful information, and I would want to see it, to be honest. So I think that turns up, yeah, this other kind of edge case where the default behavior is probably that you don't want spans for that stuff, but there are situations, and there are teams, that do want those spans.

C
We are probably getting slightly off topic, in a good way; I think these are all good discussions. But if you want my opinion on the Kafka instrumentation and what we should focus on: I know you all are trying to roll this out at Shopify, and the minimum viable Kafka that is useful to you is likely going to be useful to other people, and trimming out some of these things that we're less sure about is going to help move that stuff through. And I think it will, like...

C
I think it makes sense to do that. It seems like there are open questions and things are likely to change there, so it's probably just better for everybody if we remove the controversial parts. It would probably be useful to create an issue for the things that we didn't port over, with maybe just a quick summary about what we know about them right now, just so that when we revisit that stuff we have a starting point. Yeah.

A
Yeah, definitely. And it's not like we're making a hard decision of "no forever", just saying: let's make sure we're doing the right thing and revisit it later; get the MVP in. So, moving to the next part: I feel like we're in general agreement that we can pull out those events for now and then revisit later.

A
The next part is the producer and consumer patches. I think the producer is fine the way it is right now; that's relatively simple, we're injecting headers, not much else happening there. So the more interesting part now is when we move to consumers, specifically... well, I guess both each_message and each_batch. Francis and I dug into this quite a bit, and he suggested we make use of links.

A
So we would still generate a span in the same way that we do now, but, say, in the case of each_message, we'd have our loop for processing each message, but we'd also link back to the span that produced it. And then, in the case of each_batch, we'd have a span for processing that batch loop, and then we would iterate through all the messages in the batch, extract the headers, and, if we can, link each message back to its producer span.

B
Yeah, that's pretty much it. That's what at least the examples in the spec for Kafka show.

C
It does seem less than ideal to have to iterate through every message in the batch and try to extract...

B
When you actually look at the work that's done in the ruby-kafka framework to get to this point, I think iterating over the messages to extract that stuff is remarkably lightweight.

C
Yeah, if this is merely a drop in the bucket of the work that's already happening, that's perfect. I always worry with messaging systems in general because, I don't know, Redis, for example, is crazy fast, and with instrumentation, I've noticed, you feel it. There's just no way around it, and it's not great. You want observability into your Redis system, but because it is just so fast, the small amount of tracing overhead is noticeable.

B
Cool, yeah. Kafka tries to be efficient, at least on the server side of things, by delivering a batch of messages as, kind of, "here's a chunk of stuff from this offset to this offset", and your client is responsible for actually breaking that down into individual messages and doing something with it. And yeah, once you actually dig into that work, it's kind of insane.

A
So it seems like we're on the same page for this. Following this discussion, I'll make those changes, and I'll ping you in some form to flag it for review, to let you know that I made the change, so we can go over it and make sure it makes sense. I have not used links in this library yet, so I'll have to make sure I do it properly; so, extra scrutiny when you review, please. If we have time, I wouldn't mind jumping to the GraphQL PR quickly, if that's okay.

B
Sure. Just on the links thing, the key is that you need to create the links ahead of time. From my memory, there's no add-link after you've created a span; you actually have to have your entire collection of links and pass it to start_span, or whatever.
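Francis's constraint, that links are supplied at span creation rather than added later, shapes the each_batch flow roughly as below. The OpenTelemetry pieces are stubbed here so the sketch stands alone, and the header extraction is far more involved in reality (real propagation uses the configured propagator, not a direct header read); names like `process_batch` are invented.

```ruby
# Stand-ins for the API pieces involved. The point is that links must
# exist *before* start_span is called; there is no add_link on a live span.
Link = Struct.new(:span_context)
RecordedSpan = Struct.new(:name, :links)

def start_span(name, links: [])
  RecordedSpan.new(name, links)
end

# Hypothetical each_batch handling: extract the producer's context from
# each message's headers, collect all the links up front, then start
# the batch span with them.
def process_batch(messages)
  links = messages.filter_map do |msg|
    ctx = msg[:headers]['traceparent'] # real extraction goes through a propagator
    Link.new(ctx) if ctx
  end
  start_span('kafka.each_batch', links: links)
end

span = process_batch([
  { headers: { 'traceparent' => 'ctx-1' } },
  { headers: {} },                           # unlinked message: no producer context
  { headers: { 'traceparent' => 'ctx-2' } }
])
```

Collecting the links first is why the "iterate the whole batch before starting the span" step discussed above is unavoidable with this API shape.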
A
So, the PR: I looked a lot at what Eric had done, and also at our own, because I had recently rewritten our internal instrumentation for GraphQL at Shopify. A lot of applications are using it right now, and I have quite a bit of confidence in it.

A
So first, I guess, let's look at the install, because I've started... there's going to be a class called the graphql_tracer.

A
I did some tests, literally just using Shopify's one when I was rewriting it, and those three keys generate an absurd amount of spans. With one of them, I had a trace that went from 11 spans to 2,500. I think it was the resolve type... no, no, it was the field key, the first one there. That's where you get execute_field or execute_field_lazy.

A
That's if you have a larger GraphQL request, and so, for Shopify, I know we would have it off by default. That's not to say that there isn't value; I think there's actually a ton of value in the platform field key. But having it on by default, you're just going to produce these obnoxiously long traces, and if you've already got a long span and you do a GraphQL request in there... I know certain vendors might not be able to display it, and that would just ruin any value you get from this.

A
So this is extending GraphQL's platform tracer; they do actually have proper tracing support, this isn't doing any weird hacky patching. You can add a tracer to a schema, and they have a trace function that gets called, and it'll iterate through all the tracers you've added. There are different components that need it, so it requires these platform keys. This is just a way of mapping their internal thing to whatever you want to call it. I took the liberty of going with "graphql." plus whatever the key was.

A
We can riff on that a little bit if needed, maybe on the PR, not on the call. And then those three defs, platform_key, authorized, and resolve_type: basically, those are different types of field resolution.

A
I don't know what it looks like in a production environment specifically, but that one is for when you have interfaces for a GraphQL object. So it's like: I have an automobile, and it could be a car or a truck; if you have an orphan type, when you're resolving that, the resolve_type key comes up.

A
I don't know what the value is there, but the one that is both noisy and valuable is the platform field key. This is when it actually executes a field, and the reason why I say it's valuable is because sometimes, with your ActiveRecord, or however you set up your GraphQL, when it's actually doing that work, some of these fields could require calculations.

A
It could be doing something more than just plucking an attribute out of a model; it could be doing a lot of heavy work. In the example I was doing with one of our internal apps, I think probably 98% of the GraphQL request was a single field, but without that key you don't see that. It just looks like your request took a second, right? Your GraphQL...

A
...I can get some more details, but I do think, yeah, long story short: I think it should be off by default.

C
Yeah, okay, so it sounds like this...

C
These are just kind of method overrides here for this tracer, and if you return early, that just causes a no-op on the GraphQL side, yeah?

A
If you go into the... just a little bit higher there in the code. So on line... oops, sorry, I scrolled a little bit. So on line 28 there: the no-op there just returns nil, so then I'm returning the yield, so we're choosing not to trace. So yeah, line 28: when those come back, if they're not enabled, you get nothing back and it just yields, and it doesn't trace those fields. So it's just a short circuit.
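The key mapping and the line-28 short circuit can be mimicked in isolation. This is a toy dispatcher, not the actual GraphQL::Tracing code being reviewed; the key names and the `ENABLED_KEYS` toggle are stand-ins for whatever configuration the real tracer uses.

```ruby
# Toy version of the platform-tracing dispatch being described:
# internal keys map to "graphql.<key>" span names, and a disabled key
# maps to nil, so trace just yields without creating a span.
ENABLED_KEYS = { 'validate' => true, 'execute_field' => false }.freeze

def platform_key(key)
  ENABLED_KEYS[key] ? "graphql.#{key}" : nil
end

def trace(key, spans)
  span_name = platform_key(key)
  return yield if span_name.nil? # short circuit: no span, just run the block

  spans << span_name # stand-in for starting a real span around the yield
  yield
end

spans = []
trace('execute_field', spans) { :field_result } # disabled: block runs, no span
trace('validate', spans)      { :validate_result } # enabled: span recorded
```

Returning the `yield` directly on the disabled path is what keeps the override a true no-op: GraphQL still gets the block's result, and no span ever exists for that key.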
A
The
second,
the
other
part
that
I
would
like
to
discuss
a
little
bit.
Hopefully
I
have
enough
time.
A
I
think
this
is
interesting,
but
I
if
it
starts
turning
into
like
a
bike
shed
I'll
just
we
can
just
drop
it
so
right
in
the
in
the
yield,
dot
tap
there
online
31,
I'm
doing
some
weird
stuff
and
the
motivation
for
that
is
what
I've
seen
with
like
previous
vendors
internally
at
shopify,
something
that's
come
up
a
lot
of
times
and
I've
heard
from
different
developers
on
different
teams
that
are
going
from
like
rest
to
graphqls.
A
When
you
have
like
a
request
that
hits
your
your
rails
app
and
it
fails,
you
see,
like
oh
my
shops,
controller,
whatever
500
you
get,
the
request
failed
or
something
like
that
or
someone
sent
in
some
bad
information
to
a
post
and
you
get
a
lot
of
visibility
with
graphql.
The
complaint
that
I
hear
is
because
it's
just
a
single
controller,
even
if
someone
makes
a
bad
graphql
request,
assuming
it
doesn't
like
actually
blow
up,
and
it's
just
like
a
malformed
query.
A
That's
a
success
like
your
your
request,
like
if
you're
looking
at
all
of
your
your
request,
fans,
they're
a-okay
like
this,
is
looking
great,
but
it
could,
it
could
be.
You
might
have.
Some
may
have
done
like
a
breaking
change
in
your
api.
You
might
have
done
something
and
now,
all
of
a
sudden,
all
your
graphql
requests
are
failing,
but
you
have
zero
visibility
into
that.
A
We
recently
made
some
changes
internally
to
shopify
and
we're
pushing
more
people
into
tracing,
and
I
think
this
would
be
a
really
big
win,
because
shopify
uses
tracing
heavily.
A
This
could
provide
value
to
like
people
in
general
when
you
send.
B
A
Action,
so
if
you
look
on
line
32
there,
it
says
like
if
the
keys
validate-
and
I
don't
even
know
if
using
the
word
response-
there
is
the
right-
it's
just
whatever
is
coming
back
from
that
tap
and
it
changes
drastically
between
keys.
A
A
So
when
so
right
there,
oh,
it
was
a
little
bit
higher.
So
when
the
query
fails,
so
I
have
a
schema
that
defines
a
few
fields.
There's
like
a
simple
field,
the
resolve
fields,
and
then
I
think
I
have
like
the
interface
something
there's
no
complex
field,
so
when
that
goes
through,
that
check
will
will
hit,
and
so
the
spanner
will
come
back
as
graphql
validate
and
I'll
start,
adding
errors
and
they'll
say:
okay,
what's
the
name
of
this
event?
A
It's
like
complex
field,
and
then
I
add
the
attributes,
error
message:
it
says:
field
complex
field
does
not
exist
on
type
query
now,
for
me
like.
I
think
this
provides
a
lot
of
value,
because
if
someone,
if
someone's
evaluating,
if
their
graphical
change
is
working
and
all
of
a
sudden
there's
a
ton
of
errors,
blowing
up
they'll
see
it
now
and
they'll
actually
have
information
they'll
be
able
to
see
it
in
the
trace.
A
I
think
that's
okay.
I
think
those
are
still
errors
someone's
trying
to
hit
your
api
and
it's
not
working.
So
I
I
don't
know
this.
This
could
devolve
into
like
crazy
bike
shedding
if
there's
any
graphical,
opinionated
people-
I'm
not.
I
know
eric
mentioned
he
has
isn't.
But
the
intention
here
is
like,
I
think,
really
valuable,
and
I
think
this
is
a
good
direction
to
move.
But
I
don't
know
it's
I'm
just
concerned
that
I'm
missing
something
that
a
seasoned,
graphql
person
would
be
like.
B
A
Bit
further,
while
I've
been
messing
around
with
this,
I
think
it's
a
little
too
like
we're
a
little
too
far
in
that
we
don't
get
the
like
the
structured
response,
like
you,
don't
get
the
actual
structure
response
like
I
test
them
in
the
test
here.
So
I
execute
the
query
on
line
46
and
I
get
a
result
back
and
then
I'm
looking
at
what
the
user
would
return.
So
you
can
see
there.
A
It's
like
you,
get
a
message
that
says
feel
complex
field
doesn't
exist
so
like
this
is
actually
sent
back
to
whoever
processed
the
query
or
posted
the
query
to
like
the
graphql
endpoint.
A
So,
like
the
users
get
this
error
back
and
what
I'm
trying
to
do
is
surface
this
visibility
to
like
the
the
app
owners,
but,
like
france
said,
is
there
like
a
more
structured
way
to
check
this,
and
I
kind
of
feel
like
this
is
the
way
to
check
it
because
we
can
hook
into
the
key
when
it's
validating?
The
query
like
this
is
that
validate
is?
Is
this
a
valid
query
like
it
to
me?
It
made
sense
and
I
think
it
is
the
right
place.
Invalidation
be
turned
off.
A
I
don't
think
it's
validation
in
the
sense
that,
like
are
you
allowed
to
do
this,
but
more
so
like.
Can
this
work.
B
Like
if
you
had
a
vendor
solution
that
is
producing
or
extracting
slis
from
tracers.
B
They're,
probably
extracting
them
at
from
client
and
service
spans,
is
there
a
way
that
we
can
reliably
tag
or
reliably
set.
The
error
on
a
server
span
on
this
side
or
a
client
span
for
the
other
side.
A
A
This
pr,
I
think,
is
valuable
on
its
own
without
it,
because
it's
it
honestly,
it's
just
on
par
with
all
the
vendors
right
now,
which
is
I'd
like
to
one-up,
because
you
can
look
at
crack,
you
can
look
at
graphql.
They
actually
have
support
for
datadog
new,
relic
scout
and
quite
a
few
other
vendors
built
right
into
the
gym.
A
So
I
was
looking
at
what
they
did
and
nobody
has
this
type
of
error
reporting,
and
I
know
it's
just
a
common
complaint.
I
get
from
peers
at
shopify
who
manage
applications
with
graphql
apis,
so
I
would
like
to
do
it,
but
I
don't
want
it
to
block
this
pr
to
like
full
transparency
like
this
graphql
instrumentation
is
one
of
my
big
blockers
for
rolling
out
open,
telemetry
internally
at
shopify,
because
so
many
applications
depend
on
it.
I
I
need
it
before.
A
C
Yeah
I
mean
I
will
say
the
same
thing
that
I
said
about
the
kafka
stuff
and
if
this
is
a
if
this
instrumentation
is
going
to
work
for
for
shopify
and
in
whatever
version
with
or
without
this
error,
recording
like
I'm
totally
fine
with
it
and
if
you
wanted
to
separate
it
out,
create
a
separate
pr.
I
think
that's
totally
fine.
If
you
want
to
create
an
issue
to
talk
about
it,
that's
also
fine
but
yeah.
A
I
think
just
like
having
talked
about
a
lot
of
youtube
just
now,
I
think
I'll.
Probably
I
think
I
I
will
end
up
pulling
it
out
of
this
pr
and
pushing
it
into
a
follow-up
yard
just
so
that
can
be
the
the
only
focus
the
discussion
for
that
forecast.
I
don't
want
it
to
drown
out
anything
else
that
might
be
important
on
this
one,
so
I'll
be
doing
that
as
a
follow-up,
and
I
I
do
believe
that
this
is
valuable
without
it.
A
It only works on the two latest versions. So thank you, Appraisal, for highlighting that to me; I discovered that with one of our internal apps teams, the Facebook people, so I don't know. Anyways, that's just an aside. So yeah, I'll pull that out. I have a couple of things I want to change. I know that there's some feedback on Eric's original PR about how to actually install the...
B
Yeah, I agree with that. As soon as we've got a 1.0 release of the API, we should start pushing some of this auto-instrumentation into the upstream projects.
C
Yeah, if they're already doing that for other vendors, I think they would probably be stoked to do it for an open standard and adopt it. And I think that's going to be the next frontier, you know, once we have everything stable: trying to get good adoption for first-party instrumentation, rather than the stuff that we're writing.
A
Yeah, that's excellent stuff. So, do we... because the install method that I'm using (as I understand it from what I've read through graphql) takes the root schema and says "use this tracer", and we supply a tracer. That wasn't supported three minor versions ago, and they've deprecated something about global tracers recently that I need to look into, to make sure it doesn't apply to what Eric had in his PR. Are we okay with only saying the last two minor versions of graphql are supported?
A
I don't know; there's a lot of churn in the graphql code base. I've talked to people internally about how they're using graphql and what some of their pain points were, because I wanted to get some perspective going into this.
A
They were just saying that every time they do a minor version bump, there are breaking changes, and it's a pain in the ass. So it's hard to say. I'm concerned that maintaining this is going to be a little bit frustrating, and it would be bad if we have to do a ton of switches between versions to support even a few minor versions. As I have it right now, it's two: the latest and the one before that.
C
We should see if we can get away without doing it. If it's one of these things where you can reuse like 90% of the code, and then you just have a few different cases when you install something, then that might be worth it, yeah.
A
You can use any number of tracers, right, in your schema. But what we're doing is saying: let's take the ancestor, the root schema that every schema should be extending from in a graphql API, and have the ancestor use this tracer. Then any schemas that extend it now have tracing, which is great from the auto-instrumentation perspective. But I don't think that's going to be a concern, or a supported path, for the library maintainers. Or maybe it is, it just...
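The "instrument the ancestor" trick described above can be sketched in plain Ruby. The class names and the `tracer`/`tracers` methods here are hypothetical stand-ins, not the graphql gem's actual interface; the point is only that registering a tracer on a shared base class lets every descendant pick it up:

```ruby
# Hypothetical base schema that tracks tracers at the class level and
# inherits the list from its ancestors.
class BaseSchema
  def self.tracers
    # Copy the ancestor's tracers on first access, so subclasses inherit them.
    @tracers ||= superclass.respond_to?(:tracers) ? superclass.tracers.dup : []
  end

  def self.tracer(t)
    tracers << t
  end
end

# Auto-instrumentation patches only the ancestor...
BaseSchema.tracer(:otel_tracer)

# ...and application schemas that extend it get tracing with no changes.
class OrdersSchema < BaseSchema; end
class UsersSchema  < BaseSchema; end
```

The trade-off discussed in the meeting applies here too: this works without touching application code, but it relies on every app schema actually extending the shared ancestor, which the upstream library may or may not treat as a supported path.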
A
Right, but yeah, long-winded story short: I'll explore that and see what it looks like, because I'll be doing that for what Eric had anyways, to try to support that.
C
Yeah, and if it looks like a total nightmare, then I think we can maybe appease the masses by saying we really want to try to get this integrated into graphql once our tracing API hits 1.0, and that can be the escape hatch, I guess.
C
Cool. I do need to get going, so I guess the last thing is: these PRs are in progress, and you are going to be off for a while. Are these things you're trying to get done before break, or are they probably going to be after-break things?
A
Hopefully before, but I don't think anyone's going to yell at me if they're done after, because I'm not going to be migrating internal apps till after break anyways. Yeah, he's gonna scold me after. I'm going to make the Kafka changes today, and I'm going to make my best effort to get the graphql one in a state that is ready for review today. So even if I could just get a round of reviews before the end of break, that would be cool, so I can come back to it. But, cool.
C
Yeah, no problem. I thought, you know, it was good to talk through these things. So yeah, I'll see you all online. Happy holidays, and enjoy.