From YouTube: 2021-08-03 meeting
B
Yeah, I live in the prairies, so this is very normal for us. One of the running jokes is that if your dog runs away, you can watch it run away for two days, because it's just flat and clear all the way through. Here it's like the sun is red from all the smoke and you can only see a few blocks down, like you're saying. It's kind of wild.

B
Yeah, I'm pretty fortunate in that as long as I don't leave my house it's fine, but as soon as I do... I sat outside on Friday with a friend, had a couple of beers, and I have asthma, so I was paying for it for the next day and a half, just struggling. And I only sat outside for maybe 45 minutes in my backyard; didn't do anything exciting.

B
Kind of a random question before we get started: you worked a lot on the New Relic gem, right, the one that was called rpm or something? Yeah, in a past life I did. Right, so this is kind of a loaded question: we brought the obfuscation for database statements from that gem into this repo.

B
How much confidence should I have in that in practice? Because we do a lot of redaction at the infrastructure level right now, and there's a push to switch to doing obfuscation at the application level and then removing the redaction from our collector pipeline, and I'm just absolutely terrified of that, because I don't want to deal with the whole fallout of a PII leak, right? I guess it's something that's been around for a while, so it must be fairly well tested, right? Like, tested by fire.
D
Yeah, I think I'm probably not revealing too much: that particular set of regular expressions is not unique to the Ruby client. It's a set that was decided at more of an organizational level, so other clients are applying the same kind of obfuscation. And it kind of varies by database.

D
It tries to be selective with the obfuscation if you're on, say, MySQL or Postgres, and then, if it can't figure out what the database backing is, it applies all the rules, and in that case you usually end up over-obfuscating the SQL.

D
Yeah, the worst thing that happens is your service teams saying "these SQL queries are useless to me, because all I know is it was a select and everything else is obfuscated." Right, and that's way better than, you know, SELECT user WHERE ssn equals... right.
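The over-obfuscation tradeoff described above can be sketched in a few lines of Ruby. This is a toy illustration only — the actual New Relic / OpenTelemetry rule sets are far more thorough and are chosen per database adapter:

```ruby
# Toy regex-based SQL obfuscation (illustrative only; real gems maintain
# much more thorough, per-adapter rule sets).
SINGLE_QUOTED = /'(?:[^']|'')*'/  # 'string literals', with '' escapes
NUMERIC       = /\b\d+\b/        # bare numeric literals

def obfuscate_sql(sql)
  # Replace string literals first so numbers inside them are already gone.
  sql.gsub(SINGLE_QUOTED, '?').gsub(NUMERIC, '?')
end

obfuscate_sql("SELECT * FROM users WHERE ssn = '123-45-6789' AND id = 42")
# => "SELECT * FROM users WHERE ssn = ? AND id = ?"
```

Applying every rule unconditionally (the "can't detect the adapter" path described above) is the safe-but-lossy direction: at worst the query becomes a bare `SELECT ... ?`, which is still better than leaking an SSN.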
B
Yeah, that's what I'm afraid of. So I'm thinking about how widely used it is — I know New Relic has obviously been around in the industry doing this type of stuff for a while, so I thought you might have a good take on it. Maybe you can ease some of my fear a little bit, because it's something I want to do in the near future.

D
Yeah, and that code — I haven't seen how it's been repurposed in OpenTelemetry Ruby, but one of the key components of it is that somewhere up there there's some machinery that looks for the actual underlying database adapter and tries to apply the most appropriate set of rules for the given database.

D
That helps, yeah. I feel like the 99% use case was ActiveRecord, and I think that's how this was being looked up: by figuring out what adapter ActiveRecord was using. But whatever mechanism you can use to pass in the database is totally fine, so here we have first-class instrumentation, at least for MySQL.
B
Cool, another question — I know we're a little bit overwhelmed, but I kind of want to run with this a little bit. Have you looked, in terms of instrumentation, at capturing allocations, or method cache busting — tracking that in any of your instrumentation? I'm thinking about doing it for Ruby, so I'm wondering if you've had any experience there.

D
I know that ActiveSupport::Notifications added some allocation counting to its events, but — I don't know if you've seen this — there are some flaws with it, in that if you're doing any sort of multi-threading, it's wrong, because the allocations it's getting are on a per-process basis. So in order to get allocations on a per-thread basis, you do need to interact with the Ruby VM directly.
D
I forget what the method is called, but at one point I experimented with this. There's a new-object callback that you can register — you have to do this in a C extension — and it gets called every time a new object is created. So you can use that with a kind of thread-local at the VM level and get a per-thread count.
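The process-wide flaw being described is easy to demonstrate from pure Ruby on CRuby: `GC.stat(:total_allocated_objects)` is a whole-process counter, so a per-thread count really does need the C-level new-object tracepoint mentioned above. A minimal sketch of the process-wide measurement:

```ruby
# GC.stat(:total_allocated_objects) counts allocations for the whole
# process: any allocation on any thread during the block is included,
# which is exactly why per-request numbers from it mislead in
# multi-threaded servers.
def allocations_during
  before = GC.stat(:total_allocated_objects)
  yield
  GC.stat(:total_allocated_objects) - before
end

allocations_during { 1_000.times { Object.new } }
# at least 1_000, plus whatever other threads allocated meanwhile
```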
B
I think — I don't know if it's in our gem — one of our service teams actually did add something like that. It's very much ingrained in their application, so it's not generalized, but they're using a gem called gc track for that. I think Francis had some opinions; we've chatted about it recently, kind of glancingly, and he said there's a better way of doing it.

B
I think it gets into what you're talking about: being mindful of the threads, so that you're not trying to track it across threads, because then you won't get a really good readout if you're, say, trying to look at the allocations for a single request against your application. But yeah, I'm just curious whether you've done any work there. It sounds like you're interested in it.

B
So I'm hoping to start working on that in the near future, and I think it's something that could probably be sent upstream, right?

D
Yeah, I think it's interesting and super valuable, but in the research that I have done on this, you do have to hook in at the C extension level.
B
The other part of that that I want to look into is method cache invalidations, because I know that, for example, if you use Structs and you declare a new Struct in Ruby, it blows away your method cache. So I would be interested in being able to surface that on a per-request basis: if an application is running kind of slow, are they blowing away their method cache...

B
...every time a request comes through, because they're doing some weird dynamic stuff? That's something I want to surface to app owners at the company. So that's an adventure I haven't embarked on yet, but I think it's possible.
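On CRuby, a starting point for surfacing this is `RubyVM.stat`, which exposes VM-level cache counters. The exact keys vary by Ruby version (older releases reported `:global_method_state` and `:class_serial`; newer ones report constant-cache counters instead), so a delta helper shouldn't hard-code them — a sketch:

```ruby
# Diff RubyVM.stat around a block of work. The key set differs across
# Ruby versions, so we diff whatever counters this VM reports rather
# than naming specific ones.
def vm_stat_delta
  before = RubyVM.stat
  yield
  RubyVM.stat.to_h { |k, v| [k, v - before.fetch(k, 0)] }
end

# Defining a new Struct (i.e. a new class) is one of the operations
# that can bump these cache counters, as discussed above.
vm_stat_delta { Struct.new(:field) }
```

Wrapped around a single request, non-zero deltas here would point at exactly the "weird dynamic stuff" described above.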
B
Yeah, I read a little bit about it, like a year ago, and I think there is a means of doing it. I just don't know how robust it is and what considerations need to come into it. Anyway — so we can get on with the SIG — I just wanted to see if you had any thoughts or experience there.

F
This is the logo for our Latine ERG; I'm on the leadership team there. Yeah, that's awesome.

E
The Octocat — the Octocat, you know, doesn't have a race or gender or whatever. It's good. It's a friendly cat.
F
Well, you know what was happening: during the school year this was a WeWork for my middle schooler — or maybe I should say, you know, a co-working space, charging.

F
So this SIG is brought to you by Kirkland coffee.

D
All right. So the spec SIG — brought to you by Kirkland coffee — was actually pretty short today, so hopefully we can get through it in a reasonable amount of time and have time to discuss plenty of other, more interesting stuff related to the Ruby SIG.
D
So, I don't know that anybody here is actually working on metrics stuff yet, but I assume it will happen soon. We do get these updates every now and then, and I think the biggest update is that the API — oh, here we go, it's down here — the API is at a feature freeze, so I think they're kind of happy with the API spec.

D
So if we are interested in metrics, I think it would be a good idea to take a look at that spec. As far as I know, Python has been working on an SDK implementation based on that spec, so that might be one of the nearest neighbors to Ruby, if anybody's comfortable with it. There are a couple of languages that are probably reasonable candidates to look at, depending on what you prefer.

D
But Python is there. I'm not sure where Golang is, but oftentimes it has been on the bleeding edge of this stuff.
D
Yeah, I think we were talking about this last week: it does require an update to the W3C trace context spec, particularly the usage of —

E
If I remember, there was some concern that maybe we ought to be changing the spec, but we're not — we're sort of using the tracestate in a way it wasn't meant for. It's not vendor-specific, and tracestate is supposed to be a home for vendor-specific information. I don't know.
D
Yeah, so, looking through this, this does actually look like a legitimate implementation to me. It does look like it's using tracestate for this data, which is kind of the most appropriate place. I think last week we were talking about it.

D
There was this proposal to actually add this to the traceparent, and I think long term that does kind of make sense. It was this inverse-power-of-two probability — if that's ringing a bell for anyone. But yeah, the point that you're bringing up there, Eric, is, I think, a point that I brought up last week, and also in that group, about how it makes tracestate —

E
Yeah, I mean, this is what's happening with Datadog: we're trying to add modified tracestate — you know, providing custom samplers, which I'll show, that modify tracestate — and then it's like, can we just wait to see what OpenTelemetry decides on? But no, we can't, apparently.

D
The one thing that I found that is not great about that is:
D
If you read through the spec, with tracestate you can add an entry, you can remove an entry, you can modify an entry. When you add an entry you can put it at the front, and when you modify one you can bring it to the front, right?

D
But if you just read an entry, technically you're supposed to leave it as is. So if you get one of these entries that a lot of people are reading but that only gets written once, it has the possibility of getting lost off the end, because tracestate has a length limit and you truncate off the end.
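The mechanics being described — modified entries move to the front, read-only entries drift toward the tail and can be truncated — can be sketched with plain string handling. W3C tracestate allows up to 32 list members; this toy ignores the per-member and total byte-length limits the spec also imposes:

```ruby
# Toy tracestate upsert: writing a key moves it to the front, and the
# member cap means entries that are only ever read drift toward the
# tail and can be truncated away.
MAX_MEMBERS = 32

def upsert_tracestate(header, key, value)
  members = header.split(',').map(&:strip).reject(&:empty?)
  members.reject! { |m| m.start_with?("#{key}=") }
  (["#{key}=#{value}"] + members).first(MAX_MEMBERS).join(',')
end

upsert_tracestate('congo=t61rcWkgMzE,rojo=00f067aa0ba902b7', 'rojo', 'abc')
# => "rojo=abc,congo=t61rcWkgMzE"
```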
E
Yeah, anyway, we'll see how it goes. Hopefully 148 gets accepted and merged and implemented, and 168 gets hashed out. But, you know, let's circle back in a year and a half and see how it's going.

D
So there was this question about thoughts on composite samplers. Nikita was bringing up a use case where somebody wanted to have different sampling based on endpoint — so a sampler would only be appropriate for some sorts of spans and not for others. Could you have a composite sampler that acted on an endpoint when there was an endpoint, and invoked some other sampler for other use cases? And I think Riley brought up that —
E
Yeah, I've been working recently with essentially a composite of parent-based sampling and trace-ID-ratio-based sampling, with some of those changes I just mentioned that append some vendor-specific metadata. It's not super easy — the configuration of some of this stuff. I wouldn't mind seeing an agnostic way to do composite sampling.

E
Not just "you can do parent-based and then defer to a second one" — it would be nice to have some more agnostic wrappers. I can see this use case becoming pretty common, for — I don't know, maybe something as simple as which errors do I want: I want some special sampling around error spans, and that might be a sampler in addition to, as a complement to, some broader probability sampler or something like that. But anyway.
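An agnostic composite wrapper like the one being wished for could look something like this. All names here are hypothetical — this is not the OpenTelemetry Ruby sampler API, just a sketch of routing to a per-route sampler with a fallback, with samplers modeled as plain callables:

```ruby
# Hypothetical composite sampler: pick a sampler by route attribute,
# fall back to a default sampler otherwise. Samplers are plain
# callables returning true/false here, not the real SDK interface.
class RouteCompositeSampler
  def initialize(by_route:, fallback:)
    @by_route = by_route
    @fallback = fallback
  end

  def should_sample?(attributes)
    sampler = @by_route.fetch(attributes['http.route'], @fallback)
    sampler.call(attributes)
  end
end

always_on  = ->(_attrs) { true }
always_off = ->(_attrs) { false }

sampler = RouteCompositeSampler.new(
  by_route: { '/healthz' => always_off },  # drop noisy health checks
  fallback: always_on
)

sampler.should_sample?('http.route' => '/healthz') # => false
sampler.should_sample?('http.route' => '/orders')  # => true
```

The error-span case mentioned above would slot in the same way: one more entry in the routing table, or a predicate check ahead of the fallback.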
E
No, yeah, it's fine. Some of the ergonomics of this parent-based composite sampler are just a little ugly — you're passing in the same sampler three or four times for all the different options, like remote or local. Anyway, it's also not implemented correctly in a bunch of other languages, such as — but I'm limited to PHP. Anyway, I'm hijacking the meeting.

D
No, no — I get the impression that you've been working a lot with sampling, and as a result... I think all these APIs get specced out, they get implemented to some level, but I don't know that they always get used right away. So I feel like there's always this "beginning to use the thing" phase where, I don't know, maybe some critiques are in order.
F
It just feels like we don't have enough lead time. I don't know about y'all, but I feel like our group in particular doesn't have enough capacity to dedicate to "let's start looking at all this other stuff", because we're busy still trying to get trace 1.0 rolled out and done. But that's a digression. In that sense, if they're not getting feedback, at least from us, maybe we need a rule of thumb where it's like —

D
— that's kind of happening at the spec level, and so it's a level up, I guess, from this group. Sure, yeah.
F
Right — like, we end up dealing with some of the idioms that are in other languages, just because they found things that were useful for them. For me it's always the Context object: our Context object is effectively the built-in Golang context object, in all of its glory, ported over into Ruby, and that doesn't feel ergonomic to me when working with it.

F
I like the idea of the immutable data structure and solving all these problems, but at the same time it doesn't feel natural when interacting with it. And so — we talk about the problem of composing these things — I absolutely don't want to end up, because somebody's in a hurry defining the API spec, having to deal with the ramifications of that and trying to build things that aren't idiomatic in our language.
E
I'll have more time to dedicate to OpenTelemetry soon, I think, so that can help, because that's more than zero right now. Besides that — I don't know, maybe you go to the meetings and yell and they feel bad.

D
Yeah, no, I think these are all good discussions, and I think —
B
Yeah, just to touch on the — sorry, go ahead. I'm grasping for words here. Okay, just to touch on the "slow down" lever a little bit: something that's probably worthwhile to know — and I'm going to touch on this really, really loosely, because I don't know the finer details, I kind of picked it up from someone else — I think OpenTelemetry as a whole was getting a bit of criticism for how long it took to get the tracing spec defined, and as a result the metrics spec hadn't really even started.

B
So that's been causing some issues at the higher level, I think — getting it accepted through the CNCF, like pushing it to the next stage — that it has taken so long to even get started on metrics. I think in the time that we did tracing, we were supposed to have metrics and logs all solid, with a solid 1.0. So I imagine that's also another part of the push to move faster: it's not moving as fast as it was originally proposed to move.
D
Yeah. So, over the history of OpenTelemetry — I'm really happy to see this SIG as it is today. I think early on there was not a lot of participation: Francis and I would show up, and for a while there were some contractors who showed up, but it was not the most active or organically populated group, so keeping up with everything in the spec was actually kind of hard.

D
But I don't know, maybe I'm going too far back in time. I do think, to Robert's point, there is some level of push to not dwell on things forever, and —
F
I apologize for interrupting, and I'm sorry that I dragged this on. I empathize with all of those concerns; the only thing I'm asking the spec team to do is at least give us a chance to catch up if they don't have a commitment from all the language implementers.

F
At least one person who can chime in — there's got to be some critical mass where they say "totally fine, we're going to move forward if any number of people give us the thumbs up", and if they see that Ruby hasn't given a thumbs-up on anything, they should probably work with us and reach out to us to figure out why we haven't been able to contribute to the conversation.
D
Yeah, that's fine. Let me say two sentences in closing and then we'll move on. I like these critiques, because, like I was saying, there's been this long phase of building and speccing things, and now we're starting to use them.

D
If we find that there could be improvements, I think we should think about these things, and we should always be able to dream about the future perfect API that we want. When we at least have those in mind, we can find opportunities to surface that information, because I feel like this stuff is far from done. We're at, you know, v1, and software goes through many iterations.

D
I think that might be an outlet for some of this stuff: even if the ship has already sailed on some of it today, I think people would be receptive to things that are going to improve everybody's lives on a lot of these matters. So —

D
Then there is — this actually probably affects us.
D
The collector was trying to use the same port for OTLP HTTP and OTLP gRPC. I guess it turned out there were technical hurdles that made it unfeasible, so the gRPC port will be 4317, but HTTP will be 4318.
D
Yeah — and we already mentioned that the metrics SIG is at that API spec feature freeze. So yeah, I feel like this is one of the reasons why we sometimes end up where we are: because we've been focused on getting our tracing stuff rounded out, we haven't had a whole lot of time to participate in the metrics stuff, so we're kind of crossing our fingers that everybody else picked something that would work for Ruby. But yeah, I think —
B
I think the metrics stuff is something that — when I say "we", I mean Shopify — people are probably going to be contributing some effort to in OpenTelemetry, to actually get an implementation out there. I don't think Francis has been participating in metrics at all, but we're definitely going to be putting some effort in that direction, towards the implementation of it. So hopefully they did a good job.
D
Yeah, the metrics project has a very long history as well. I feel like they had an initial version that they heavily refined; I think that initial version was something folks weren't interested in having as the 1.0, but enough people came in at, you know, the 11th hour and said "let's redo this". So —
E
I think, having seen a similar problem in Datadog's implementation, moving stats toward clients: I think one of the chunky parts of our work will be — the actual metric payload and flushing is whatever, you know, it's just blobs of data — but the implementation of the histogram, the sketching algorithm, in Ruby will be something we'll need to do as a precursor to doing the metrics work.

E
So I think there are implementations in some other languages, but that might be sort of new, greenfield-type work. That's the only thing that, off the top of my head, I have no idea how to implement. So, anyway, that's just my experience from doing something somewhat similar, or at least going through an RFC process within Datadog for something similar.
E
Yeah, there was all this internal back-and-forth: Datadog was trying to get their DDSketch sketching algorithm in as the algorithm to use, and then someone else — Circonus or New Relic — was like, "no, use our thing", which is exactly the same but slightly different, you know, due to math, which I don't know. But for Datadog's clients they've had to sort of hop around to each client.

E
You know, a sort of bit of code that takes a bunch of data points and generates this sketch, which I think can then later be used for calculating p95s and p99s and things like that. I haven't actually done the work yet; all I've done is look at what the spec requires for it. I'm rambling a little bit, but —
D
No, I think that's a good callout for some of the work that will probably be coming.

D
I will say we actually probably took a little bit longer than the actual spec SIG, but I think we had better discussion. So, that's that. Anything related to our repo that we should discuss?
B
I've been dragging my feet on the Rails stuff — just going through and setting up a couple of test applications. From my perspective that's what I wanted to get in, and then I wanted to do another release candidate, just to get the current set of changes out.

B
So people could start testing it. I don't really want to block it on that, but I also want it to be part of the next release candidate — I don't think there's been anything too massive since the last one, but just because of the way we do our big-bang releases, I think it would be nice to get this out and get more people working with this stuff.

B
And I know the instrumentation isn't getting 1.0, but I think a lot of the consumers potentially won't care that there's a difference. I'm also — now I'm going off on my own tangent here.
B
If you look at RubyGems, at opentelemetry-api or -sdk, the downloads for the release candidates are pretty low in comparison to 0.17, or whatever the last non-release-candidate version was, and Bundler by default won't tell you to pick up these versions unless you explicitly state it. So, given our intention of having people be able to test these release candidates, I wonder how well it's working, if people aren't picking them up by default.
B
You know what I mean? Anybody that's using it who isn't actively watching us do releases potentially doesn't know, right? I don't think it's necessarily a good or a bad thing; it's just definitely a thing to think about. I know that the kind folks at GitHub have been testing it for us, as well as the kind folks at Shopify, so at the very least two relatively good-sized orgs are running it through its paces.

B
So I think that does instill quite a bit of confidence, but I just wonder if that was the right approach, because I know internally I have a wrapper gem, and I had my own internal release candidate, and nobody was picking up the upgrades — and I was like, this is stupid, I'm just going to say this is our internal 1.0 — and then everybody started picking it up and started reporting things. So, I don't know, just commentary there.
B
Right, so that's the thing: I know Francis wants us to do another release candidate, and I think it makes sense — we've been doing them, so we should continue for at least a little bit, or at least one more. I think we're already ready to have the TC review come back and look through it; I think it was Carlos who was doing it, and I think he's ready to come back at this point.

B
We could get him to go over it again, but yeah, I don't think we really have anything blocking, other than we should maybe, just out of kindness, do another release candidate to get it run through its paces before we say "yes, this is 1.0".

E
I think my understanding was that the push for another release candidate was really just to make the TC review easier. It wasn't about — well, whatever helps, I guess. I'm on board with it.
B
Right — that actually makes sense, and I guess it does follow that it facilitates the review. But from the perspective of getting this battle-hardened a little bit before we say it's 1.0 — there's the formal dance we're doing, and then there are the practical steps, and if we want to let out a 1.0 and say "yeah, this is rock solid", maybe doing this RC thing wasn't the best way of getting people to test it in the wild.

B
So all I have to say is I'm going to actually just get this done and out there. I don't know if anybody's taken a swing at this branch — I haven't myself, so it's not fair for me to expect other people to, but I'm going to, and if it looks good on a couple of applications I'm going to bring it in. It does produce some Rails breaking changes.
B
I have some follow-ups there before I do the release, because I want people to be able to disable the sub-gems if they don't want the added functionality of having Action View and Active Record baked into the Rails instrumentation. Maybe they're just happy with Action Pack, with that patch against Metal that was renaming the span from the Rack span. But yeah, once that comes out, I'm going to go full steam ahead for a release candidate.
B
There's an issue that I think Tim's picking up — I don't know if anyone saw the comments on it — around the naming, in 736 there. I think near the bottom of your screen. There, yeah.

B
If anyone has any comments or thoughts on this: if you scroll down a bit, Francis disagreed with what I said, and that's expected, but what he's proposing is adding a configuration option to any of our instrumentation that has database statements — so things like Redis, LMDB, Postgres, MySQL.
B
We have these poorly named configuration options, like enable_db_statement, enable_obfuscate and friends, and he suggested collapsing them into one setting, db_statement, where you basically choose between including it, omitting it, or obfuscating it. Then I think we'd have to think about what the right default is there — obfuscate is probably the best default to surface out in the wild — but I don't know if anyone else has any stronger opinions.
B
My reason for liking that more than having one option for enabling or disabling the statement and another for enabling or disabling obfuscation: if you enable obfuscation but disable the statement, what do you expect to happen? You'd probably expect the statement not to show up. But if we take the alternative approach — just saying include, omit or obfuscate — you don't even have that question anymore; it's just very direct about what's going to happen. So, does anyone have feelings or thoughts about that?
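The proposed shape could resolve something like this. This is a sketch, not the shipped API — the option name (`db_statement`) and values (`:include`, `:omit`, `:obfuscate`) are the ones being discussed, and the warn-and-fall-back behavior matches what's described below:

```ruby
# Sketch of the proposed single db_statement option: one setting with
# three valid values, falling back to a safe default (with a warning)
# when the value is unrecognized.
DB_STATEMENT_VALUES  = %i[include omit obfuscate].freeze
DB_STATEMENT_DEFAULT = :obfuscate

def resolve_db_statement(value)
  return value if DB_STATEMENT_VALUES.include?(value)

  warn "unknown db_statement #{value.inspect}; using #{DB_STATEMENT_DEFAULT}"
  DB_STATEMENT_DEFAULT
end

resolve_db_statement(:omit)   # => :omit
resolve_db_statement(:redact) # warns, => :obfuscate
```

Falling back to `:obfuscate` rather than `:include` is the conservative choice given the PII concerns raised earlier in the meeting.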
F
Sure, yeah — these are pre-1.0, so I think it's okay. Going forward, I would say we probably want to add deprecation warnings, or maybe even have a release that has deprecation warnings right now.

B
I'm comfortable with that approach. I think for other things maybe I'd be a little more ruthless, but for this, because the consequences are a bit higher — PII and all that fun stuff — maybe, Tim, that should be something taken into account: if you come across instrumentation that already has an equivalent configuration option, respect it, but maybe spam out some deprecation warnings, like Ariel suggested. I think that's probably the right approach, and then on the next release —
B
It
won't
it
warns
you
so
like.
If
you
try
to
pass
in
an
option
that
doesn't
exist,
it
tells
you
that
it's
being
thrown
away
because
it
doesn't
exist.
If
you
say
for
db
statement,
you
provide
a
value
that
isn't
expected
like
include,
emit
or
confiscate
once
again
it
throws
it
away,
and
I
believe
this
should
be
the
setting
it
throws
it
away
and
it'll
fall
back
to
the
default
in
all
cases
it
if
you
provide
a
value,
it's
not
expecting.
It
just
goes
back
to
the
default
and
warns
you.
C
Might be a silly question — I'm not sure: what is the intersection of the obfuscation logic? I was looking at Dalli and it does something to obfuscate the statement.

F
Well — I was going to say, oftentimes it's due to syntax differences in these languages. For the most part I would say that SQL has overlap, and there's a lot of opportunity for us to do client-side obfuscation or redaction, potentially in a more performant way — in fact, that's a problem I'm facing now. But the very difference between Redis doing a set operation versus MySQL doing an insert operation...
F
...is so different that — I don't know, unless they had some common language or some common structure that we could use — other than that, I don't think so.

B
I'd say, in the context of this pass of introducing consistency to these options across the instrumentation gems: don't worry about that yet. I wouldn't make it a priority. I think that should be its own separate, bikesheddable task, instead of conflating the two things. If you come across, say, LMDB, and there is no obfuscation code there, then that would not be a valid option.
B
It would just be include or omit, and then we should create a task to follow up and add that in. Same for any of them: if, for whatever reason, obfuscation code doesn't exist there, I don't think this PR should be responsible for adding it — that should be a follow-up task, just to keep the efforts focused. These types of options and tasks — Matt and I were talking about it a little bit.

B
This stuff's really important, so I think it should be really focused. We don't want this PR to be much more than making the options consistent, and then any work around obfuscation should be very, very focused, so everyone can pay attention to that one thing, because it's important.
F
In a sense, I can predict a kind of structural duplication that would be okay and not necessary to try to refactor away. Essentially, the code would follow the same pattern. Just like a Rails controller has an index, show, edit, you know, delete, we would have a very similar structure. Maybe there's an include function, an obfuscate function, or an omit function in every one of these, and it's structural duplication, but I wouldn't be aggressive about refactoring it into something else.
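The per-instrumentation structure being described might look something like the following minimal sketch. The method name and the obfuscation regex are illustrative assumptions, not the actual gem code:

```ruby
# Hypothetical sketch of a per-instrumentation db_statement option.
# The modes mirror the discussion (:include / :obfuscate / :omit);
# the regex is a naive literal matcher, not the gem's real pattern.
LITERALS = /'(?:[^']|'')*'|\b\d+\b/

def process_db_statement(sql, mode)
  case mode
  when :include   then sql                      # record the statement verbatim
  when :obfuscate then sql.gsub(LITERALS, '?')  # replace literal values with ?
  when :omit      then nil                      # drop the attribute entirely
  else raise ArgumentError, "unknown db_statement mode: #{mode}"
  end
end
```

With `:obfuscate`, `"SELECT name FROM users WHERE id = 42"` would come back as `"SELECT name FROM users WHERE id = ?"`; each instrumentation gem could repeat this same small structure, which is the duplication being deemed acceptable above.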
B
Yeah, I really like that comment and I would encourage the same thing. If you put two files side by side and they look almost the same, that's okay, we're good with that. If something needs to be extracted, then it can be, but those are premature optimizations that we don't really need to worry about right now, because it's not optimizing for performance, it's optimizing for...
B
Ariel, other than that, these two things, what I was talking about in terms of the RC and the release stuff, and this, are what's on my radar. This is what I'm interested in right now. I don't know if there's anything else anyone has been paying attention to that we should draw attention to; I'm just focusing on the things I care about right now.
F
I don't know about problems, but I do have a happy report. I wrote this down in the meeting notes, but right now, with RC2, we're peaking at around 80 million spans per minute, and we have like a 99.378% success rate. Hold on one second.
F
Which means we're doing pretty great, I think. The gem, the tracing that we have, the auto-instrumentation specifically, is enabled for Rack, Faraday and GraphQL across the majority of our system.
B
F
In different flavors. So we have our job workers, Kafka consumers, web applications, API, GraphQL API. I don't think any of this stuff is a trade secret or anything, yeah.
F
B
F
So after upgrading, there was a big difference between the two. I'd also note that one of our biggest problems, or significant issues, is SQL tracing. Right now less than one percent of traffic has SQL tracing enabled, and the real overhead is the regular expression parsing, you know, scanning the strings to substitute the values. That's the biggest.
B
F
Of our... and sometimes the regex takes longer than the query, right. So I'm gonna have to do some benchmarking and take a look at ways to optimize that at some point, but right now that's our biggest thing. Other things we started looking at were adding trace correlation to our database, for MySQL slow logs, for example.
F
So if we add correlation context in the SQL comment that gets generated, the trace ID and span ID for the parent span, we might be able to build some access into, say, the MySQL slow query log UIs, and say: hey, look, this span had a very slow SQL query, I'm going to click through a link, go look at it, and potentially look that sample up in the MySQL slow query logs and be able to build that correlation there.
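The correlation idea above, appending trace context as a SQL comment so a slow-log entry can be tied back to its span, can be sketched like this. The traceparent-style comment format is an assumption here; tools such as sqlcommenter and Marginalia do something similar:

```ruby
# Append W3C-traceparent-style correlation context as a SQL comment,
# so a MySQL slow-query-log entry can be matched back to its span.
def annotate_sql(sql, trace_id:, span_id:)
  "#{sql} /*traceparent='00-#{trace_id}-#{span_id}-01'*/"
end

annotated = annotate_sql(
  "SELECT * FROM orders WHERE id = 1",
  trace_id: "0af7651916cd43dd8448eb211c80319c",
  span_id:  "b7ad6b7169203331"
)
```

MySQL preserves comments in the slow query log, so the trace and span IDs survive into the log line and a UI can link from a slow span to the matching log entry.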
B
That's really cool. I have to run right away, but I wanted to ask before I did: you said you were having issues with the regular expressions taking up a lot of time. How did you identify that as the issue?
F
So we have built-in profilers, essentially. We don't have like a sampling, continuous profiler, but we have...
F
Sure. We have the ability to say: I'm on this view, and there's a link, you know, if you're a user, oh sorry, an admin user, you can click a link that says go look at this, take a look at a flame graph and see where the time goes. And it's like all the time is spent in parsing.
B
That's good to know, because we haven't actually enabled it yet, not at any significant volume of calls; we haven't done that yet. We're just pretty much throwing it away at the infrastructure level, but we do want to move in that direction. So getting this early report that, hey, you'll probably run into trouble, is really, really valuable, because...
F
Potentially, potentially. But you know, at this point we're still really early on. We'll probably start looking at trace ID sampling in the near future to reduce the amount of volume we're generating, because right now we're trying to hit 100 percent of our volume so we can build out a bigger profile. But hey, I'm actually going to be talking about this later today on Twitch.
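Trace-ID sampling works by deriving the keep/drop decision deterministically from the trace ID itself, so every service in a trace agrees. The OTel Ruby SDK ships a `TraceIdRatioBased` sampler for this; the following is just an illustrative reimplementation of the idea, not the SDK's code:

```ruby
# Decide whether to sample from the trace id itself: deterministic and
# consistent across services. Interprets the low 8 bytes of the hex trace
# id as an unsigned 64-bit integer and compares it against the ratio
# threshold, keeping roughly `ratio` of all traces.
def sample_trace?(trace_id_hex, ratio)
  value = trace_id_hex[-16..].to_i(16)  # low 64 bits of the trace id
  value < (ratio * (1 << 64)).to_i      # below threshold => sample
end
```

Because the decision is a pure function of the trace ID, sampling 10% at every hop keeps whole traces intact rather than dropping random spans out of them.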
E
It's up to Liz and...
F
B
If you feel comfortable doing it and you want to do a little self-promotion, I'd say drop it in the Slack before it goes live. Give us a little heads up. I'd watch it; I'd be interested to see what's going on there.
F
A
B
Yeah, I like that stuff. I do have to go, though. This was really great, that was awesome information. Thanks for being here; I'll talk to you later.
E
Is there a big... Datadog solves this by not doing obfuscation at the client, right; we do obfuscation in the OpenTelemetry Collector, in the Datadog agent, via a Go package. That's probably something I could push upstream. Like, if I were to push a processor into the OpenTelemetry Collector that does obfuscation, would that be valuable? It feels like...
E
Basically, it's the same problem. I think they tried to do obfuscation in some languages at some point, whether it was Ruby or PHP or something, on the client, and were like: wow, this sucks, everything's slow now. And so they ended up doing it all on the agent. Not that that solves every use case, obviously, but yeah, it seems like, whether it's Datadog's implementation or not, we take the best of whatever people have to offer.
F
B
B
F
And so I feel like there are also multiple versions of this implementation. If you look at the Java one, it uses some templating language, and a couple of other implementations are parsing tokens. That's what Matt just said in the chat, actually.
F
That's assuming you're not using find_by_sql, right; there it doesn't work. So I can't guarantee that folks are templatizing things properly, or, you know, there may be folks who are not using Active Record at all. So I think...
E
F
And another thing about that: does Active Record support notifications right now? Is it doing so?
E
As far as I can tell, so yeah, we would have to wrap it by, you know, monkey patching the methods or whatever, which comes with its own work. I actually haven't looked at... I've been writing PHP for the past two weeks; my life is a mess. I haven't looked at the implementations in Ruby in...
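The monkey-patching route mentioned here, for libraries that don't emit notifications, usually means prepending a module so the wrapper runs first and `super` reaches the original method. A toy sketch, with a made-up `Database` class standing in for the uninstrumented library:

```ruby
# A stand-in class for a library that emits no instrumentation events.
class Database
  def execute(sql)
    "rows for #{sql}"
  end
end

# Prepend a module so our wrapper runs first and `super` calls the original.
module TracingPatch
  def execute(sql)
    started = Process.clock_gettime(Process::CLOCK_MONOTONIC)
    result = super
    elapsed = Process.clock_gettime(Process::CLOCK_MONOTONIC) - started
    # a real instrumentation gem would record a span here using `elapsed`
    result
  end
end

Database.prepend(TracingPatch)
```

`Module#prepend` is how most OTel Ruby instrumentation gems wrap library methods, since unlike alias-method chaining it composes cleanly with other patches.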
F
E
Ladies and gentlemen... yeah, I'm sick anyway. Yeah, welcome to working at a vendor. Cool, well, yeah, this is super interesting. I don't have anything else. Same.