From YouTube: 2020-10-20 meeting
C: Up, I wonder if it's not gonna be too busy; we don't have our fearless leader, Matt.
C: So far, pretty simple, like I just basically have it tracing the controller actions. Cool. So that's nothing. Nothing.
D: Yeah, I think, you know, it's like 80 percent of Rails.
D: I haven't dug too deeply into what some other folks are doing, but New Relic's pretty much just covers more, or is this a little bit stuck into those four?
E: Yeah, we've tried to be aggressive with supported Ruby versions, so I think we also want to be aggressive.
E: So I think Matt is not joining us today; I think he mentioned last week he's on vacation or something. Yeah, so I haven't attended the spec SIG meeting, so I can't really fill in for really the bulk of the meeting. So yeah, I'm not sure what else we want to talk about. I know we have a few PRs open at the moment.
E: Matt has added B3 support, so I've provided some feedback on that PR, but yeah, I think he's out for a bit. We have a PR to rename span context to span reference, which Matt thought wasn't going to be a thing last meeting, but then I guess it got enough support that it actually merged.
E: But right after merging, like, I put this PR together, and the consensus seems to be this isn't a great thing, and maybe we shouldn't merge it until there's been a little bit more debate about span context. So yeah, I don't really know what the outcome of that is yet. So for now we've got the PR sitting there to do the rename if we need it, and we'll just hold off on it for now; that's obviously going to be a breaking change.
E: Yeah, I have a small change that I want to make to the OTLP exporter, although I might wait until we've got the same change in our private tracing instrumentation rolled out into production, just to test it out. But basically, the brief description is the way it works right now, or at least in OTel: it's going to be the batch span processor.
E: The way it works right now is that the thing that's enqueuing spans doesn't wake up the exporter thread until the buffer is half full, and the default buffer size is 2048 spans. The default batch size is 512 spans.
E: So the way the defaults work, we typically enqueue 1024. The exporter thread has a five second timeout, so it'll check every five seconds anyway, but the problem we've seen in production is that some processes enqueue spans at a high rate and the buffer can get overwhelmed, like we wrap around the buffer before the exporter's had a chance to do things.
E: So the obvious thing is to increase the buffer size, but when you increase the buffer size, you also increase the delay before the enqueuer actually kicks the exporter.
E: So we want to change that to basically kick the exporter if you've got a batch worth of stuff to export, which is a reasonably small change but should help with this a little bit. So it'll wake up the exporter thread sooner: as soon as it has enough work to do, it'll wake it up. So yeah, we're gonna try that in production. I just merged that change this morning in our private repo; we're going to try it out in production and then roll it into OTel as well.
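The wake-up change being described can be sketched roughly like this. This is a toy model, not the actual opentelemetry-ruby batch span processor; the class and method names are invented for illustration:

```ruby
# Toy sketch of the discussed change: signal the exporter thread as soon
# as a full batch (512 spans by default) is buffered, instead of waiting
# until the buffer reaches half the max queue size (1024 with defaults).
class BatchSketch
  def initialize(max_queue_size: 2048, batch_size: 512)
    @max_queue_size = max_queue_size
    @batch_size = batch_size
    @buffer = []
    @mutex = Mutex.new
    @condition = ConditionVariable.new
  end

  # Old rule: wake when the buffer is half the max queue size.
  def wake_old?
    @buffer.size >= @max_queue_size / 2
  end

  # New rule: wake as soon as there is a batch worth of spans.
  def wake_new?
    @buffer.size >= @batch_size
  end

  def enqueue(span)
    @mutex.synchronize do
      @buffer << span
      # Under the new rule the exporter thread gets signaled much earlier;
      # it would still also wake on its own five-second timeout.
      @condition.signal if wake_new?
    end
  end
end

bsp = BatchSketch.new
512.times { |i| bsp.enqueue(i) }
puts bsp.wake_new? # true: one batch is ready, exporter is woken now
puts bsp.wake_old? # false: the old rule would wait for 1024 spans
```

With the defaults from the discussion, the exporter wakes after 512 enqueued spans rather than 1024, so bursts drain sooner before the buffer wraps.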
E: I looked at the Go implementation, and the Go implementation of this is using a channel, like a buffered channel. So it's basically notifying all the time; it's the equivalent of signaling on every enqueue. So.
D: Yeah, that sounds reasonable to me. So the idea is we would kick off the exporter thread at 512 spans, instead of whatever half the max buffer size is. Yeah, cool; yeah, I think it makes sense.
D: Yeah, one thing that my colleague on the Python tracer had implemented in their exporter was sort of like a... where did I get this from? Or no, I'm sorry, just one of my colleagues suggested this, and I thought it was a nifty idea: randomized dropping. So it just drops randomly; it's sort of extremely minor, but then in periods of heavy volume you at least get a representative sample, instead of missing everything after x period of time.
D: So you just take the length of the buffer and say, okay, just pick one at random, or take the oldest or the newest, whatever. Interesting idea, but I certainly don't feel the need, unless someone's asking for it, while we're on the ground. Yeah.
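The randomized-drop idea could be sketched like this. Purely hypothetical illustration, not code from any actual tracer; the class name is invented:

```ruby
# Sketch of randomized dropping: when the buffer is full, evict one
# existing entry at a uniformly random index before appending the new
# span. Heavy bursts then leave a representative sample of the whole
# burst, instead of losing everything after the buffer fills.
class RandomDropBuffer
  attr_reader :dropped

  def initialize(max_size)
    @max_size = max_size
    @buffer = []
    @dropped = 0
  end

  def push(span)
    if @buffer.size >= @max_size
      # Evict a random slot to make room; count what we lost.
      @buffer.delete_at(rand(@buffer.size))
      @dropped += 1
    end
    @buffer << span
  end

  def size
    @buffer.size
  end

  def to_a
    @buffer.dup
  end
end

buf = RandomDropBuffer.new(5)
(1..10).each { |i| buf.push(i) }
puts buf.size    # 5: never exceeds the cap
puts buf.dropped # 5: five spans were sacrificed at random
```

The surviving five spans are a random mix of early and late entries, which is the "representative sample" property being described; the trade-off raised next in the meeting is that randomly evicted spans can leave their traces with holes.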
E: I'm not sure I'd take that approach. I mean, I understand why you might want to, but in general you're more likely to end up with broken traces that way.
E: Yeah, ceo's works a little bit differently. Yeah, yeah, amusingly, the service that we're having the most trouble with has something similar in front of it. So they do their own batching per trace in front of our exporter, and occasionally they're generating like forty thousand or forty-five thousand spans per trace within the same process. And then they just unleash that onto our exporter.
D: Oh okay, well, yeah, I wish you way more than luck, okay, yeah. I have been really away from Ruby for the past while; I've been working on some collector stuff, and then some customers internally have had some submissions happen. So I can definitely review the B3 one, and feel free to ping me on the OTLP stuff, because I know I had looked previously. I do still plan to finish the, whatever, GraphQL; I actually had recently just done work on the Resque integration.
E: Yeah, that's okay. We may have some feedback on that. We have a weird fork of Resque in our core application called Hedwig; I don't know why, but anyway. And that means that in our internal instrumentation, we have this weird Resque-like instrumentation, and then the Resque one is derived from that, and then this Hedwig tracing is also derived from that. So we may ask for at least the Resque instrumentation to support the two models in that way. Yeah, we'll see how it goes.
D: I certainly don't use Resque in production, so I defer to folks who have more experience with it, but I just figured I could at least get the ball moving. Anyway, cool, that all sounds reasonable. I certainly don't have much to disclose, because I've not been paying attention too closely to Ruby.
E: We do have... just trying to see when we last did this. So we have a few changes, or a couple of changes I guess, that went in four days ago, both of which are breaking changes. So I'm thinking maybe we should cut a new release, maybe even today. Sure, yeah.
D: That's fine with me. I mean, if you need me to thumbs-up any of the PRs... or I know Daniel's been working closely with you, I'm sure, yeah.
E: Awesome. Anything else people want to chat about?
D: Wilbur, is there anything we can help with, or any questions you might have? I know you're a little bit newer.
A: Yeah, I'm currently working on another project, so I'm just here to, like, familiarize myself and take some notes, but yeah, I think I'm good for now, thanks guys. What are you working on? It's kind of confidential, but soon.
E: Cool, anything from you, Robin?
C: No, just, as I mentioned, working on the Rails stuff. I actually don't think it'll be too much. I think, like, in a first PR, just a simple controller tracer, and have that as our starting point that we can increment on. So it's kind of carved out as one PR; I gotta add something for that special handling around.
C: But then I think the only part where there might be some debate, because everything else is pretty straightforward, is how we want to reuse or make use of Rack in this instrumentation: whether we want to allow for some duplication, or whether, when you bring in the Rails instrumentation, it just brings in Rack as well.
C: That's how I have it set up right now: I'm just making Rack a dependency of the Rails instrumentation, and then I'm sitting this very light middleware in front, in the call stack of the Rack one. So you get your controller action name, and then one of the subspans would be all the bells and whistles that come with Rack. But I don't know if that's right.
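The "very light middleware in front" idea could look roughly like this. A gem-free sketch, not the actual opentelemetry-instrumentation-rails code; the class name and recorder callback are invented, though `action_dispatch.request.path_parameters` is the real Rails env key:

```ruby
# Hypothetical Rails-level middleware that only captures the controller
# action name, leaving all the bells and whistles to an outer Rack
# instrumentation middleware.
class ControllerNameMiddleware
  def initialize(app, recorder)
    @app = app
    @recorder = recorder
  end

  def call(env)
    status, headers, body = @app.call(env)
    # Rails fills in controller/action during routing; record if present.
    params = env['action_dispatch.request.path_parameters'] || {}
    if params[:controller] && params[:action]
      @recorder.call("#{params[:controller]}##{params[:action]}")
    end
    [status, headers, body]
  end
end

# Minimal fake app standing in for the rest of the Rails stack.
names = []
app = lambda do |env|
  env['action_dispatch.request.path_parameters'] = { controller: 'users', action: 'show' }
  [200, {}, ['ok']]
end
stack = ControllerNameMiddleware.new(app, ->(name) { names << name })
status, = stack.call({})
puts names.first # "users#show"
```

In the setup being described, this thin layer would name the span "users#show" while the outer Rack middleware carries the HTTP details.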
E: I feel like it'd be nice if, when you have a Rails app, you have a single middleware, instead of a Rack middleware and a Rails middleware. I know the Datadog instrumentation for Rails is quite a bit different and isn't necessarily using middleware; it's, I think, monkey-patching ActionController or something. So, yeah.
D: That's gross, yeah. There are some conventions specific to us as a vendor that force us to do some, I don't know, unholy things, where there's a lot of work around sort of our analytics engine and the way we define services; it's a little more like, almost like, different libraries are different services, which is weird. I think we're moving away from it, but it forces us to... you basically want to get as much information as possible into one sort of top-level span and have that be almost overloaded.
D: Don't think of that as best practice.
E: One reason it's a good idea to try to lump it all in together is that you really want to have the bulk of the information about the request on the server span, where span kind is server, because of, I think, a variety of backends; I'm thinking at the moment of Splunk's microservices APM.
E: So if you've got this sort of nesting, where there's a Rack span at the outer level and then a Rails span that actually has the bulk of the information, then you're just going to lose a ton of stuff.
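A toy illustration of that point (no OpenTelemetry gems; the span shape here is simplified for the example): a backend that only reads attributes off the span whose kind is server never sees the details recorded on the inner Rails span.

```ruby
# Simplified span model: if the interesting attributes live on an inner
# internal-kind span, a backend filtering on kind == :server misses them.
Span = Struct.new(:name, :kind, :attributes, keyword_init: true)

trace = [
  Span.new(name: 'HTTP GET', kind: :server,
           attributes: { 'http.method' => 'GET' }),
  Span.new(name: 'UsersController#show', kind: :internal,
           attributes: { 'http.route' => '/users/:id', 'http.status_code' => 200 }),
]

# What a server-span-only backend sees of this request:
server_view = trace.select { |s| s.kind == :server }
                   .flat_map { |s| s.attributes.keys }
puts server_view.inspect # ["http.method"]: the route and status are invisible
```

Putting the route and status on the outer server span, as suggested above, is what keeps them visible to such backends.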
D: Yeah, that's why we have to bubble it up, so the Rack span almost becomes... it really, it almost looks like a Rails span, but it's technically for Rack. And it's really because of our query engine: when you query for something, it's like, hey.
D: I want all my spans that are, like, you know, URL /api and with user ID seven. It's not like SQL, where you can query across, you know, do joins across; I think Honeycomb does that, so there's a few folks who do cool stuff like that, but not us, so yeah. I don't know. I've also never used Jaeger, actually, or Zipkin.
D: So I'm curious what traditional users of those do, but cool, anyway. Yeah, happy to give feedback; feel free to put some stuff up early as a draft or whatever, I don't think anyone's, like... yeah.
C: Oh, okay. I think with that, then, I think it does make sense to have a single span generated by the controller request, instead of having the Rack subspan. And I was really just trying to get an MVP locally so I can start iterating on it, but I think I might start by deduplicating, or not duplicating, some of the stuff out of Rack, even if it could be extracted to a common place.
C: Like, I just looked at the Sinatra stuff and it's so minimal. I don't like extracting stuff early, because then you end up with just this awkward thing that's unwieldy. So maybe when we get our third thing that could make use of a common Rack middleware type thing, then.
E: Okay, cool. Anything else?