From YouTube: 2021-08-03 meeting
A
A
B
We had a good vacation. Yeah, it was excellent. We went to Florence on the Oregon coast. Oh nice, I don't know if you've ever been out there. Do you know Yachats? I mean, I know of it; I don't know the town well. But sure, that's my favorite little place to go. They did a crazy Airbnb.

B
So right in the middle of kind of old town, that little old-town strip down by the river. Yeah, the Airbnb was behind a game store.

B
You had to walk down this little hallway to get there, and it was right back on the river. Like, our porch was literally right on the river. Oh nice. It was a renovated meat locker.

B
B
So it's incredibly quiet, because it has foot-and-a-half-thick walls with sawdust insulation in them. So it's completely silent.

B
I was not at the spec meeting this morning; there was a conflict. So I don't have anything to report back from there. It was very short, though, remarkably quick. Carlos ran it, and ran it very, very fast. I don't know, maybe he had some place to be, but I think the whole thing was like 15 minutes long and we were done. Cool. I think, yeah.

B
The main thing: I know Josh has been pinging me that he's been making a lot of advancements on the trace-ratio probabilistic sampling stuff, and I know he wants more eyes on that. So I don't know if there's anyone at Splunk, John, who would be willing to look over that stuff, who understands sampling. Sampling? The real issue is that Splunk doesn't believe in sampling, so that's why you haven't gotten a lot of... well, why Splunkers haven't been in there giving a lot of input.

B
So I don't know who... or whether he would even have the expertise. Yeah, tragic, yeah. I think what he's doing is sound. My main concerns are that users are going to turn this stuff on, and then, by definition, they'll start getting incomplete traces, because that's actually how it works, and then you start layering on some of the sampling the way, like, Jaeger does it. For example, I'm a little confused about how you count things to some degree, but Josh feels confident that you can still count things, like you can still generate metrics out of traces with just the probabilistic sampler. It doesn't use the... like, you wouldn't use the parent decision as the default and then go back to probabilistic after that; you would end up with broken traces. Let's see... that seems bad, right? Well, that's the point of the probabilistic sampler, actually.

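The distinction being drawn here, parent-based sampling as the default versus a pure trace-ratio decision, can be sketched in a few lines. This is an illustrative sketch, not the OpenTelemetry SDK's actual sampler API; the modulo comparison is a stand-in for the real trace-ID-ratio hash:

```python
def ratio_sample(trace_id: int, ratio: float) -> bool:
    # Decision is a pure function of the trace ID, so every span in a
    # trace computes the same answer and sampled traces stay intact.
    return trace_id % 10_000 < ratio * 10_000

def parent_based(parent_sampled, trace_id: int, ratio: float) -> bool:
    # Honor the parent's decision when there is one; only root spans
    # fall through to the probabilistic decision. Re-rolling the dice
    # per span instead is what produces broken traces.
    if parent_sampled is not None:
        return parent_sampled
    return ratio_sample(trace_id, ratio)
```
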
B
B
But yeah, the side effect is that not all your traces are going to be complete, and that would perhaps be okay if there were some way of knowing whether or not a trace was complete and was a good exemplar. Yeah, exemplars were exactly the thing I was going to ask you about. I was like, how do you generate exemplars if you don't have complete... well.

B
You get exemplar spans, like individual spans, right? So it's a little... like, for example, if you made a database call high up in your stack and the database call was sampled out, but it was slow, you wouldn't know that it was the database that was actually the problem. Yeah. As the voiceover says: it's always the database. That's the problem, yeah!

B
But yeah, so I think that's my concern: if people start cranking this stuff up to high levels, how does it affect the quality of everything else, and do they understand what they're getting when they do that? But a bunch of people want it, so, you know, I don't think we should block it going in. I know Honeycomb has expressed interest in the past in doing more intelligent sampling.

B
A
B
Yeah, I'm just curious who is doing it on tracing, like who has production tracing systems that are applying this form of sampling to them. Besides Google.

A
B
I think there's a version of this that propagates information and a version that doesn't propagate any information, and I think that's one of the areas where Josh wants feedback. But yeah, it's not like a new concept. Trace ratio... the concept's not new; people use it, and it's also in Jaeger. So it's not like this is a crazy concept. I personally am just not familiar with it. Okay, so.

B
B
Today on OTel Tuesday, Ariel came on, and I was like, they do this, or they were interested in this, and I asked: what do you think about incomplete spans? And it was like, eh, I mean, we live with that anyway, because we're not sampling, we're just dropping spans. Because, you know, you've got to do something: you either blow your overhead, drop spans, or drop them more intelligently.

B
Using some kind of sampling mechanism and dropping them more intelligently would be better than what we're currently doing, but we're already used to the fact that you might just not have all the spans. So they didn't seem to care. So anyway, it feels to me like, if you're doing this, why not just generate metrics and forget spans altogether?

B
Yeah, I mean, I think it's mostly fine if you're not cranking this up to ridiculous levels and if you're being, I should put it, if you're being selective about it. You know, I think what people want to do with Jaeger is have this be very configurable, and so they're trying to weight it towards things they think are important, and really crank up the sampling on things that they think are not so important.

B
So I think that's maybe one of the ways to use it: chatty stuff that's low value, health checks, just things you know are not going to be bringing your system down but are generating a lot of data. Turning that down, but not turning it down to zero, is one way people use this stuff to end up being more intelligent about what they're dropping, rather than having it be such an all-or-nothing thing.

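The "turn chatty endpoints way down but never to zero" idea can be sketched as a per-route ratio table. The route names and numbers here are made up for illustration, and the modulo decision again stands in for a real trace-ID hash:

```python
# Hypothetical keep ratios: chatty, low-value endpoints are turned way
# down (but never to zero); everything else is kept by default.
ROUTE_RATIOS = {
    "/healthz": 0.001,
    "/metrics": 0.01,
}
DEFAULT_RATIO = 1.0

def route_ratio(path: str) -> float:
    return ROUTE_RATIOS.get(path, DEFAULT_RATIO)

def keep_span(path: str, trace_id: int) -> bool:
    # Deterministic decision from the trace ID so a trace's spans agree.
    return trace_id % 100_000 < route_ratio(path) * 100_000
```
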
B
If you did want to measure something, you'd know at what rate you're dropping that thing, and so you can factor that into your measurements, something like that. So yeah, personally... I mean, I'm not saying there are people who shouldn't use this, but for me, having complete traces probabilistically sampled way, way down using our current sampling strategy, plus having precise metrics on every span, feels like you get all of that and you don't have incomplete traces. But, I think, yeah.

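The "factor the drop rate back into your measurements" point is the adjusted-count idea: if the keep ratio is known, each kept span stands in for 1/ratio real ones. A minimal sketch (the modulo decision is a stand-in for a real trace-ID hash):

```python
def keep(trace_id: int, ratio: float) -> bool:
    # Deterministic per-trace keep decision.
    return trace_id % 1000 < ratio * 1000

def adjusted_total(kept_count: int, ratio: float) -> float:
    # Each kept span represents ~1/ratio real spans, so dividing by
    # the keep ratio recovers an estimate of the true span count.
    return kept_count / ratio

# Simulate 1,000 spans sampled at 25%:
ratio = 0.25
kept = sum(1 for tid in range(1000) if keep(tid, ratio))
# kept == 250, and adjusted_total(kept, ratio) == 1000.0
```
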
B
Well, would this sampling apply to metrics generation as well? Which kind of brings it back to instrumentation, which is the thing we've been working on in this afternoon's SIG. Once you've got a compound object that's, you know, trying to generate your HTTP metrics and traces and all these other things, do you want sampling to be applied to all of that, like, as a block? Is that how we want this controlled?

A
B
Metrics? Yeah, the overhead's not changing, because metrics don't accrue memory the way tracing does, you know? Yeah, maybe it's fine, and maybe people can become more aggressive about turning tracing off selectively when you're doing this kind of thing.

B
B
I kind of don't want to start adding a control plane ad hoc to OpenTelemetry, like one just for sampling, because there's configuration in general. It's getting to the point where, if you can control sampling remotely, what other configuration would you want to control? And the collectors, certainly: if we're going to add a control-plane mechanism, the collectors could benefit from being able to receive config updates remotely.

C
Yeah, and everyone has their own back end that they want to have for where they keep their config and how they want to update their config. I know within Microsoft, among the teams that I deal with, everyone's different. They shouldn't be, but they are. So yeah, for sampling, you've got team A that might want to do it this way; team B is going to do it a different way, but they have a completely different store for... well, we tend not to do metrics for their traces, so.

C
I think, you know, rather than having a control plane built in, we could just have some sort of mechanism for dynamically updating the config, or subscribing to updates for config, or something, right? Which would be a more generic solution. And then perhaps on top of that you build a generic control plane for those people that don't have one.

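A generic "subscribe to config updates" mechanism, decoupled from any particular back end, might look roughly like this sketch (the class and method names are hypothetical; the transport, a file watch, an HTTP poll, a push channel, is whoever calls `publish()`):

```python
import threading

class ConfigSource:
    """Sketch of config-update subscription rather than a built-in
    control plane: consumers register callbacks, the owning transport
    calls publish() whenever the config changes."""

    def __init__(self, initial):
        self._config = dict(initial)
        self._lock = threading.Lock()
        self._subs = []

    def subscribe(self, callback):
        # Deliver the current snapshot immediately, then on every update.
        with self._lock:
            self._subs.append(callback)
            snapshot = dict(self._config)
        callback(snapshot)

    def publish(self, updates):
        with self._lock:
            self._config.update(updates)
            snapshot = dict(self._config)
            subs = list(self._subs)
        for cb in subs:
            cb(snapshot)
```

A sampler, for example, could subscribe and swap its ratio whenever a new snapshot arrives.
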
B
Yeah, and you can even use the collectors potentially as a tiering mechanism for that control plane, rather than all the things connecting to one thing, or having to figure that out separately. You know, you can have...

B
...collectors that push out to other collectors, and collectors that push things out to SDKs. Yep. So, something like that, anyway. Those are interesting thoughts. I don't know who's going to have time to pay attention to that anytime soon, but my only request is that we not, especially with the sampling stuff showing up, start just slapping something on when nobody's looking.

B
So even if the stuff Jaeger's doing is pretty much what we'll end up doing... And the collectors can't even handle dynamic configuration changes anyway. So, you know, there's work in progress to make it so that that happens. Right? Oh yeah, that's nice. Oh yeah, Splunk is contributing a bunch of that work. I mean, it's great; there's a bunch of collector work, but yeah, this is definitely something that we are actively working on.

B
I know, and there are already open PRs in the collector that are going through the process for it. Yeah, it seems like a really useful feature, to avoid just losing data because you have to reboot the thing to change config. Yep, yeah. I think our distro... I think the Splunk distro of the collector does this already, and we're working on upstreaming it. I see, cool. I think... I mean, at least I've seen internal demos of it. I don't know...

B
I don't know what the actual state of any of the actual distributions is.

B
B
B
Something, you know, like it could live in the X-Ray spec or something, since it's just Amazon people. Wait, there's an X-Ray spec? I want to see the X-Ray specs.

A
I think... but since we're adding them into the contrib, because technically Amazon's sort of on the hook, but we've even seen some database SDK instrumentation in JS like that that was written by outside contributors. And so even for stuff that contributors write, it'd be nice to just have the spec say something about it.

B
Yeah, personally I think it's fine. Especially... you know, in general, or maybe there's another way of putting it: in general, we don't want to be colliding namespaces, or having people use namespaces differently, and the more that's in the spec, the better. It doesn't hurt anyone to have this stuff there.

B
So I would be fine with it. I think the bigger issue was making sure plugins were configurable. I don't know if anyone was able to look into that, like being able to configure which sampler you're using. Let's say you have X-Ray: you've got a distro that loads up a custom sampler, or even two, and then there are the default samplers and other things. Are we able, in every language, to actually control which one runs?

B
I looked into the spec, and there wasn't actually anything written in the spec about the idea that when plugins are loaded, they're loaded as a map with names, so that, you know, if you have an environment variable or a config file, you can change at runtime which one you want, not...

B
...having to actually type code and install a plugin as the only way to configure which plugin is running. So that seemed like a thing that would be good to have in the spec, basically.

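The "plugins loaded as a named map, switchable at runtime" idea could look like this sketch. `OTEL_TRACES_SAMPLER` is the spec-defined environment variable for naming a sampler; the registry and the sampler implementations here are made up:

```python
import os

# Hypothetical plugin registry: samplers keyed by name, so which one
# runs is a config decision rather than a code change.
SAMPLERS = {
    "always_on": lambda trace_id: True,
    "always_off": lambda trace_id: False,
    "ratio_10pct": lambda trace_id: trace_id % 10 == 0,
}

def load_sampler(default="always_on"):
    # Resolve the sampler by name from the environment at runtime.
    name = os.environ.get("OTEL_TRACES_SAMPLER", default)
    if name not in SAMPLERS:
        raise ValueError(f"unknown sampler {name!r}; known: {sorted(SAMPLERS)}")
    return SAMPLERS[name]
```
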
A
B
Does that make sense to you, John, what I'm talking about? Yeah, I mean, this is part of the reason, I think, why in Java we haven't declared our configuration story stable yet. Yeah. Because of these questions, I think. I know that people like Tigran are unhappy with the state of the environment variables and how kind of scattered and gigantic they are, and...

B
I think there's not really a coherent plan on how all that stuff is actually supposed to stick together in a way that's going to make sense, you know, for the next 10 years. Right, environment variables don't scale. Environment variables... I won't go so far as to say they're an anti-pattern, but they're up there, right? Like, I have had to deal with hell related to environment variables and loading config from them.

C
B
Just figuring out a config file format seems like something that's blocked and that other people are interested in. Yeah, like a year and a half ago there was some discussion about defining it actually in the protocol, in the protobufs: defining configuration as part of the proto, so every language could basically use the proto as their configuration language. Yeah, and that's also a great step towards remote configuration. Right. But I don't think anyone's tackling that. Yeah.

C
B
Yeah, for sure. I mean, we'll probably want to support a variety of different formats. People are going to want YAML, but we can't have that be the default, even, because YAML parsers suck in, like, every language. So we definitely don't want to have a hard dependency on any YAML parser.

B
But clearly, Ted, you are not thinking galaxy-brain: we need open-source monitoring YAML, specified, with the parsing scheme specified, as a part of OTel. Yeah, "OCaml," exactly, bring it. Or just define our own markup language.

B
B
B
Not yet. So I'm hoping this week... I have gotten a couple of other Lightsteppers who will be starting on it next week. So Diego is going to be able to devote some cycles to prototyping in Python, and Bart is doing this... Bart's actually already created one of these things in JavaScript, so there already is, like, an instrumentor object in JavaScript. And so actually my request at this meeting was, if you haven't checked that out, to go have a look at what he's doing and just see how similar or different the two things are.

B
The problem is, Bart is in the EU, and he's willing to have late-night meetings, but this meeting is like 1 a.m. over there, and I think it's like 8 a.m. where you are. So unless, unbeknownst to me, you like to get up at four in the morning, I think it might be tough to schedule a meeting that makes everyone content.

B
B
Yeah, except he's a night owl, is the impression I got, and he doesn't want more things... it's why he likes working for an American company from Europe. So I don't know. Anyway, point being, it might be a little bit difficult to have direct syncs with Bart, but I think...

B
A
B
...I'll talk to him about it. So, let me...

A
B
C
B
B
A different thing: let me check. I'll send a message.

A
B
Look, yeah. So anyway, that was my to-do: to try to figure out what they're already doing in JavaScript and how much overlap there is with what you're doing in Java. But in general, I think Bart's going to be available to fill in the gaps, and Diego will be available to do prototyping in Python. And a thing I would like all of us to think about is configuration, again, for these things.

B
B
What should be configurable for some of these semantic conventions? So I don't know if you, Anuraag, or John, if you guys have some cycles to look at that with the stuff you're prototyping in Java.

B
I don't know if you have customer feedback over at Splunk or AWS around what people want configured, but I think that's kind of, besides figuring out the interface for this thing in code, figuring out what the configuration options are for it is just the next step.

B
B
So I don't know how common... almost all of the requests are: "How do we make health checks go away?" Yep. "How do we make health checks go away?" And "how do we make health checks go away?" Yeah. "How do we make health checks go away?" I assume people are going to want a way of mapping what sets the status to error, basically: what counts as an error and what doesn't. I'm actually surprised people have not asked for that more.

B
We haven't had much of that, because basically our back end handles that, right? The back end just handles whatever the conventions are, or the back end will deal with it, and we have our own story for what's in there and what's not. So people using Splunk can configure it if they want to change what counts as an error: if they're like, no, no, no, for this endpoint, or URLs that look like this, 404s are not an error, but for this endpoint, you know, authentication errors are...

B
...authentication counts as an error here and doesn't count as an error there. Right? Like, this is a public thing where it doesn't count as an error, people fail authentication all the time; this other thing is less public, so if you're getting a bunch of bad-auth POST requests on it, that's really suspicious and we want to know about it.

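A configurable status-to-error mapping of the kind described (404s are routine on one endpoint, failed auth is alarming on another) can be sketched as ordered rules. All paths and rules here are hypothetical:

```python
import re

# First matching rule wins; the default falls through to "5xx only".
ERROR_RULES = [
    # Public docs pages: 404s are noise, only server errors count.
    (re.compile(r"^/docs/"), lambda status: status >= 500),
    # Internal admin API: failed auth is suspicious, flag it too.
    (re.compile(r"^/admin/"), lambda status: status in (401, 403) or status >= 500),
]

def is_error(path: str, status: int) -> bool:
    for pattern, rule in ERROR_RULES:
        if pattern.match(path):
            return rule(status)
    return status >= 500
```
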
B
But again, I'm kind of surprised people have not been asking for this more. I kind of figured people would be banging our door down for some form of that, but apparently not. Yeah, that hasn't been... I mean, I think it shows up very occasionally, but it's not persistent. Yeah. I think custom sampling... crazy custom sampling schemes, as well as just getting rid of the health checks. Well, yeah, because we actually wrote a custom sampler for a single customer.

B
So if you... the only way you get spans is if they come from the outside, if they have a parent; otherwise they're dropped. And some customers desperately needed this, so we wrote a custom sampler for them and bundled it up: here you go, turn it on with this option, knock yourself out. But yeah, only one customer was asking for that, and, you know, when the customer is paying a lot of money, you'll spend a couple of hours writing a special sampler for them.

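The one-off custom sampler described, keep only spans whose trace arrived from outside with a parent, is tiny. The `external_only` flag below stands in for the "turn it on with this option" switch and is a made-up name:

```python
def make_sampler(external_only: bool):
    """external_only=True enables the custom behavior; False falls
    back to an always-on sampler. A sketch, not a real SDK API."""
    def sample(has_remote_parent: bool, parent_sampled: bool) -> bool:
        if not external_only:
            return True
        # Keep only spans whose trace came in from the outside with a
        # sampled parent; locally rooted traces are dropped.
        return has_remote_parent and parent_sampled
    return sample
```
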
B
Yeah, no, yeah, that's interesting! Yeah, I mean, again, this is why we have plug-ins, right? You want some crazy-ass thing? Sure, here's a plug-in so you can have it, and we don't pollute the universe with it as some config option that gets attached to everything. So maybe that's it. Maybe people are just handling this stuff.

B
I think there would be value, and I know Nikita started shaking trees about this this morning, in having a well-defined, specified sampler that allows you to eliminate things based on HTTP URLs or whatever. Exactly. And maybe part of this is a question of: is this instrumentation configuration, or a more generalized thing, like just processing, basically? Like what you're doing in the collector, just moving some of that collector processing into the SDKs.

C
Yeah, one of the things that I've been doing with most of the internal projects at Microsoft, not with OTel, or at least toying with the idea of... We have a processing pipeline that our plug-ins get loaded into, and rather than providing dynamic config updates, because then each plug-in needs to define which options it can and can't change when a team comes along and says, "oh, I want this other config option," the idea is to actually dynamically replace plugins.

C
B
B
C
B
Because it's able to do sampling like tail-based and retroactive sampling, and complex rule-based sampling, across the fleet of satellites. That's not available in a collector today: if you want to do any kind of tail-based stuff, all the data has to be loaded into one collector, and so that's...

B
...limiting to what degree you can do it. And with the satellites we can do it across the fleet of satellites, and they have a bunch of other really nifty features. But I don't think Lightstep is planning on open sourcing that stuff anytime soon, unfortunately. But I can verify that it makes the amount of upfront sampling you have to do much less, if you can intelligently sample at that level, because usually it's the egress cost.

C
Yeah, like in the initial sampling one I talked about: we have this mechanism where effectively it logs everything during the request, and then only if it's an error does it, you know, effectively dump everything out. If it's an error, it says: okay, I'm going to keep everything that's warning and above and drop everything else. Yeah.

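One reading of that mechanism, sketched with the standard-library logging module (this is an illustrative sketch, not any particular product's implementation): buffer every record for the request, and at the end flush the WARNING-and-above records only if the request errored, otherwise drop the whole buffer.

```python
import logging

class RequestBuffer(logging.Handler):
    """Buffer all records for the current request; on an error, flush
    WARNING-and-above to the real handler and drop the rest."""

    def __init__(self, target: logging.Handler):
        super().__init__()
        self.target = target
        self.records = []

    def emit(self, record):
        # Collect everything during the request instead of emitting.
        self.records.append(record)

    def end_request(self, had_error: bool):
        if had_error:
            for r in self.records:
                if r.levelno >= logging.WARNING:
                    self.target.emit(r)
        self.records.clear()
```
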
B
B
When it's spread out across everything, how do you do that? So there are some smarty-pants ways of dealing with that, which is kind of fun, but it'll be left to somebody else, perhaps, to come in and open source that stuff.

B
Yeah, I don't know, I'm just yapping at this point. I think that's all I got. I want to start digging into this instrumentation stuff. We need SMEs; maybe that's the other thing: I've been bothering Microsoft and AWS for subject-matter experts. I think they would be really helpful. We keep using HTTP as our example, but, like you said, John, it's always the database, and so I feel like it's our SQL stuff. Yeah, or the network.

B
I feel like, yeah, our SQL stuff and things of that nature... maybe they're fine, maybe...

B
It's not clear to me how robust everything is in that world. Well, I think, at least in Java, the SQL instrumentation, like the JDBC instrumentation, is too robust, because it ends up being very expensive to parse the SQL and sanitize it and do all that stuff. Yes, yes, right! This is the area where I predict things like configuration or whatever really start to come in. HTTP is almost too trivial.

B
SQL is like... yeah, there's such a wide... I think there is a true range of high-value, high-cost versus lower-value, lower-cost trade-offs that you have to make in advance, or make somewhere. And again, maybe this is a place where people will be banging down our door wanting more, better, different.

C
Yeah, and that's probably somewhere like what Nikita was talking about this morning: if you've got a bad query that you're trying to get more detail on, you'd want a sampler that just calls out that query and drops as much data as it can but keeps everything else a little bit better, right? Low-level or high-level, depending on which view you want to take. Yep, yeah.

B
B
Yeah, we just provide a mechanism: the collectors will reach out to this endpoint and you can feed them stuff, and the SDKs, and you can feed them stuff, and a tiering mechanism for distributing the stuff. And it's on you, weird big brain in the sky, to figure out how you want to drive that thing, and whatever totally ridiculous promises you want to make to your users about how it's going to solve all their problems.

B
B
Someone tried it recently and was telling me it was crazy; might have been my brother. Well, I mean, the big thing that I've seen is that it was trained on stuff with every license under the sun. Yes. It will inject GPL code into whatever solutions without letting you know that you're putting GPL code into your code. And yeah, somehow it's racist. Yeah, I'm sure.

B
Yeah, yeah, no, but I think they were playing around with it at Discord, or my brother was seeing people use it, and he said it was a little scarily impressive once it learns your code base. It's about not letting it think for you; that's where I think people make the mistake. This thing cannot do subjective reasoning. All it can do is: "well, you appear to be writing some code like this, and in the past, somewhere, code like this had this other code come next, maybe, so here you go."

B
B
I used to think of Java as, like, heavyweight and difficult to type, but then I went to work at Pivotal and I saw Rob Mee, he's the founder of Pivotal. I sat next to him writing Java code, and he was even rusty, right? He runs a company for a living, and I've never seen someone write code as fast as that guy.

B
B
B
That's a Go programmer's answer. That was Ruby, too. No, the Ruby programmer's answer is to create what are called obstructions. An obstruction is when you take an abstraction that then prevents you from doing what it is you want to do; that's an obstruction. Or it only allows you to do exactly what the author of that abstraction wanted you to do in the first place, right? Which never, never, never conforms to whatever...

B
...the next requirement is. Exactly. You know The Big Lebowski? There's a scene from the movie... every time I see someone come up with some fancy high-level abstraction that's just going to save so much time, it's like that scene where they're trying to drop off the money and Walter's like, "here's the plan."

B
B
I feel like every time I've seen someone write one of those things, the next set of requirements comes in and they're just like, nah, we can't do that with the thing we wrote. Yeah, we didn't think you'd come at it that way. Yeah, yeah, exactly. All right, well, I think we're probably out of content. Yeah, I'm just chatting. All right, have a great evening, guys. Bye, yeah, cheers.
