From YouTube: 2021-11-11 meeting
A: So I saw in the Slack channel that our regular facilitator Josh can't make it today, and it's a holiday in many countries: Veterans Day, Remembrance Day and all that. So if there's anything to talk about, we might as well have a little chit-chat while we're all here.
B: Yeah, definitely. I don't see any items in the agenda, so probably there's not much to talk about, other than, you know, I would like to have people review the PR that Josh has. Let me look for it. We can probably discuss it, well, not discuss, we can probably just review that offline, but this is a good time to remind people.
B: I would say that's pretty much the document; that's, I would say, the most important one in flight. There's also a related conversation.
B: Well, two of them. One of them is multi-policy sampling, but I would say that will build on top of what Josh has in this PR, and the other one is smaller: why the sampler description has to stay immutable.
A: Yeah, I think I need to read through it a little bit more. I do have some questions and feedback, but I want some time to digest it a little more, so maybe I'll bring that up next week, or just in the Slack or in the GitHub thread.
D: Yeah, okay. Well, we wanted to bring up one topic, just to get some thoughts on it. So we see that there's a lot of effort happening toward making it possible to generate metrics out of span data, right? That's the whole effort going on with the probability sampler, where we can get the adjusted count that can be used for metrics. So we wanted to see what direction the community is thinking.
D: Are we thinking that metrics get generated out of spans, or are we also going to make it easy to generate metrics directly at the time of instrumentation, that is, before sampling? Those would be more accurate, whereas the metrics generated from span data do have statistical error in them. So we just wanted to hear what the community thinks the pros and cons of these approaches are.
A: I found that to be a really powerful mechanism to get right into an application, and so I'm trying to leverage those instrumentation points for generating metrics as well, understanding that, yeah, I can't sample 100% of everything, but I do want accurate counts and durations. Our instrumentation is still built off of OpenTracing, and we're starting to move towards OpenTelemetry; we'll probably finish that by next year.
A: But our approach is just to wrap the tracer API. We're using Prometheus, so we register some Prometheus metrics, and we wrap the tracer API's start span and finish span, or whatever we use, to get the durations, and then just increment a histogram on span finish.
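The wrapping approach A describes can be sketched roughly as follows. This is a hypothetical minimal version, not their actual code: a toy histogram stands in for prometheus_client's Histogram, and the tracer and span names are illustrative. The point is that the duration is observed on every span finish, so the metric stays exact even when the span itself is later dropped by sampling.

```python
import time

# Toy stand-in for prometheus_client's Histogram (illustrative only).
class DurationHistogram:
    def __init__(self, buckets=(0.005, 0.05, 0.5, 5.0)):
        self.buckets = buckets
        self.counts = [0] * (len(buckets) + 1)  # last slot is +Inf
        self.total = 0

    def observe(self, seconds):
        self.total += 1
        for i, bound in enumerate(self.buckets):
            if seconds <= bound:
                self.counts[i] += 1
                return
        self.counts[-1] += 1

class Span:
    def __init__(self, name, histogram, clock=time.monotonic):
        self._name = name
        self._histogram = histogram
        self._clock = clock
        self._start = clock()

    def finish(self):
        # The wrapper's hook: record the duration on every finish,
        # before any sampling decision can drop the span.
        self._histogram.observe(self._clock() - self._start)

class MetricsTracer:
    """Wraps span start/finish to feed a duration histogram."""
    def __init__(self, clock=time.monotonic):
        self.histogram = DurationHistogram()
        self._clock = clock

    def start_span(self, name):
        return Span(name, self.histogram, self._clock)
```

In the real setup, `DurationHistogram` would be a registered Prometheus histogram and `start_span` would also delegate to the wrapped OpenTracing or OpenTelemetry tracer.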
A: Yeah, that's happening as well. In the back end we use Honeycomb, and Honeycomb has support for what, in the spec, we're calling adjusted count. If you include that adjusted count as a span tag, and you tell Honeycomb this is the span tag to use, then it'll reinflate everything when you query it. It'll generate a time series, and that time series is reflective of what actually happened.
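The reinflation mechanism described here can be sketched like this. It is an illustration, not Honeycomb's implementation: each kept span is tagged with adjusted_count = 1 / sampling_probability (the number of spans it statistically represents), and the backend sums those tags at query time to estimate what actually happened. The attribute name and span shape are assumptions.

```python
def sample_spans(spans, probability, decide):
    """Keep spans for which decide(span) is True, tagging each kept
    span with the adjusted count it carries (1 / probability)."""
    kept = []
    for span in spans:
        if decide(span):
            kept.append(dict(span, adjusted_count=1.0 / probability))
    return kept

def reinflate(sampled_spans):
    """Backend-side estimate of how many spans actually happened:
    the sum of the adjusted counts of the spans that survived."""
    return sum(s["adjusted_count"] for s in sampled_spans)
```

With a true probabilistic decision the estimate is unbiased but carries statistical error, which is the trade-off discussed in the rest of the conversation; the deterministic every-Nth decision below is just to make the arithmetic exact for illustration.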
D: So, we see that, based on the estimated adjusted counts, it would make sense to generate throughput metrics. But if you go and use the error codes and start extrapolating, then that might give you some misleading results.
A: Exactly, yep, and that's the trade-off; I guess it's important to understand what that trade-off is. But yeah, there's going to be a lot of statistical error through sampling.
A: Yeah, well, it's not feasible to have every span tag or attribute reflected as a label on the Prometheus metric, so we're just picking out some very key things. Like duration: every span has a duration, so obviously that's going to be reflected. And, back in OpenTracing terminology, component.
A: Every span has a component, and you want to know what instrumentation was generating that span, so that's a label. But when it comes to the whole range of uses of span tags in our platform, we can't capture every single one of them as a metric label. Plus, the cardinality of those values is too high to make that efficient with Prometheus. So we're using Prometheus to just get a few labels that we know are low-cardinality, include those in the metrics, and then we have a really good, highly accurate representation of what happened, just in those dimensions. And then for everything else, which can be very wide, we can use sampling and reinflation, and Honeycomb can give you an idea of what happened, but it's not totally accurate.
B: By the way, I don't know the details, but at Lightstep we're also doing something very similar.
A: Yeah, and I also like the idea of being able to get highly accurate metrics, but on a few dimensions and not all of them, because it's cheap: we don't have to send any data down any pipeline to gather metrics from; we're doing it right at the source. But, due to these trade-offs, both approaches can be useful.
D: Yeah, I see your point. So, as an application owner, one has to decide up front which metrics to generate directly and for which ones the statistical error is acceptable. And we were also thinking: as more and more libraries start generating metrics for you, I guess you have to make sure you do enough configuration to get the level of control that we were just talking about.
A: Yes, yeah, that's true. I mean, on the sampling configuration side, we're using remote-controlled sampling from Jaeger to configure our samplers, and that's another topic that I want to bring up, but I think we're going to get around to it after Josh has gotten enough eyes on his proposal. Remote sampling is something that I'd like to work on, actually.
E: So, just first, what we're doing in the Java auto-instrumentation currently is we're creating metrics using the dimensions that are specced out. So the spec defines, for HTTP, which attributes (labels, dimensions, whatever we're calling them) should be captured, and that would be your sort of base, no-statistical-error attributes. And then, well, I'm guessing that Honeycomb doesn't generate the metrics on ingestion; it's more on the fly, when you're querying the back end, that they factor in the adjusted count.
A: Yeah, I think that's how it works. I mean, we've been Honeycomb users for over a year, and yeah, you issue a query and it'll generate a time series on the fly and take those adjusted counts into account.
E: Cool, so that would then be for everything other than those specced-out metric attributes, at least in the Java instrumentation case.
E: So not from spans; there are still separate signals in the OpenTelemetry APIs. But in the instrumentation repo we have an instrumentation API where you call operation start and operation end, and it will generate both spans and metrics.
E: For you, you know, using those same sets of attributes that you gave it, and the same timings.
A: Yes, yeah. We were using Java SpecialAgent, actually, and I took a version of it that worked and modified it for our needs, as it was at the time, before the auto-instrumentation for Java went GA.
A: That was the only thing that kind of worked for us and that allowed us to modify and build our own instrumentation rules.
E: Cool, so I will keep my eye out for you in the instrumentation repo, yeah.
E: That was my impression of how we were... oh, you mean of having an API, like an instrumentation API? Yeah. Let me post it; there's an OTEP that Anuraag wrote.
E: Yeah, so that was sort of the proposal: trying to, you know, share our experience in Java and try to maybe have something similar in other languages.
E: And then we were chatting internally yesterday and had a question about the sampling, sort of the concept of changing... oh, I can't remember what we call it.
E: Yeah, where you sort of... I mean, we understand that we can't put a strict hard limit on it. So, as background, our customers tend to be very cost-conscious, and our pricing model is by ingested gigabytes.
E
So
they
tend
to
like
these
kinds
of
cost
controls
like
the
old
to
say.
You
know,
this
is
the
maximum
number
of
spans
that
can
be
ingest,
can
be
sent
over
some
period
of
time,
and
so
with
the
probabilistic
sampler,
that's
not
possible,
but
it
is
possible
to
sort
of
approximate
a
a
limit
by
adjusting
the
sampling
rate.
E: That is, varying the probability over time, based on your best guess of what the traffic will be in the next minute or five minutes, from historical trends. And we're just kind of curious, community-wise, whether that topic had come up and whether there are any limitations to that sort of adaptive sampling approach of varying the probability over time.
A: Yeah, this is something that I considered using a year and a half ago and decided against: looking at all the feedback-loop machinery to make adaptive sampling work, as it was specced out in Jaeger, I thought, well, this is a lot of components that I don't really feel like bringing up right now.
A: I might have a simpler way of doing this, though not as mathematically rigorous, and that's actually just to use a rate-limited sampler. So I'm making a sampling decision, and that decision is yes maybe once every five seconds, and for that random trace where I said yes, that trace gets sampled. And I just build up a count over all of the sampling decisions that I make, whether they be yes or no.
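A minimal sketch of the rate-limited sampler A describes, under the assumption that the adjusted count of a sampled trace is the number of root-level decisions made since the previous "yes" (the rejected traces it stands for, plus itself). Class and method names are hypothetical, and the clock is injected only to make the behavior deterministic.

```python
class RateLimitedSampler:
    """Says "yes" at most once per interval and counts every decision
    in between, so each sampled trace carries an exact adjusted count."""

    def __init__(self, interval_seconds, clock):
        self._interval = interval_seconds
        self._clock = clock
        self._last_yes = None          # time of the previous "yes"
        self._decisions_since_yes = 0  # includes the current decision

    def should_sample(self):
        """Returns (sampled, adjusted_count); the count is only
        meaningful when sampled is True."""
        self._decisions_since_yes += 1
        now = self._clock()
        if self._last_yes is None or now - self._last_yes >= self._interval:
            self._last_yes = now
            count = self._decisions_since_yes
            self._decisions_since_yes = 0
            return True, count
        return False, 0
```

With a five-second interval this yields at most one sampled trace per five seconds, so the output rate and cost are fixed and no feedback loop is needed.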
E: Oh, I see, I got it.
A: Yeah, that way I don't need a feedback loop, and I'm getting a guaranteed rate of output and fixed costs.
A: But I'm not sure why... I mean, maybe Yuri or Josh would have some good reasons, or be able to tell me what the trade-offs are in doing things that way versus adjusting a probabilistic sampler, and the math behind it, but it works well enough for us, I guess. I mean, on the back end I see things reinflated and I'm like, yep, that's exactly what happened.
E: And so I'm guessing you're just making that sampling decision once at the head and then using parent-based sampling throughout? (A: Yep, exactly.) Yeah, that definitely makes things simpler.
A: Yeah, I know there have been conversations about secondary sampling, or sort of making another sampling decision downstream if there's a part of the trace that's very noisy. We haven't implemented that; we know that that's a gap.
B: By the way, I don't know if this is exactly what you're looking for, but there was a conversation regarding reservoir sampling, and there was a prototype that was mentioned, so I can paste the link here. I'm not super familiar with this code; it was posted during the discussions regarding the OTEPs, and it's about how to rate-limit sampling.
B: I know that, for example, Josh was thinking about probably putting something like this in the collector, you know, like a processor that would be a reservoir and would just limit the output.
A: And talking about that rate-limited sampling that we're using: as I look through Josh's PR, some things come to mind, like where it said that for non-probabilistic samplers the adjusted count is unknown. But I'm like, well, I have a case where I do know it; it only makes sense, though, when I'm making that sampling decision at the very beginning.
E: Yeah, that would be very interesting. I'd love to hear Josh's thoughts on your use case, because I'm wondering how... like, you have the adjusted count; you could propagate that.
E: Cool, well, thanks everyone for the discussion.
A: I don't know, let's put our names in the meeting notes and if they're...