From YouTube: 2022-06-02 meeting
Description
cncf-opentelemetry meeting-2's Personal Meeting Room
A
Yeah, I mean, one use case would be, for example, that you have some probability sampler up front, right, but you want to make sure that the number of spans you collect per minute does not exceed a given limit. So this is a hard limit, and this would be an option to do that in the span processor.
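The setup described here could be sketched as a standalone processor. This is a hypothetical illustration of the idea, not the OpenTelemetry SDK API; the class and method names are made up:

```python
import time

class RateLimitingSpanProcessor:
    """Hypothetical sketch: after an upfront probability sampler,
    enforce a hard cap on spans exported per one-minute window by
    dropping anything over the limit."""

    def __init__(self, max_spans_per_minute):
        self.limit = max_spans_per_minute
        self.window_start = time.monotonic()
        self.count = 0
        self.exported = []  # stand-in for a real exporter

    def on_end(self, span):
        now = time.monotonic()
        if now - self.window_start >= 60.0:
            # start a fresh one-minute window
            self.window_start = now
            self.count = 0
        if self.count < self.limit:
            self.count += 1
            self.exported.append(span)
        # otherwise the span is dropped: the limit is a hard cap

# e.g. cap at 3 spans per minute
processor = RateLimitingSpanProcessor(3)
for i in range(5):
    processor.on_end(f"span-{i}")
```

The probability sampler reduces average volume; this processor only kicks in when the configured ceiling would otherwise be exceeded.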
C
I just started reading some of the comments. This seems like... I remember you had posted a draft of this before. It's a very long comment; I'm sure I could read it, but I won't read it now. It does suggest, probably, you know, a write-up that's not in code, but I'm not going to ask you to do that, because I know how short time is.
C
I think it would probably be nicer to see if people want to adopt this, and I think your point about the upfront probability sampling is that you're not actually really interested in breaking your traces. What you would do is set your sampling probability so that you're almost never going to hit your fixed hard limit; you're only going to get this consistent sampling and broken traces when the rate limit is exceeded, for some surprising reason, I guess.
A
Yeah, this is not yet specified. So what I did in this case, even though there's no r-value, is put the p-value in, so you can identify that this is a span which was inconsistently sampled before, and also in the reservoir sampling processor.
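A minimal sketch of what "a p-value without an r-value" could look like on the wire, assuming the draft consistent-probability-sampling encoding of `p` and `r` subkeys in the `ot` tracestate entry; the helper names are made up and the real spec has stricter validation:

```python
def parse_ot_tracestate(value):
    """Parse an 'ot' tracestate entry of the shape used by the draft
    consistent probability sampling spec, e.g. "p:3;r:10".
    Simplified sketch; the real spec has more fields and rules."""
    fields = {}
    for part in value.split(";"):
        key, sep, num = part.partition(":")
        if sep and key in ("p", "r") and num.isdigit():
            fields[key] = int(num)
    return fields

def inconsistently_sampled(fields):
    # A p-value with no accompanying r-value marks a span that some
    # earlier stage sampled without consistent randomness -- the
    # case described above.
    return "p" in fields and "r" not in fields
```

A downstream consumer could use this flag to treat such spans differently when estimating counts.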
C
Thank you for sharing. I think it's going to take a while to digest. I will be glad to review this to help get it in, so that I understand it as well. Yeah.
C
I don't see anything else on the agenda. I always come to this meeting enthusiastic but feeling guilty, because my employer wants me to finish the metrics project at OpenTelemetry, and I'm just really here to wish I was working on sampling every week or every other week. I will definitely review 352 here. Thank you. Armor, thanks.
D
Well, just since I'm the new face in the group, I just want to say: I report into Dan Darfler's organization, and over the next few months the team that I will be the lead for will be owning, you know, Honeycomb's representation of the Collector plus the thing we call Refinery and things like that, where we do sampling and stuff. So it seemed like it made a lot of sense for me to start coming up to speed on what this group is talking about and being involved here, because that's going to be in my ambit over the next, you know, year or more.
C
A lot of interest in the Honeycomb approach to sampling, and it certainly is something Spencer here has talked about, having brought him towards OTel sampling as well.
C
Great, yeah. This group here is theoretically working on techniques that will help us sample data before it gets to your SaaS, right, and in theory they will work in concert together and they'll be able to count spans even though they've been sampled in more than one place. It's great to have you here, and I am looking forward. I think that, slowly but surely, we are creeping towards a configurable sampler that can do accounting. So that is, again, our objective.
C
I don't have anything more that I meant to talk about. I did get a customer asking me about sampling metrics this week, and I will say: there's something there, but I don't intend to talk about it, other than to say that VarOpt, which I've talked about in this group a bunch, can be used to reduce metrics cardinality, and I'm excited about that.
C
Yeah, so actually I mentioned this to Ben; in the past he's looked at this paper. I have a Lightstep implementation of this VarOpt sampler. It's not always what you want; it's a technique. It doesn't necessarily work for tracing very much, because we want consistency more than we want low variance. But in the case of this example, it was somebody who's trying to count the most frequent queries in a database system: they've hashed their query ID, and now they're counting with a label.
C
So, if you imagine a metric system based on counting those queries over short intervals: even over a short interval you're going to come up with more than your quota for output for an individual server. You can flush your output, but if you want to fix the size of your output, now you can sample your metrics down and inflate their values. In theory, if this works, and I believe it will, you can then estimate your most frequent queries using metrics that have exploded cardinality but are still accurate thanks to sampling. And that is why I'm still working on metrics but still coming to this meeting.
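The sample-down-and-inflate idea can be sketched with a simplified, VarOpt-flavored threshold sample. This is an illustration under assumptions, not the full VarOpt_k algorithm from the paper or the Lightstep implementation mentioned above; the function and series names are made up:

```python
import random

def threshold_sample(counts, k, rng):
    """Simplified threshold sampling sketch (VarOpt-flavored, not
    the full VarOpt_k algorithm): pick a threshold tau so that
    items with weight >= tau are always kept, and lighter items
    are kept with probability w / tau with their weight inflated
    to tau.  The sum of adjusted weights is then an unbiased
    estimate of the true total."""
    items = sorted(counts.items(), key=lambda kv: -kv[1])
    if len(items) <= k:
        return dict(items)  # already within quota, keep everything
    # With the t heaviest items always kept, the rest must supply
    # k - t expected picks, so tau = (weight of the rest) / (k - t).
    for t in range(k):
        rest = sum(w for _, w in items[t:])
        tau = rest / (k - t)
        if items[t][1] < tau:
            break  # consistent: every remaining item is "light"
    sample = dict(items[:t])
    for name, w in items[t:]:
        if rng.random() < w / tau:
            sample[name] = tau  # inflate to keep estimate unbiased
    return sample

# hashed-query-ID counts from one short interval, quota of 4 series
counts = {"q1": 100, "q2": 50, "q3": 8, "q4": 6, "q5": 4, "q6": 2}
sample = threshold_sample(counts, 4, random.Random(7))
```

Heavy series survive with their exact counts; light series are kept probabilistically with inflated counts, so the totals remain unbiased while the output size stays near the quota, which is the fixed-size-output behavior described above.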
B
For my part, as I've said, I've been working on stuff, but not too much work from me either in the past two weeks. My job that is unrelated to observability has been a bit busy, but yeah, I still do intend to share some stuff.
B
I think I need to do what I've said in the past and, like, shard it up into reviewable, interesting pieces, rather than some kind of big-bang thing. I think that will be the better strategy for the kind of bandwidth I have. So that remains my goal, but I don't have anything this week.