From YouTube: 2022-01-27 meeting
A: Good morning. This meeting... well, for me it's morning, probably not for everyone. I put some things I thought of in the agenda to talk about, but at this point it feels like a milestone has been reached and I don't have a lot to say at the moment: we merged the big PR yesterday.

A: Thank you all for your review. It was largely a group effort, so thank you. I will offer the agenda items I was able to find when I thought about it earlier today, but maybe others have things to talk about. As I hinted at last time, or at least in the spec SIGs when I was lobbying to get this thing merged: as soon as this merges, we're off to new things. Obviously, the community has been really asking for configurable sampling.

A: The Jaeger configurable sampler protocol is widely known and implemented. So from my perspective, the next thing to do to give the users what they want is to start doing something, and I think implementing compatibility with the Jaeger sampler is probably the first thing.

A: We can take a look at that. It's as you might think or expect: the two core built-in types are probability and rate limiting, and you can either set a probability or a hard rate limit.

A: I will say that this is exactly what I think people want. Well, not exactly: we've had customers saying they want regular expressions instead of a string here, and I'm not sure. Do any of you have thoughts on this topic, other than the broad ones that probably everyone has, which is that they want this?

B: I guess my question is push versus pull. This looks like a pull definition with the service manager you've got at the bottom right.

A: So there are two things here in this file, and it being protobuf, you get two things in the same place. This is both a protocol definition that can be exchanged over the wire to configure samplers, and Jaeger has this HTTP endpoint that you can use to grab your sampling strategy: you tell it your service name, and that's it. So this is pretty bare-bones, and I think we should start there.
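As a rough illustration of that bare-bones flow, here is a sketch of what a client might do: build the per-service strategy URL and interpret the two built-in strategy types. The host, port, field names, and the canned response below are illustrative assumptions, not taken from the spec discussed in the meeting.

```python
import json
from urllib.parse import urlencode

# Hypothetical base URL; the actual host/port is deployment-specific.
BASE = "http://localhost:5778/sampling"

def strategy_url(service_name: str) -> str:
    # The caller only supplies its service name, as described above.
    return BASE + "?" + urlencode({"service": service_name})

# An illustrative response body showing the two core built-in types:
# a sampling probability or a hard rate limit.
sample_response = json.loads("""
{
  "strategyType": "PROBABILISTIC",
  "probabilisticSampling": {"samplingRate": 0.25}
}
""")

def describe(strategy: dict) -> str:
    # Dispatch on the strategy type to summarize the configured policy.
    if strategy["strategyType"] == "PROBABILISTIC":
        return "probability=%s" % strategy["probabilisticSampling"]["samplingRate"]
    return "rate_limit=%s/s" % strategy["rateLimitingSampling"]["maxTracesPerSecond"]
```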
A: But for the basics here, I think what we ought to do is start actually building an artifact that people could link into their binary and use. If they're Jaeger users, they'll already have that configurable sampler endpoint to use. But for my users at first, perhaps, or just to get OpenTelemetry excited about this, I'd say we need something you can configure through YAML, or statically when you're setting up your binary. Now, I know Peter and [inaudible] are kind of exclusively interested in sampling and maybe haven't been attending all the other OTel meetings.
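To make the static-configuration idea concrete, a YAML file for sampler configuration might look something like the sketch below. Every key name here is invented for illustration; no such schema is specified anywhere in the discussion.

```yaml
# Purely illustrative; no key below comes from any published spec.
sampling:
  default:
    type: probabilistic
    probability: 0.1
  rules:
    - service: checkout        # hypothetical selection criterion
      type: rate_limiting
      max_traces_per_second: 5
```

The point is only that the same policy shapes (a probability or a hard rate limit) could be expressed statically instead of being pulled from a remote endpoint.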
A: No, I've seen you in some other meetings. I want to show you all something here. Well, actually, this isn't the right place; I'm going to show you something similar in the metrics specification, because I want to draw an analogy, to show you the similarity in the metrics SDK spec. This, by the way, is what my attention has been on lately; it's kind of the first priority for OTel.

A: We have this concept of a view, and this came from OpenCensus, the idea being that you can... I'm going to put this link into both the chat, so you can load it yourself, and the notes.

A: And then go back to there. So the idea here is that tracing and metrics just separated earlier in the lifetime of OpenTelemetry, and this here is a request that came in from OpenCensus. OpenCensus had its sights set high; this was almost blue-sky when it started: let's just make it possible to configure your metrics library, whereas in the rest of the metrics world Prometheus was the only other model.

A: Inside of Prometheus, you set all this stuff up inside your server. But inside of the OpenTelemetry model, with OpenCensus as precedent, we're proposing to set all this configuration up in the binary, and now you could imagine getting a view configuration sent to you from a remote endpoint, at which point it would become a remote view configuration. My point in showing you all this is that there's almost a one-to-one correspondence between what people want for spans and what was theoretically asked for, and is now being delivered, for views.

A: So this is kind of like a use-case list, but the point here is that this is exactly a structure for what you can select on to choose your behavior. Spans don't have types, but they have kinds: internal, server, client, and so on. Am I a remote endpoint or am I an interior span? Am I a server span or a client span? They have that span kind field, which has always felt a little bit not quite well defined.

A: We have span names; we have meter names; instrumentation library names, or scope names as they're being called now; the version; we have schema URL. All of these have an analog in spans.

A: It says you may choose to support more criteria. For example, metrics can be integers or doubles; there are other ideas you might be able to use. The criteria are considered additive, so we're building up conjunctions here.
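The "additive criteria" idea above can be sketched in a few lines: each criterion you add further restricts the match, so the clauses form a conjunction. The span fields and criterion names below are illustrative assumptions, not taken from any spec.

```python
def matches(span: dict, criteria: dict) -> bool:
    # A span matches only if it satisfies EVERY criterion (logical AND);
    # adding a criterion can only narrow the selection, never widen it.
    return all(span.get(field) == want for field, want in criteria.items())

# A hypothetical span with the kinds of fields discussed above.
span = {"name": "GET /users", "kind": "SERVER", "scope": "http-lib"}
```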
A: If you give an invalid predicate, it's an error. Then there's the output name: this is saying that after I match some stuff, I'm going to produce a name that's different from my original name, probably because I'm changing something. In metrics, this has a requirement behind it. There's a thing we call single-writer, which says you cannot intentionally write the same metric from more than one place or one process at the same time. If it's unintentional, we accept that; that's hard to avoid sometimes. But intentional duplication of a stream is ruled out.

A: This way we can avoid double counting, for example. So you might say that there's an analog of the single-writer rule for spans, which says you ought not to create a view which lets you over-count spans, and I would say that requires something like: you can't produce the same name from more than one clause of a view.
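That proposed rule (no two clauses of a view may emit the same output name) lends itself to a simple validation pass. The clause structure here is a hypothetical sketch of the idea, not a specified format.

```python
def validate_views(clauses: list) -> None:
    # Reject a configuration in which two clauses could emit the same
    # output name, since that could double-count spans (the analog of
    # the single-writer rule described above).
    seen = set()
    for clause in clauses:
        name = clause["output_name"]
        if name in seen:
            raise ValueError("duplicate output name: " + name)
        seen.add(name)
```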
A: For example, this deserves a little bit more clarity or concreteness, but I'm skipping through it. And then you can configure the output. Spans don't really have descriptions, which I kind of wish they did. Then there's listing the attributes; some people are going to want to say...

A: I don't think this is such a big deal in spans today, maybe, but if you've got 15 attributes and you really don't need them all, you could... In metrics, what you're doing is saying: strip away some attributes, I don't need them, because you're about to aggregate. So if you were doing sampling and then doing span-to-metrics, the same concern would come up and you could remove those attributes; it doesn't quite matter without aggregation.

A: We don't really aggregate spans right now; we just send them out. So there's not much of an analog here; most of the configuration is just "send a span," at the end of the day. But potentially you could imagine setting up an SDK with views so that you have one view configuration for one endpoint and a different view configuration for another endpoint.

A: I don't hear that request too much, but it's definitely something that makes more sense in metrics. So you might, for example, have Prometheus scraping you with one set of cardinality, with two dimensions, while you're pushing your metrics to a different location with five dimensions. Maybe that's because you're doing a switch-over from one to the other, but maybe it's because you have uses for five dimensions over here, where it costs something, while over here, where you're doing alerting, you only want two dimensions.

A: The possibilities here... I think I've overdone it making this comparison.

A: But my high-level statement is: I think a configurable sampler is another way of saying views for spans, and my inclination is to follow the same path. Meaning, OTel can begin to experiment and write a specification that says there is a way to configure span views, and that will be neither through a remote endpoint nor through a YAML file; it will be through API calls, which are not exactly specified, the same way they're not for views. We didn't say what the constructor of the view looked like.

A: We just said it's possible to make a view. So this first step, then, would be a span view spec that says you can configure policies that are either probabilistic or non-probabilistic (rate-limited), and you can use conjunctions to select on span name and possibly something else. I'm hesitant to say you can select on attribute value, because that raises computational cost and complexity quite a lot.

A: So that would be part one. Part two is adapting Jaeger's remote sampler to that. So you could fetch a remote sampler from Jaeger and translate it into the same logic of the view configuration that you've got, or you could read it from a YAML file and translate it into that same logic. That's how I see this going. I've talked a lot. Would anyone like to add anything? I think I've now answered the idea of your question.
B: Yeah, so I guess to rephrase it slightly: you're seeing it as some sort of service that's set up as part of the SDK, which will do the pull, but will then translate into something like the view definition, which is then pushed into the sampler.
A: Yeah, that's right. And another user might say: I don't have a remote endpoint, I just want to push out my file, which is how Amazon's doing it, I think. They provide a way for you to push a file to every machine in your cluster, and it changes, maybe, over time. But you just read that file, and maybe re-read it once in a while, yeah.

A: A shared space, yep. Understood; that's the high level. It's not my first priority in OTel, because we're still getting the metrics SDK out and such, but this is really Lightstep's priority, so I in particular plan to begin doing something here. I felt like for today's meeting I don't have a lot to share other than what I intend to do.

A: So I would say: implementing the Jaeger protocol as a proof of concept that you can configure would be kind of step one, just to see what we've got. But I would present it as a views configuration for spans, plus an adapter to pull it in. I've referred to this issue here, issue 2179. This came from one of Ottmar's comments in the big PR that we just merged.

A: I don't know that we should summarize it or beat it up right here, but it points to there being API problems, and it's hard to imagine how this... When you read through the SDK spec, you get this idea that there are these composable samplers: there's an always-on, there's an always-off.

A: I plan to do this to show people, because until we do this, I don't think the composition is going to work very well or be very natural. I think very few people are going to understand this until they see the code for it, so I would like to just prototype that and show them. It may be a bit contentious, but without something like this, I'm going to end up with a sampler that shares nothing.

A: It's not using parent-based; I'll have to re-implement parent-based. It doesn't do what I want, but it could very easily. So that's what I would do there. I put that on the agenda, but otherwise I don't have much to say, and I would check back in two weeks and see if we have something more to say about this.

A: That leaves one last bullet here. I think we've comprehensively covered this territory of rate-limited... When the Jaeger spec that we just looked at briefly earlier was added as a link from the OTel spec...

A: We had to change the language a little bit, because the rate-limited there was referring to a RateLimited sampler that doesn't exist, so I had us rephrase it as lowercase-r, lowercase-l "rate limited," because it's not specified, and there's sort of a gap in the spec right now. And from this group, we've learned at least three ways to do rate limiting.

A: My guess is that every scenario, every user, will have a different preference, and in our case Lightstep prefers the second of these bullets: we want complete traces, I don't want tail sampling, therefore...

A: I don't really know where... I feel like there's a headwind pointing towards the leaky-bucket approach, but it's non-probabilistic and I'm not going to endorse it. So I think what we should try to aim for is, at least...

A: Well, the spec needs to say what rate-limited is, and maybe it could be a kind of loose spec that says any old rate limit would do. But I would prefer... I don't know how to give the user this choice, if it deserves to be a choice, is all I'm trying to say, and I'm going to leave that open. I don't really know what we ought to do.

A: ...on how to do a consistent tail sampler that's rate-limited. That's neat.
A: Speaking from my vendor's perspective, incomplete traces are not something we're ready for yet, so that's not going to appeal to us much. And my feeling is that the users are asking for leaky bucket because they don't really know how practical it is to do adaptive probability sampling, and how the risk of exceeding your trace budget is tolerable or controllable in this scenario.

A: Here's my question. I'm not a math expert, I never claimed to be, so I'm always feeling a little bit out of my league. But here's how I see the adaptive approach.

A: We will get this type of expectation regarding over- and under-delivery, because there's going to be some complexity. I think you can get more and more complex to tighten that bound, perhaps, but I don't think we want that. And I think the thing that makes people uneasy is just that you're saying: here's a sampler; on time average it will give you what you're looking for, but on a short time interval I can't tell you what it's going to do.

A: It might exceed your rate limit, and I wish I had more confidence, so it's a little abstract. I was looking at an algorithm where you're going to, of course, use one window of rate information to target the next, and I'm kind of assuming that the next is going to be stable, like it's the same distribution that I'm sampling, and of course, in a very short time interval...

A: I think that's safe. But I don't know if anyone here has feelings about how to design a simple adaptive update with nice, provable properties. That would be interesting to me, and if no one else says anything, I will probably noodle on this a bit and try to share some ideas. But I can't promise anything.
C: I believe the math is pretty complex here, and it definitely depends on the distribution of the load. If the load is very peaky, we will definitely have some issues. But a natural approach here, which I think is doable, is: well, the specification says that we want a rate of a certain number of spans per time unit, and it is given as a second (one second, in the case of Jaeger). Well, we can split this time into several smaller intervals and have the probability change from one to another.
A: Yeah, you just sketched very much the direction I would have taken it. I know how to implement the super-naive version: just one interval; I'm going to set my probability, and it'll probably over- or under-deliver. And then I'm just sort of subdividing that to give myself a budget. So if I break it into four, and during the first quarter I can see I'm already exceeding my budget, I'm going to have to turn it down, and at the fourth window I might end up at zero or something like that.
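A minimal sketch of that subdivision idea: after each sub-interval, recompute the probability from whatever budget remains, so a spike early in the window turns sampling down (possibly to zero by the last quarter). This is an illustrative toy, not any specified algorithm.

```python
def next_probability(remaining_budget: float, expected_spans: float) -> float:
    # Set the sampling probability for the next sub-interval from the
    # budget still unspent and the spans expected to arrive in it.
    if expected_spans <= 0:
        return 1.0
    return max(0.0, min(1.0, remaining_budget / expected_spans))
```

For example, if a spike in the first quarter leaves only 75 spans of budget while 300 more arrivals are expected, the probability drops to 0.25; once the budget is exhausted, it goes to zero, matching the "might end up at zero" scenario above.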
B: So I mainly work on the client side rather than the server, but the analogy I'm about to give is, I think, relevant to the server as well. It's mainly: if you're reporting error spans, and for some reason the client or server is throwing a bunch of errors, you don't want to overload your backend. So you want to have some sort of backoff mechanism, which I think is a rate-limited sampling, where it can start and say, okay...

B: Well, I've got this error X number of times; I'm going to report 100%. But after I get X number, I'm going to drop the probability down so I don't get too many, and then keep backing off on something like a Fibonacci scale. So you get all of them, then you get half of them, then you get a quarter of them, because in the error-spike case... yeah.
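The back-off described above might be sketched like this: report the first X errors at 100%, then keep cutting the probability. The speaker says "Fibonacci scale," but plain halving (all, half, quarter) is used here just to keep the sketch simple; the class and its threshold are hypothetical.

```python
class BackoffErrorSampler:
    """Toy error-spike sampler: full reporting up to a threshold,
    then repeatedly halve the sampling probability."""

    def __init__(self, threshold: int):
        self.threshold = threshold
        self.count = 0
        self.probability = 1.0

    def should_sample(self) -> float:
        # Return the probability applied to this error span.
        self.count += 1
        if self.count > self.threshold:
            self.probability /= 2.0  # back off once the spike starts
        return self.probability
```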
A: And this is the idea that you could probably, with hard math, satisfy people by showing that the probability can be made arbitrarily low or something like that, if you accept some sort of higher variance. That's my hunch. I think it's nice to hear that more than one person understood that idea, and we've all kind of bounced it around.

A: I think it's probably overkill, and I don't have a link right now, but I did at one point implement the very basic, one-interval version. Here's another question for the mathematically inclined, though. Let's suppose that it's just one interval, and to make it concrete: I've got 10 seconds. So, over my 10-second interval...

A: I want a target of 100 spans. So now, for the next 10 seconds, from 10 to 20, I set my probability to, I don't know, one over three or whatever, so that I now expect the next interval to have exactly 100 spans. But now I've passed 20; I've reached the 20-second mark. I look at how many spans I got in that second interval, and I got 150 spans, so I'm over my target by 50.
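The naive one-interval update in that worked example amounts to scaling the previous probability by target/observed: with p = 1/3 and 150 spans observed against a target of 100, the next probability is 1/3 * 100/150 = 2/9. A sketch, assuming this simple proportional correction is the intended update:

```python
def update_probability(p: float, target: float, observed: float) -> float:
    # Scale the previous probability so that, if the next interval's
    # load matches the last one, the expected span count hits the target.
    if observed <= 0:
        return 1.0
    return min(1.0, p * target / observed)
```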
A: I believe we are looking for the maximum a posteriori probability estimate at the end of every interval. What that says (and again, I'm basically out of my league here) is that at time 20 I'm going to use the probability I had prior to time 10, which I used to set my expectation, but at the end of 20 seconds I now have more information about the world.

A: I wish I had a firmer belief in myself on this topic; that's what I'm trying to say, and I will leave it there. I think we can do this.

C: The same; it's just that it would be smoothed over time, because what the user is interested in is what the user specified, and not our own intervals. Yeah.

A: Exactly, so that's what I think; that's what I meant. It was just an improvement: the same idea, but refined and smoothed out. So I guess I'm going to leave that as an open question. I think I know the answer, and I can go read the books, but I'm looking for us to be confident of this. Let's see; the way I remember it being...

A: I need to go back to this and convince myself, but it's that you use the two windows of data to compute your third window; meaning, combine the data that you used to make the first estimate with the data that you just got, to produce your next estimate. That's roughly what you're doing, so that you factor in both your prior knowledge and your current knowledge. I just wish I was more able to fluently explain what we're talking about.
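One simple way to "combine the prior window with the data you just got," as described above, is an exponential moving average of the observed rate, so the next estimate reacts to the latest window without forgetting the earlier ones. The weight of 0.5 is an arbitrary illustrative choice, not anything from the discussion.

```python
def smoothed_rate(prior_rate: float, observed_rate: float,
                  weight: float = 0.5) -> float:
    # Blend prior knowledge with the newest window; a higher weight
    # makes the estimate smoother but slower to react to load changes.
    return weight * prior_rate + (1.0 - weight) * observed_rate
```

With a prior estimate of 100 spans/interval and a new observation of 150, the smoothed estimate is 125 rather than jumping straight to 150.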
B: What's it called again? I don't know... the difficulty calculation in Bitcoin? I know it exists; I don't know the details of how they do it.

A: Well, I feel optimistic, I'm excited, and I really like this group. Thank you all. I didn't have a lot to say today, though, so I think we can call this a finished meeting. Two weeks from now...

A: Hopefully there's more. I would like to have at least started to do something and have something to share. I'm going to tackle that delegating sampler issue, and then maybe revive my first adaptive sampler PR, try to throw it all together, and get a picture of what the thing looks like when you put it all together. That's what I'm hoping.

A: Anything we could use, if you think of it.

E: Yeah, I actually did a prototype implementation for Java, and I already integrated it in the OpenTelemetry Java project. So we have a fork, which is on the database open-source contribution, but...

A: Great. Well, that was the only other thing I thought of. Thank you all, I really appreciate it. I'll see you in two weeks.