From YouTube: 2023-01-10 meeting
Description
cncf-opentelemetry@cncf.io's Personal Meeting Room
A
Hello, everybody, let's start in a couple of minutes. I only see six people at this moment, so let's hope more people join very soon. In the meantime, as usual, please add any important items to the agenda, at least, and your names as well.
It's pretty important, especially for people who were not around in December because of holidays, just to confirm that no surprises happen when we release this. So please review the PR, just in case. I think we were trying to be a little bit more conservative, more or less, but just in case, check that out.
Okay, the next one, from Tigran: we need reviews for supporting maps and heterogeneous arrays. I don't know whether Tigran is around — I think he's not around — so this is probably one of those for others to take a look at. This is an old PR. I think that this is supported at this moment for logging, and probably this is trying to make it more general.
Okay, we're fine then, but please consider checking it out. I see that Tigran has one approval for that one, but yeah, we need more reviews; jmacd and Ken have also been asked to review, to keep that moving forward. Another one from Tigran: we need reviews for clarifying the behavior for an empty, not-present, or invalid trace ID and span ID in OTLP. This is especially important because this is the last one before we can make OTLP/JSON stable.
Basically, what Tigran is trying to do is: when OTLP has no span or trace ID — when it's empty or missing, basically — instead of just forgetting about the data, generate a new span or trace ID, respectively. So that's an interesting behavior. Anthony and Dan from the collector — any approval on that?

B
Can you hear me?

A
I can hear you, Anthony.

B
Yeah, I was just looking for how to remove my approval on this. I think Josh MacDonald has convinced me that this is actually a mistake and we should not be doing this. I think neither he nor Tigran are here today, but I would appreciate it if people would read Josh's comments on this issue.
Oh good, it just appeared. I think it would be a mistake for us to say that we are going to generate random IDs for trace data where we have invalid data. If we don't have a trace ID or span ID, that whole span is invalid; we just can't deal with it.
A
I don't think we ever saw it, but we wanted to basically just cover something, so it wouldn't bite us back in the future. But I don't think we have ever seen that, no.
D
The behavior proposed for logs makes total sense to me, because if they are invalid, then the log line is just not tied to any span. But I also agree — I think it was you, Anthony, right? — that it doesn't make sense this way, because if you generate random span IDs, then the parent span ID of any span pointing to that span would no longer work, so it would not be linked properly. And I don't think that the resulting data would make any sense to anyone, except if you're looking at individual traces or individual spans — you actually can't even know if they belong to the same trace.
E
I'll just say that I support what I've heard. I was really surprised to see this — it took me completely by surprise. I may remember something of that nature being stated, you know, two years ago, but I didn't think it was real, and I never expected it was written anywhere either.
G
So I want to make sure we're all in agreement that it's for trace ID and span ID, but not — I think the — oh, the camera's not on — the log entry thing makes sense, in that you can actually keep the log entry and just drop the span association and the trace association.
That means that there's a whole bunch of things that go bad, and randomly generating an ID could lead to unintended side effects as well, which I think are bad. So, given that we really want the generation of all of that data to be tied together at the source: if the protocol gets invalid data, I think we just drop that entire thing. Like, that's a problem, you know? The span ID is the key identifier for a span, so if it's invalid, you cannot trust anything about that span.
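For illustration only — no code was shown in the meeting — here is a minimal Python sketch of the "drop, don't regenerate" position. It relies only on the OTLP rule that a trace ID is 16 bytes and a span ID is 8 bytes, with all-zero values reserved as invalid; the function and record shapes are hypothetical, not a real OTel API.

```python
# Sketch of receiver-side validation: drop spans with missing or
# invalid IDs rather than regenerating random ones.

TRACE_ID_LEN = 16  # OTLP trace IDs are 16 bytes
SPAN_ID_LEN = 8    # OTLP span IDs are 8 bytes

def has_valid_ids(trace_id: bytes, span_id: bytes) -> bool:
    """True only if both IDs have the right length and are not all zeros."""
    return (
        len(trace_id) == TRACE_ID_LEN and any(trace_id)
        and len(span_id) == SPAN_ID_LEN and any(span_id)
    )

def accept_spans(spans):
    """Keep valid spans; drop (never re-ID) invalid ones."""
    kept = [s for s in spans if has_valid_ids(s["trace_id"], s["span_id"])]
    return kept, len(spans) - len(kept)
```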
E
We believe that trace IDs have to be 16 bytes, and if they're not 16 bytes, then they're in a completely invalid state where the length is incorrect — they have to be either 16 bytes or zero bytes. If you get a five-byte trace ID, then what do we do? That's a real bad situation.
You extend it with zeros, or you don't — I don't know. But I think this is the situation that needed to be addressed to build a collector, because if you're handling pdata in a collector program, you're not going to expect to see a five-byte ID; it's a 16-byte data type. You will get 16 bytes either way, whether it's valid or not, and so I think the author of the collector was faced with making a valid trace ID.
Potentially we could fill it in with zeros and just say: if you see an all-zeros trace ID, it is invalid — please pass it along; the next person will see an invalid trace ID too on the wire. However, as I've heard, the protobuf lets you have a five-byte trace ID. What do we do when we receive that? And I think this started because the JSON format can do all the same things.
C
So I think that's a reasonable solution to both of those cases. But according to the W3C spec, the left-padded trace ID and the trace ID without the left pad should be considered equivalent.
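A small sketch of the equivalence being described, assuming hex-string trace IDs: left-pad to the full 32 hex characters (16 bytes) before comparing. The helper name is illustrative, not a W3C- or OTel-provided function.

```python
# Normalize hex trace IDs so a short ID and its left-padded form
# compare equal; an all-zero result is treated as invalid.

def normalize_trace_id(hex_id: str) -> str:
    """Left-pad a hex trace ID to 32 characters."""
    padded = hex_id.lower().zfill(32)
    if len(padded) != 32 or set(padded) == {"0"}:
        raise ValueError(f"invalid trace ID: {hex_id!r}")
    return padded

assert normalize_trace_id("5b8aa5a2d2c872e8321cf37308d69df2") == \
       normalize_trace_id("5B8AA5A2D2C872E8321CF37308D69DF2")
assert normalize_trace_id("69df2") == "0" * 27 + "69df2"
```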
A
Okay, in that case, let's move to the next item. Ruslan, please merge "Define conversion mapping from OTel exponential histogram to Prometheus native histograms". I see two approvals there, one from Anthony, but I guess we need actual metrics experts here.
B
Yeah, I think that's right — the reviews there should just be formalizing what's long been discussed between us and the Prometheus working group.
A
Uh-huh, yeah, I see that jmacd also mentions that it looks ready to be merged, okay. Yeah, I mean, formally, as per GitHub requirements, we actually need two reviews from approvers, so as soon as I see those two, I will merge that.
Same here — thanks so much. Okay, thanks so much for that. The next one — I put that one there — is a PR that Daniel created regarding the batch span processor; it has been there for a little while. I was a little bit afraid of merging that, because I wanted maintainers to take a look at it, and I see, Riley, that you were commenting something there.
I suspect, Daniel, that you haven't seen those comments yet, so probably we can iterate offline.
No problem. Okay, so let's do that offline. Worst case, we'll just, you know, release these changes a little bit late; otherwise, you know, it should be fine. Okay, any other comments on that front?
J
Yeah, hello. We've talked about this in the spec SIG a couple of weeks ago, and also the log SIG to some degree.
And I have video links for those things. I'm just bringing it up again because we're just after the holidays and I want to bring it to people's attention. Yes, this is also about the batch span processor. This conversation spawned from our conversation in the log SIG when declaring the equivalent defaults for the batch log processor: there's a pretty strong feeling that, with the same defaults for the batch log processor — specifically the delay interval — that five seconds is too long.
Cited use cases are things like live tailing, and also just kind of installation use cases — making it friendly for users to see their data right away. I think Ted said something pretty nice in the spec meeting.
So that's what spawned this issue: to circle back and discuss whether we're comfortable with changing the batch span processor to the same — to whatever we choose for the batch log processor — or, you know, if we want to try to get more scientific (which I think is going to be tough to do), try to decide what the appropriate delay is. But, generally speaking — I think I've stated it here in this issue — values in the range of, like, 200 milliseconds to one second are values that people feel are more right. So I'll stop there.
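For concreteness, here is where the delay under discussion lives in an SDK. This sketch uses the OpenTelemetry Python SDK's BatchSpanProcessor, whose keyword arguments mirror the spec defaults being debated (5000 ms scheduled delay, 512 max export batch, 2048 max queue); lowering schedule_delay_millis toward 200-1000 ms is the kind of change being proposed here for the batch log processor.

```python
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Spec defaults under discussion: 5000 ms scheduled delay, 2048 max queue,
# 512 max export batch.
processor = BatchSpanProcessor(
    ConsoleSpanExporter(),
    max_queue_size=2048,
    schedule_delay_millis=5000,   # the "delay interval" in the discussion
    max_export_batch_size=512,
)
provider = TracerProvider()
provider.add_span_processor(processor)
```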
K
Yeah, it really does depend on your environment. If you look at a client — if you've got millions of clients sending requests to your back end at 200 milliseconds, you're going to pull it down. So for a back-end situation with, you know, a known number of servers, fine; but for a client environment, I would not suggest 200 milliseconds.
I
Even for a service, I think 200 milliseconds is the normal time people use when designing UI interactions — anything less than 300 milliseconds is considered very interactive. For OpenTelemetry, I think, in most cases, this is designed for system handling, not interacting with people. So 200 milliseconds to me is just too long.
All right, I'm confused. I think what I'm suggesting is — I don't understand why we want to opt for 200 milliseconds, because we're not designing a game here. It might help if it's an interactive tutorial for the user, but I think OpenTelemetry defaults shouldn't be targeting the first-time user and the interactive behavior; they should be reasonable for production.
G
So with Dan's change — where, if you fill your batch, it goes out faster — I think we're in a good balance there, where we don't need the 200 milliseconds. If you have something that is generating so much telemetry that it needs to get out the door quickly, you're actually going to have a very rapid export process, because as soon as that batch limit is hit, it's going to fire out, right, with that spec change. So I think — as long as that spec change makes it through to the SDK.
I understand the startup use case, and I would say maybe five seconds to a minute might be okay. But in the case where your service isn't doing anything, we don't want to have a whole ton of overhead, and 200 milliseconds actually just means that our overhead goes up. We're already at a place where we've made a lot of our systems
work solidly, as opposed to, like, super efficiently, with our SDKs — to the point where we hear "oh, the OTel overhead is just a little higher than I want it to be." I don't think we want to be adding any more overhead at this point, and that 200 milliseconds, I think, does add to the overhead of OTel in a way that I'm a little uncomfortable with, unless users really need it.
The default case where they'd really need it would be if they're actually sampling so many spans and generating that much data that it goes out every 200 milliseconds — that's reasonable; that's what they want; that part of the default I buy. But the part where we just always try to send every 200 milliseconds — I'm not seeing a great reason for that. Like, I think a second, sure.
You know, from that diagnostics-first journey standpoint of, like, "okay, let me see what happens." But we want to keep our overhead minimal, and I think more than once a second seems like a high checkpoint for this thread to wake up and start firing things.
H
So what do you think about the fact that the batch processor in the collector has a default interval of 200 milliseconds?
B
I would expect the collector's to be lower, actually, because it's going to be aggregating data from multiple client instances. So say you've got a one-second timeout on five clients: you're going to get the same data flowing through the collector in 200 milliseconds as you would in one second of those clients' aggregations. So I think, just because it's a collective concentration point, having a lower timeout makes sense there.
G
I also expect the collector to be getting batched data already. So, depending on the size of the batches the collector handles, it might actually just be flowing everything through, because it's getting a big bunch of data from a particular resource, right? So again, they're very different use cases.
Yeah, but if you think about this journey, right — my first-startup use case and what I'm using the collector for: I'm sending batch data to the collector, and one thing that I can do with the collector is tune it to batch more, so I send less to my back end over time. But there's not — you know — and again, we can do throughput experiments with a collector with different back ends.
I only know my own back ends, but I don't think you get a huge gain adding a bunch of time to the collector, given how much you aggregate on a per-instance level already and the way that things batch and flow through the collector today. So they're different use cases, for sure, right? The SDK is the source of the data, and I think we need to be very, very highly tuned and very, very careful with the SDK.
With the collector, we have a little more flexibility, because the overhead is directly accounted to it, and it's a different use case, where we expect to have a couple of things coming in, you know, independently, and users might want every batch to come in and go directly out. That's a thing that's way more reasonable in the collector. But again, that assumes it's pre-batched, right? And where it's not pre-batched — like, say, a statsd use case — that 300 milliseconds makes a hell of a lot of sense too, right?
L
I would really like to see maybe the configuration group, or somebody, dig into this with kind of a more research-driven, first-principles approach. For example, we recommend that you run the collector as a local agent as, like, the most common setup, right? You have your SDK talking to a local collector that's also pulling in machine metrics and everything else — in which case all of the arguments just made for how the SDK should be set up would actually apply to that local agent, as far as its batching and timing. And the SDK,
in that scenario — you would want it to be pushing data out as fast as possible, to avoid losing data in the event of a program crash. So I'm just not convinced that we've really thought all of this stuff through. The other issue that I don't think we've thought a lot about — that I'm seeing come up with logs — is that we have OTLP, where we're sending out traces, logs, and metrics together in a batch, right?
We have that possibility, and I wonder what kind of strange thrash we start to generate if, like, the log batch settings and the trace batch settings are misaligned with each other. In terms of OTLP pumping out everything together — is this actually the right way to configure that properly, or are you going to get funky behavior? So, I don't know; it feels to me like this is actually a pretty important issue.
B
Even if you're shipping logs and metrics or traces together, they're all going to be in separate requests. I don't know that there's anything we've specified in any of the SDKs that would talk about aggregating signal types for export prior to sending them to a downstream receiver. So I think these are separate things. In any event, we may —
L
— want to rationalize them. It seems like they should be aggregated together in the case of OTLP.
I don't know how well we've actually thought through this stuff, since OpenTelemetry was just a baby — just tracing — I guess that's my point. When we designed this whole architecture, we didn't have this other stuff in there. I don't want to just tear everything apart — I think it's fine for now — but...
H
You talked about doing, like, a first-principles analysis, and trying to figure out — you know, trying to use some data to support what the defaults for these should be. I think whatever analysis you do is going to be based on a lot of assumptions. You need to make assumptions about the size of each record; you need to make assumptions about the throughput of the system.
You need to make assumptions about your sampling rate, and you need to make assumptions about the length of the logs. And, you know, all of that's going to come down to: these settings will vary based on the system that you're monitoring, like Nev said earlier. And so, for that reason, I actually don't think the defaults for these matter too much. You know, I was trying to take the counterpoint, but I think the default of five seconds is perfectly fine.
G
The defaults, if we can, should be kind of the naive starting point — you know, how I would set up an architecture to begin with. So, to Ted's point about the collector being used locally: I think the collector's defaults today are kind of an amalgamated balance of local and aggregation — it blends those two. That's the reason it sits between the SDK, where it's a local aggregation, and what I would expect a remote aggregation to look like.
But what we could do is set up an architectural kind of thing where we throw, say, you know, an SDK at a collector, at an aggregate collector, look at throughput, and do some "okay, in ideal scenarios with XYZ, here's what things look like, here's what we recommend for defaults." That would actually be a good investment of someone's time: to provide these recommendations in some fashion — like, if your architecture looks like X, here's some recommended defaults.
If your architecture looks like Y, here's some recommended defaults. But I still think it's useful. I mean, to Jack's point: yes, the defaults don't matter, because you have to customize no matter what. It's still useful for us to have an idealized architecture we expect people to start out with, and to give them the least amount of friction with that architecture.
L
Which actually circles back around to the idea that maybe our defaults should be: the SDK assumes it's talking to a local collector, right? Like, half of our defaults assume that already — they point at localhost and stuff like that — and maybe the collector should presume, by default, that it's running as an agent. I don't know if that means automatically collecting machine metrics, but that would certainly be the best approach for newcomers to OpenTelemetry. And if the idea is, like, in production, what we actually need to give —
J
If you're talking about throughput, then the delay is not going to be the thing that triggers the export — it's probably going to be the max batch size. So —
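A quick back-of-the-envelope version of that point, with illustrative numbers rather than anything from the meeting: at the spec defaults (512-span max batch, 5 s delay), the size trigger wins whenever throughput exceeds 512 / 5 ≈ 102 spans per second.

```python
# Which trigger fires first at a steady span rate: the scheduled delay
# or the max batch size? Numbers are illustrative.

def first_trigger(spans_per_sec: float,
                  max_batch: int = 512,
                  delay_s: float = 5.0) -> str:
    """Return which condition flushes the batch first."""
    time_to_fill = max_batch / spans_per_sec  # seconds to reach max_batch
    return "size" if time_to_fill < delay_s else "delay"

print(first_trigger(10))     # delay: 51.2 s to fill 512 spans > 5 s delay
print(first_trigger(1000))   # size: batch fills in ~0.5 s, before the delay
```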
L
There's a set of these, right, that all kind of have to work with each other: the batch size, the trigger point for early flush, and the delay. I do — sorry.
K
Add the environment to that: if you look at a browser, or a game, as Riley called out — they're not going to have a local collector. So whatever defaults we define, I would want them to be reasonable but not hard requirements, so that, you know, when we come to create more client-focused SDKs, they will have a different set of defaults to what the server has, yeah.
L
I think it's safe to say that once we have, like, a specialized browser client, and also, like, iOS and Android, we could define many, many things that need to be different about how those things connect, right? Like, it's probably fair to say these are server-side defaults, and we need to come up with a whole different set of things for what mobile devices and other clients should be doing by default, because it's not similar at all.
Which is why I'm asking, since Alex is on the line: we have a configuration group that's formed, and I don't want to, like, endlessly dump work on that group, because I know they're focused more on, like, what does a YAML file look like. But I wonder if revisiting the defaults — like, how do you calculate, you know, good default settings based on your system — is something that group could maybe take on.
M
To be honest, I would say a hard no at this point, because we have enough — we have enough just trying to get configuration working at all. I feel like maybe once we have a configuration model and we have the APIs and everybody's happy with them, then... But again, this might be the kind of thing that comes out of, like, a best-practices and user group, as opposed to something that's, you know, coming out of the configuration group.
N
I don't know — I think Alex is right; we're trying to get the configuration in, I don't know. Okay, I do think that there are people who are suited — who have been in this conversation — that could probably tackle this. Like, I think that one of the things that I've been hearing the most is that there are a lot of subjective ideas as to what is ideal here, but I think, to your point, Ted, there needs to be a little bit more objectivity.
Obviously you can't lay down something that's concrete for every situation, but I think that you can, you know, objectively look at all of the situations, try to identify them, and then build the mortgage-rate calculator — because I think that's a great idea. Then I also think that you could just apply a little bit more scrutiny, in, like, the user groups, into saying, like, you know: how is this going to actually affect people? But yeah.

L
That's fine, yeah.
K
I think it's also going to depend on the language and the runtime as well. Like, the process of serializing — in JavaScript, from an object into OTLP — is not cheap; it can take, you know, milliseconds. And then we're talking about adding compression on top of that.
So whatever the defaults are, I think they will have to be defined based on the runtime and language. Like, I don't know the characteristics of Go or Python, or, you know, other interpreted languages — C++ and C# are obviously going to be way better — but these are all considerations that need to go into said mortgage calculator.
H
So I think we're all kind of circling around this idea that this mortgage calculator — a best-practices guide, with some math to back it up, and examples — would be a good thing. A little bit of context about this problem: the reason we're talking about this specific issue for logs is that it's one of the few remaining issues before, you know,
we think the log API and SDK is in a good place. And so, you know, how much should we let this type of conversation block logs from proceeding? Or should we just come up with a value that's good enough — maybe based off some prior art in a different signal or in a different system — and clarify that setting later with some best-practices guides?
There's other prior art we can follow as well, sure — you know, the collector, and other log systems like Fluentd. I guess the question there is: is it more important to have alignment with another signal in the SDK — with the trace batch span processor — or with other prior art that exists in the logging ecosystem, the logging world?
L
I guess what I worry about — we're talking about first principles, but maybe another term is just coherence. Like, one of the things we're seeing is that we've tacked on these defaults over time.
Maybe the end result is not particularly coherent, even if the individual decisions have some reasoning behind them. I think that's the main thing I would be a little worried about with logging. So I guess my request to that group would be: whether we like the tracing defaults or not — and the answer is "not, but we're going to keep them for now" — just, what is a coherent set of logging defaults?
A
Would you mind commenting on that? There's a PR that is related to this — he was explicit against that — and probably you have to discuss it with him; I'm sure he thinks the opposite. And yeah, hopefully we can get the conversation moving on. Basically, he was saying that he thinks that, for example, in the PR that I placed, the five seconds is, you know, too long, and he would oppose going, you know, with consistency if it affects the user experience.
B
Well — and this is only for the logs SDK, for a live-tailing use case: he's describing a particular use case and saying that we need to set the defaults for that use case. Okay — I think much of the discussion we've been having here is coming to the point that, no, there are many, many use cases for this, and we have to teach people how to appropriately configure this for themselves.
A
Okay, so if there are no more comments on that front for now, let's move to the next one. I just put that one there; it's about making exponential histograms stable. This has been there for a few weeks now, and there are two things to discuss.
One of them was whether the exponential histogram aggregation should be a "should" or a "must" for the SDKs. And there was a question from Riley yesterday about this so-called taint mode — I just wanted to put it here. I think it needs some final reviews from jmacd and Joshua Suereth at least, so please consider that.
And I guess one indirect question is, as I was asking before, whether this aggregation should be a "should" or a "must" for languages.
H
I originally had it as a "must". I'm happy to soften it to a "should" because of the reasons that George outlined — which is, you know, essentially that it may be tricky for some language SDKs to implement exponential histograms without a lot of the domain knowledge. And so, should the lack of exponential histograms prevent them from having a stable and compliant metrics SDK?
I think not. But, you know, I like the "should" language. I think we really should aim for all of our OpenTelemetry SDKs to ultimately support exponential histograms, and there should be really, really good reasons if they choose not to have an exponential histogram aggregation.
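As a concrete picture of what the "should" asks of an SDK, here is a sketch using the OpenTelemetry Python SDK, which exposes an exponential bucket aggregation in recent versions — the exact import path and availability vary by SDK and version, so treat this as illustrative.

```python
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import (
    ConsoleMetricExporter,
    PeriodicExportingMetricReader,
)
from opentelemetry.sdk.metrics.view import (
    ExponentialBucketHistogramAggregation,
    View,
)

# Route one histogram instrument to the exponential (auto-scaling)
# bucket aggregation instead of the default explicit buckets.
provider = MeterProvider(
    metric_readers=[PeriodicExportingMetricReader(ConsoleMetricExporter())],
    views=[View(
        instrument_name="request.latency",
        aggregation=ExponentialBucketHistogramAggregation(),
    )],
)
meter = provider.get_meter("demo")
latency = meter.create_histogram("request.latency", unit="ms")
latency.record(3.2)
```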
E
There's a question about taint mode, though — the words Riley used to describe the sort of problem that happens when you have a cumulative histogram that accumulates a very large range over time and becomes less and less resolved. To me, that one's connected with a lot of sort-of future wish-list items, including the ability to forget time series in general — so to say that resetting a histogram is another way of declaring you had a cardinality explosion and starting over.
If we could figure out how to do that more fundamental thing — meaning to forget time series generally — then we wouldn't have the taint problem; we'd just reset when you need to avoid a taint, essentially. So I would suppose that could be something that's post-1.0 for exponential histograms, and hopefully addressed in a more general way than just a special-casing of histograms. It does look like the Prometheus group is getting behind this histogram, and there's compatibility — no loss going between them — and so on.
H
On solving the taint issue — can that be done as future work, in a backwards-compatible way?
E
Yes. I would solve that more or less by resetting the start timestamp field that we have. And the question is: how do you know when to do that, and how do you do that safely? That, I think, is the bigger question.
G
One thing I want to call out that exponential histograms really enable for us is dynamic bucket ranges — and the fact that you shouldn't rely on any specific bucket being exactly the same from point to point to point anyway.
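To make "dynamic bucket ranges" concrete: in the exponential histogram data model, a histogram at scale s has base 2^(2^−s), and a value v lands in the bucket index satisfying base^index < v ≤ base^(index+1). Rescaling — changing s — is what moves resolution to where the distribution lies. A small sketch with illustrative numbers:

```python
import math

# OTel exponential histograms: at scale s, base = 2**(2**-s), and value v
# falls in the bucket index such that base**index < v <= base**(index + 1).
# (Ignores floating-point edge cases at exact bucket boundaries.)
def bucket_index(value: float, scale: int) -> int:
    base = 2.0 ** (2.0 ** -scale)
    return math.ceil(math.log(value, base)) - 1

# Higher scale -> finer buckets around the same value; lowering the scale
# merges buckets, which is how the bucket count stays bounded.
print(bucket_index(10.0, 0))   # base 2.0   -> index 3: (8, 16]
print(bucket_index(10.0, 2))   # base ~1.19 -> index 13: (~9.51, ~11.31]
```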
E
Yeah — maybe I should make it clear: if you can reset the start timestamp on your histogram — which would happen every period with a delta-temporality histogram — then your bucket ranges and resolution reset too, and you can now begin putting your resolution wherever the distribution lies. And so the problem of taint, then, is defined as not being able to do that.
Essentially: my bucket ranges can't change because I've anchored my start timestamp, and I've seen a tremendous range of values since that start timestamp. So: reset your start timestamp, and that's okay; we just have to sort out the safety question, to make sure you don't lose data. And this is connected with the NaN values that Prometheus once used, and the data-point-missing question, and, of course, ultimately the up-monitoring question — the uptime question — which is connected with this as well. Okay.
H
Well, that sounds reasonable to me. It sounds like there are options to solve this going forward. I think we should take it to the PR to discuss these two remaining issues and, assuming that there's no reason not to, get this merged.
A
Okay, I guess we're fine. So yeah, welcome back, I guess — some of us are coming back from holidays. Let's keep talking offline. Please consider reviewing the PRs, especially the one for the release and the exponential histogram stability discussion. Stay safe, talk to you soon. Thanks a lot, bye.