From YouTube: 2022-12-07 meeting
Description
Open Telemetry Meeting 1's Personal Meeting Room
A: I think we should probably get started; there's a couple of us here and we can discuss the items that are on the agenda, if that's okay with you.
C: Okay, yep, happy to get started. I put the first two items on; I just want to give an update on the Dev Summit. I put a few topics related to OpenTelemetry on the agenda, and we have acceptance on a lot of these things. We just need to put in the work now to make sure that things are shipped. The first thing is: OTLP ingestion is accepted.
C: We want to allow dots, slashes, and all the other characters inside metric names and label names in Prometheus, but we need to figure out an escaping mechanism, because even a minus is a valid character in OTel. If a name is like "http-requests", we need to escape the minus, because it could be either a subtraction or part of a metric name. So we need to find a somewhat sane form of escaping special characters in metric names, and we should be good.
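No escaping scheme had been decided at the time of the meeting; as an illustration of the kind of reversible mechanism being discussed, here is a minimal sketch that hex-escapes any character outside the classic Prometheus character set (the `__xHH__` convention and both function names are made up for this example):

```python
import re

# Characters legal in classic Prometheus metric names.
LEGAL = re.compile(r'[a-zA-Z0-9_:]')

def escape_metric_name(name: str) -> str:
    """Escape characters illegal in classic Prometheus names.

    Each illegal character becomes __xHH__ (its hex code point),
    so a minus cannot be confused with subtraction and the
    mapping is reversible.
    """
    out = []
    for ch in name:
        if LEGAL.match(ch):
            out.append(ch)
        else:
            out.append(f"__x{ord(ch):02X}__")
    return "".join(out)

def unescape_metric_name(name: str) -> str:
    """Invert escape_metric_name."""
    return re.sub(r'__x([0-9A-F]{2,})__',
                  lambda m: chr(int(m.group(1), 16)), name)
```

For instance, `escape_metric_name("http-requests")` yields `http__x2D__requests`, and OTel-style dotted names round-trip cleanly. (A real scheme would also have to handle names that already contain the escape sequence.)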
C: The other thing is: as we tried to add more and more features to the text format, we found that the text format is not super nice, and we kind of decided to freeze the text format as it is; most of the innovation will happen in protobuf. For scraping, we're going to bring gRPC and protobuf back. Having said that, maybe in the future we'll add all of these features to the text format too, but it's not guaranteed.
C: So that's something really nice, and the reason we kicked this off is specifically to add start time. We currently do start time as a separate time series, and we want to make it part of the metadata. That complicates things inside the text format, and we realized we want to do a lot more things, and the text format is not the right place to innovate. We will always support the text format because of readability and things like that, but gRPC and protobuf is where the innovation will happen.
A: So right now our spec says that exporters should add the "created" time series. Do you think that's still the correct advice for the OTel Prometheus text format?
C: I think we should remove that. Okay, let me go back and see, yeah. Okay, that's a good action item. Let me make a note of that and see.
C: Yes, I agree.
C: An interesting bit for us: OpenMetrics initially started off as a separate thing from Prometheus, but it has ended up revolving around Prometheus, and Prometheus is one of its main consumers. So we are going to bring it inside the Prometheus umbrella and probably sunset the OpenMetrics project.
C: So we have consensus there; we just need to do it. Yep. And then we kind of looked at what kind of Prometheus metrics are being produced by OTLP.
C: We looked at the OTLP output of the OpenTelemetry demo and a few auto-instrumentation things, and the main concerns are the underscore-total addition and the unit inclusion. For underscore-total, I think we basically need to go to the different language SDKs and say: please add _total, make sure you add it. But units are a little more interesting and an involved discussion, and that's kind of the next part.
A: On the underscore-total note — I suppose I could have added this to the agenda, but I actually did an audit of all of the Prometheus exporters. I'll just add it.
A: I opened about 40 issues in total, but I did open a couple for _total in particular, which I'll link here.
C: Yep. There's also the thing about units: OpenMetrics has a "should" for base units, and I kind of want to see base units everywhere too, but it looks like that boat might have sailed. I want to see: can we at least try to use seconds as much as possible?
A: So, just for context for anyone else who's listening: I opened an issue saying that we should remove that recommendation, given that we can't do conversions for all metric types, and I think that's probably worth discussing here. My understanding is we sort of have two options, depending on the metric kind — say a counter versus an exponential histogram. We can decide that for some of those metric types we're able to convert them from milliseconds to seconds, whereas for others, like exponential histograms, that conversion is not possible — or at least, I think Josh may contradict me and say that it's possible but lossy — but basically we can't actually do it for some. So I feel like it's better to be consistent, but I also understand that Prometheus is a lot easier to use if everything is in seconds. I'm curious if there are other opinions.
C: Now that I have Josh here: in the discussion around changing the HTTP duration metrics to seconds, there was also this point that, depending on whether you typically measure in milliseconds or seconds, picking the unit also helps with accuracy if we only stick to 160 buckets. Is that still true?
B: I tried to summarize that — there's a discussion about it. You're going to change precision, I guess, not accuracy: you're going to move boundaries and introduce error that wasn't there, and that's really what people are objecting to. There was an example where, if you had measurements between one and two, for sure that's going to be, say, four buckets. If you shift that by a thousand, it's going to be five buckets or so, because the edges don't line up the same; it's just a little different.
B: It still has the same worst-case accuracy, but it's just not the same — the data has changed, is what's happened.
C: No, I'm just saying: essentially, if most of the data falls between 0.5 and 2, or if most of the data falls between 500 and 2000 — basically, forget the unit — in my mind, I would think that 0.5 to 2 would be more accurate than 500 to 2000.
C: We are not converting from milliseconds to seconds; somebody declared a histogram, and then they said, you know —
B: Sorry — yeah, I hear what you're saying. I think the way to think about this is that in an exponential histogram the buckets are always spaced by the same relative amount. So you have the same number of buckets between 0.5 and 2 in that scenario as between 500 and 2000, and you shouldn't have any more or less data quality. It's just that when you convert between them, you can only lose quality — that's what we're pointing out.
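The bucket-spacing point can be made concrete with the OTLP base-2 exponential-histogram index formula; `bucket_index` below is just an illustrative helper, and the specific values are chosen to show that scaling samples (e.g. seconds to milliseconds) does not map buckets one-to-one:

```python
import math

def bucket_index(value: float, scale: int) -> int:
    """OTLP exponential-histogram bucket index for a positive value.

    Bucket upper bounds are base**(index+1) with base = 2**(2**-scale),
    so index = ceil(log2(value) * 2**scale) - 1.
    """
    return math.ceil(math.log2(value) * (2 ** scale)) - 1

# At scale 3 there are 8 buckets per power of two. Two measurements that
# share a bucket when expressed in seconds (1.1 and 1.12) land in
# different buckets when the same data is expressed in milliseconds
# (1100 and 1120), because multiplying by 1000 is not a whole number of
# bucket widths — which is why the conversion is lossy.
```

This is the sense in which rescaling "moves boundaries and introduces error": the relative bucket width is unchanged, but the edges no longer line up with the original data.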
B: I was trying to find an example that we're all familiar with in our heads. I've seen cases where you might be thinking of, for example, OpenHistogram — an example of a fixed-resolution histogram — but it doesn't even have this. We've seen problems where, if you take an OTLP histogram the way it stands today and you have all your measurements between, let's say, 1100 milliseconds and 1150 milliseconds — well, those numbers are very close together in terms of the ratio.
B: You just have a really tight distribution, and in OpenTelemetry you can turn the scale up and shift your index to get good resolution for a very small range — between a thousand and a thousand fifty or whatever — but the scale and the indexes you're dealing with are going to be pretty large. And what can happen is that you come in with this OTLP histogram —
B: But that doesn't happen if you can change your scale and resolution.
C: Does that make sense? Yep, it does. That brings me to the next question, then: what is the main concern that is stopping OTel from switching to seconds instead of milliseconds?
C: I mean, essentially — I started with all measurements, but let's stick with HTTP, just HTTP. It is in milliseconds today because it started off with milliseconds, and the vibe I'm getting is that we can switch to seconds, but we also need to switch the SDKs' default buckets to seconds. But beyond that, do we have other points?
B: Yeah. The way it's written in OTel now, you have a choice, and the choice for HTTP is bad because we've got two different conventions. I think the right thing to do would be for OpenTelemetry to switch, but there's resistance to that, partly because of how people measure — I don't know. I don't think my opinion is going to help us here, unfortunately; I'm trying to stand back and not get in the way. I don't want to make trouble.
B: And one thing we also did, which I think is not helpful enough, is to say that when we're dealing with an explicit histogram on a floating-point instrument, we shift the default bucket boundaries to be exactly the Prometheus defaults, and when it's an integer instrument, we shift them by exactly a thousand, just for this case. In other words, if your unit is milliseconds, your default histogram buckets are scaled accordingly.
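Explicit-bucket histograms are the easy case in this discussion: unlike exponential histograms, they can be converted between time units exactly, since the boundaries and the sum scale together while per-bucket counts stay put. A minimal sketch (the function name is made up for illustration):

```python
def convert_explicit_histogram_ms_to_s(boundaries, counts, total_sum):
    """Convert an explicit-bucket duration histogram from ms to s.

    Each boundary and the running sum scale by 1/1000; per-bucket
    counts are unchanged, so no samples move between buckets and
    no precision is lost in the distribution itself.
    """
    factor = 1e-3
    return ([b * factor for b in boundaries],
            list(counts),
            total_sum * factor)
```

This exactness is why a blanket "convert to base units" rule works for counters and explicit histograms but breaks down for exponential histograms, as discussed above.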
C: I mean, no worries. It's just that the discussion is still pending, and I kind of don't see a good reason for milliseconds yet. And then we are also making the decision that maybe we should drop the base-unit suggestion, because we can't stick to seconds everywhere. I'm kind of trying to understand: do we already have consensus on milliseconds or not? That's the question.
C: The only thing: if you have consensus, I'm okay with it. But if we don't have consensus, I still want to try to advocate for seconds where possible.
B: I feel like we don't have enough people in the room. I know Josh Suereth had mixed feelings last time we discussed this in the spec SIG call. That's my concern: there are many more opinions than, you know, the five here. But it's also true that there's less pain in switching anything in OTel than there is in switching anything in Prometheus.
B: I still would think that if I had an exponential histogram and I was measuring things that were measured in seconds, I would choose seconds, and if I was measuring things usually measured in milliseconds, I'd choose milliseconds — and then never change again. But for these conventions, that cavalier attitude of mine can't solve it.
A: I think the question I'm trying to figure out — and that maybe this group can solve — is basically: if someone chooses to use a non-base unit for whatever reason, because it's still configurable in OTel, should we, in the Prometheus exporters, always convert to base units, should it be configurable everywhere, or should we not convert to base units at all?
A: What should our approach be there? I mean, I am personally in favor of using seconds, but I still don't understand — or I guess I don't see — why we shouldn't switch to them, other than that's what we have right now. But I think we can maybe decide what the Prometheus exporter approach should be.
B: For the most part, my feeling — and my assumption — is that people are using like 10x the amount of resolution they need with these histograms, and a little conversion is not going to hurt them. If you're in a place where a little conversion is going to hurt you, you had too much error to start with. That's kind of what I'm saying: if you can get to the point where there's not much error, a little loss is okay. So it would probably be okay to tell the user of the exponential histogram:
B: "We are going to shift your units; it will introduce additional error, and maybe there's a way for you to avoid it." I also wonder now if there's a way to do something more general, by saying the OTel views could be configured with unit preferences — and conversions are really not very hard for a lot of these units; you know, multiplying by a thousand is no big deal.
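The "known conversions" idea could be sketched as a small lookup table of unit pairs that refuses anything it doesn't recognize. None of these names come from the actual OTel SDK; this is only an illustration of the shape of such a feature:

```python
# Hypothetical table of known conversion factors between UCUM-style
# unit strings; unknown pairs are rejected rather than guessed.
UNIT_CONVERSIONS = {
    ("ms", "s"): 1e-3,
    ("us", "s"): 1e-6,
    ("KiBy", "By"): 1024.0,
}

def convert(value: float, src: str, dst: str) -> float:
    """Convert a sample between two known units, refusing unknown pairs."""
    if src == dst:
        return value
    try:
        return value * UNIT_CONVERSIONS[(src, dst)]
    except KeyError:
        raise ValueError(f"no known conversion from {src!r} to {dst!r}")
```

Refusing unknown pairs is the conservative design choice: it keeps the exporter from silently emitting mislabeled data when a convention it has never seen comes through.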
B: Could we build in, for a certain set of known units, the conversions that we want, and then let there be a legacy mode where maybe the OTLP is unchanged? And then maybe we do start to shift OTel toward the same units that Prometheus would like, by accommodating both sides and building bridges and compatibility modes.
B: That sounds a little bit better to me. I think we always kind of had a placeholder in the OTel data model saying the unit is an identifying part of the metric, but it's also one whose meaning we literally know, and we can change it. So the data model doesn't force you to say these are absolutely different metrics, because you could imagine doing a conversion to fix the conflict. So we might just build that out.
A: It would be really cool if it was an exporter preference. I suspect there are vendors out there that prefer milliseconds or something as well, so that would give them an option.
D: So I've started implementing an OpenTelemetry remote-write receiver, but it seems that it's not that easy, due to the nature of the remote-write protocol: basically, there are no guarantees that time series of the same metric or metric family are going to come in the same request, and the same holds for samples as well.
D: So I stated the problem in a document and proposed some solutions. The first solution is really complex and limited, and another one actually requires some changes — or rather, circles back to some existing proposals about a structured remote-write protocol. So I would like to get input on, or reviews of, this document, and maybe there are some other solutions or options here.
C: Related to that: the Prometheus team is looking into improving remote write, and just using OTLP as remote write v2 has been suggested. But the problem is, we found optimizations in the existing remote-write format that cut bandwidth by like 40%, just by using a simple symbol table and not repeating the label names and values. For example, instance="something" could be repeated in every single metric there — so why are we repeating that? That's a very lucrative optimization, because we do know that hosted Prometheus
C: customers are worried about their egress costs, and if we can cut down on egress costs, then we would most likely elect to do that over using OTLP. This is not very related to your document, but we are looking into improvements, and that's something I tried to advocate for — and that was the pushback I got.
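The symbol-table optimization described above can be sketched as simple string interning: every label name and value is stored once, and each series refers to the table by index. This is only an illustration of the idea, not the actual remote-write wire format:

```python
def intern_labels(series_labels):
    """Encode each series' labels as indexes into a shared symbol table.

    series_labels: list of dicts like {"__name__": "up", "instance": "a:9090"}.
    Returns (symbols, encoded): every label name/value string is stored
    once in `symbols`, and each series becomes a flat list of
    (name_index, value_index) pairs.
    """
    symbols = []   # the shared string table
    index = {}     # string -> position in `symbols`

    def ref(s):
        if s not in index:
            index[s] = len(symbols)
            symbols.append(s)
        return index[s]

    return symbols, [
        [(ref(name), ref(value)) for name, value in sorted(labels.items())]
        for labels in series_labels
    ]
```

With thousands of series sharing the same instance, job, and metric-name strings, the repeated payload collapses to small integers, which is where the bandwidth savings come from.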
B: I haven't seen the document yet, but I did have a discussion with my own back-end team, probably two years ago, about whether we could just receive Prometheus remote write, and I think we ran into a lot of the same problems that you may have seen. I'll check out your document. I wanted to answer Goutham's point from a second ago with some enticement, or some news: I've been focused this quarter and last almost entirely on an integration with Apache Arrow to do column-based compression of OTLP.
B: The reason I'm doing it is because my vendor wants traces to be far more compressed using OTel collector pieces, but the collaborator I'm working with from F5 really wanted all of OTLP in it. So we've been doing this conversion to and from Apache Arrow that handles OTLP metrics, traces, and logs — and I was after traces.
B: So it's been second or third priority for me to see the metrics side work, but it is going to really help with the problem you just described, given the way this is being built right now. I can put a link in the notes or the chat, but there is an OTEP describing it.
B: Phase one of this project would be to build Go support into the collector exporter and receiver for OTLP directly, so you can configure an OTLP exporter — sorry, an OTel collector — on the edge of your customer network, configured for the Arrow bridge mode.
B: You configure an OTel collector with an OTLP Arrow receiver on the south side of your vendor, and then let them do column compression between the two nodes. We're expecting, like, 90% compression, because the instance label is then only transmitted once, and every data point that has exactly the same schema will refer back to a schema template that was recorded once, and so on. Of course, it's a technology lift — gRPC streams are new and dangerous to me... oh, that's not true.
B: That's just me being doubtful. I have it tested, and we're starting to do performance measurements right now. So this is coming.
B: Yeah, I'll get that OTEP for you — there it is. Anyway, I'm encouraged; it seems quite likely to yield a number of improvements across all of our telemetry.
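The mechanism behind the column-based compression mentioned above is essentially dictionary encoding, which columnar formats like Apache Arrow apply per column. A toy sketch (not Arrow's actual implementation) of why a repeated instance label costs almost nothing in a columnar layout:

```python
def dictionary_encode(column):
    """Dictionary-encode one attribute column.

    Repeated values are stored once in a dictionary, and the column
    itself becomes small integer indexes into that dictionary — the
    same trick columnar formats such as Apache Arrow use per column.
    """
    dictionary = []
    seen = {}
    indexes = []
    for value in column:
        if value not in seen:
            seen[value] = len(dictionary)
            dictionary.append(value)
        indexes.append(seen[value])
    return dictionary, indexes
```

A column of per-data-point "instance" attributes with heavy repetition collapses to one dictionary entry plus one small index per row, and a general-purpose compressor then squeezes the index run further.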
C: Perfect. Yeah, going back to the remote-write receiver: the problem is that Prometheus remote write, one, splits histogram samples across different requests and, two, doesn't send any metadata along with the metric itself — it comes separately in a periodic push request. So building OTLP without metadata and without all the histogram samples together is a pain.
B: So, going back in my memory two years ago or so: I was writing some specs that helped OTel define why we should have a start time and what you can do when you don't, and one of the things you almost certainly don't have in the PRW format is that start time.
B: So I think one fine approach — I guess the sort of least support — would be to pass through the data without start times, and I think that would at least bridge to Prometheus correctly, assuming the PRW exporter doesn't get tripped up by it lacking one or whatever. And then, I guess, there's a slightly more mature version of what we've talked about.
B: We've also talked about this for the Prometheus receiver, which is to do slightly more correct start-time handling — I guess we should come up with names for these things — but basically: if I've never seen this instance and job before, I can give it "now" as a start timestamp and use the current value as its reset value.
B: Then, if it's a counter, vendors that deal with start times will get the correct rate; they won't double-count the restart, and so on. After that, if the instance resets on its own and you observe it, you can make some inferences about the correct start time: I've seen you reset your counter, so I'm going to give that series a new start time, which is after the last recorded point. That will help consumers that do want start times have correct information. Right now, the current OTel behavior — the Prometheus receiver — I think... I'm not sure how it works.
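The start-time handling described above can be sketched roughly like this — a toy model, not the actual Prometheus-receiver code:

```python
class StartTimeTracker:
    """Infer cumulative-counter start times per series.

    First observation of a series: record its timestamp as the start
    time and keep the observed value as the reset baseline. A later
    value that drops below the previous one implies a counter reset,
    so the series gets a fresh start time just after the last
    recorded point.
    """

    def __init__(self):
        self._state = {}  # series key -> (start_time, last_time, last_value)

    def observe(self, key, timestamp, value):
        if key not in self._state:
            start = timestamp
        else:
            start, last_time, last_value = self._state[key]
            if value < last_value:      # counter reset detected
                start = last_time
        self._state[key] = (start, timestamp, value)
        return start
```

Consumers that compute rates from (start_time, value) pairs then avoid double-counting the restart, since each reset moves the start time past the last point seen before the drop.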
B: I get confused every time I go in there, but somebody remembers whether it does what I described — the initial value and the first reset time — and I think it does the correct stuff I've described. My vendor is pissed off about how they do it.
B: The thing that I like, which is the first point: an unknown reset keeps the value that it had, which is exactly what Prometheus expects, instead of a zero value — which is an optional thing that, for example, the old Stackdriver sidecar would do, and the forks of it did the same thing — so that you can do a true reset. That's written up in the data model as well.
C: Yep, but the problem is not even the start times: it's that we don't know the units of the data we are getting, and if there are 20 buckets in a histogram, they can all come in 20 different push requests for the same timestamp.
B: That's the big one, and for us that's kind of a showstopper for receiving that data. That was actually, I'd say, almost a stronger reason why we couldn't deal with PRW than the start times were. Not that we couldn't solve that problem on our back end — it's just expensive; we don't want to pay that additional synchronization cost to do it.
B: But on the issue with type: I tracked the issues in the Prometheus repository for a couple of years. I remember one of them was sent by Rob Skillington; it was trying to get always-on metadata in the PRW export, and it was rejected — considered too expensive, I think.
B: There was another, alternative approach at the same time, which was a configuration option for periodic metadata export — these were in the six-thousands and seven-thousands of the issue numbers. Rob's didn't go in because it was too expensive, but someone else's did, and I think it's probably not on by default: it means that every five minutes it will output metadata in the PRW stream using the new protocol buffer message. That might be something you could use.
C: As a follow-up to that, David: somebody from Google did start a design doc.
C: Yes, and it is kind of accepted; the consensus was that it's trying to do two different things, and we just need to split it into two design docs, one for transactional and one for structured. But I don't think they are working on Stackdriver, or on that, anymore. So if somebody from Google could pick that up, that would be nice. Okay.
A: I can reach out to the team, yeah. They definitely were interested; the person who wrote the doc, you're right, has moved on. I think the team is trying to decide what their next steps are, but yeah, I'll reach out to them.
A: Actually, sort of on that note: I know that at least all of us in this room are very much in the world of "maybe we can make OTLP and the Prometheus formats sort of interchangeable, or at least very easy to map back and forth." Do you think that improving Prometheus remote write to solve all the problems we've listed is still something that's worth doing, from the community's perspective or from your perspective?
C: I mean, 100% — we do want to improve this. The problems that the PRW receiver is facing — there are other use cases having the same problems, and we do want to fix them. It's just that the two Prometheus remote-write maintainers have been busy, but Callum now has some free time and he has solid experience. I have pointed him to the Arrow stuff, and I am going to try to push for an OTLP Arrow representation as remote write v2 if it performs well.
B: So I have some questions; I don't have answers right now. First of all, I'm new to Arrow and I'm not in that community — it centers around Rust and Java more than it does Go, where I'm kind of native — so I'm still learning. But one of the things I'm picking up is that Arrow really is designed as an in-memory processing data library; it's not actually optimized for persistent storage.
B: There are certain kinds of compression you just don't get, because someone else is going to do that, or you've already got a data store, and so on. So it's not actually meant as a storage format; it is meant for in-memory processing, and it also works pretty well as a transport. But I'd like to see clear reasoning for why PRW is what it is, because OTLP is clearly trying to be a transport protocol, and in OpenTelemetry we have no storage technologies.
B: Essentially, Prometheus is a storage technology, and it has a reason for Prometheus remote write to be what it is — and what is that reason? I think the answer is probably something like: it's Cortex. In the back of my mind, I'm thinking of Cortex when I'm talking about PRW. For this reason, if there's a case for a Cortex v2 or whatever the future is, it's probably not just an optimized, point-by-point "here are some metrics."
A: Cool, I think that's all the agenda items, unless anyone has anything else they'd like to raise. Did you feel like you got what you needed from this?