From YouTube: 2022-12-01 meeting
Description: OpenTelemetry Prometheus WG
B
I guess, since Michael from the Go team has already joined, we could maybe get started by giving Michael a little background on what we're doing in this group. Maybe by that time Ryan will have joined, and then Michael can share some thoughts from a runtime maintainer's perspective on crazy people trying to come up with their own profiling formats in the OTel space.
B
So hey, hi Michael. The short summary is: this group is trying to standardize profiling under the OpenTelemetry label. One of the ideas is that, as with everything in OpenTelemetry, once you've put profiling in place somewhere, you should not be tied to the vendor that gave you that component; you can switch between vendors, which is great for users. Another aspect is that people want to correlate signals. So if you have profiling data, it would be really nice if it could connect to your distributed traces.
B
So
you
can
understand
why
maybe
a
request
was
slow
and
yeah.
In
general,
the
open
Telemetry
project
has
evolved
from
just
dealing
with
distributed
tracing
to
metrics
logging.
Basically,
all
kinds
of
observability
data
I
think
is
falling
under
the
envelope
and
profiling
with
sort
of
a
natural
next
Target.
So
a
group
of
people
get
together
trying
to
standardize
this
and
yeah
we're
we're
interested
in
getting
sorts
from
the
go
runtime
perspective.
B
We obviously looked at the pprof format a lot and discussed it, and the group thinks that pprof is really good, because it's self-contained and a relatively easy format compared to some other things, for example JFR. It's also pretty standardized and documented, but many here feel that it's lacking in some dimensions, for example for the continuous profiling use case, where you have to send data continuously.
B
The people at Elastic (and I think others now) are starting to de-duplicate, or sort of send stack traces only once: you take a stack trace, you hash it, and then you send it out only once, and after that you can refer to it by its hash ID. There are all kinds of optimizations like that which currently seem difficult with pprof. Timestamps are another thing: pprof doesn't really support timestamps.
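A minimal sketch of the de-duplication idea described above, with hypothetical types; real implementations (Elastic's, for example) differ in the details:

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// Frame is a hypothetical resolved stack frame.
type Frame struct {
	Function string
	File     string
	Line     int
}

// hashStack derives a stable ID for a stack trace so the full
// frames only need to be sent to the backend once.
func hashStack(frames []Frame) [32]byte {
	h := sha256.New()
	for _, f := range frames {
		h.Write([]byte(f.Function))
		h.Write([]byte(f.File))
		var line [8]byte
		binary.LittleEndian.PutUint64(line[:], uint64(f.Line))
		h.Write(line[:])
	}
	var id [32]byte
	copy(id[:], h.Sum(nil))
	return id
}

// sender tracks which stack IDs the backend has already seen.
type sender struct{ seen map[[32]byte]bool }

func (s *sender) send(frames []Frame, value int64) {
	id := hashStack(frames)
	if !s.seen[id] {
		s.seen[id] = true
		fmt.Printf("send full stack %x (%d frames)\n", id[:4], len(frames))
	}
	// Subsequent samples reference the stack by ID only.
	fmt.Printf("send sample: stack=%x value=%d\n", id[:4], value)
}

func main() {
	s := &sender{seen: map[[32]byte]bool{}}
	stack := []Frame{{"main.work", "main.go", 42}, {"main.main", "main.go", 10}}
	s.send(stack, 100)
	s.send(stack, 250) // stack already known; only the ID goes over the wire
}
```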
B
You can use labels for that, but it kind of sucks in many ways. So this group thinks that pprof is maybe okay and we should support it because it exists, and similarly for JFR: we have to support it because Java is not going to move off of it. But maybe we want to develop something new that is more efficient for this use case of continuous profiling. And one of the hopes of the group was: hey,
B
If
we
come
up
with
something
really
nice,
maybe
we
can
petition
people
who
work
on
runtimes
to
natively,
emit
this
new
format
so
that
we
don't
have
to
deal
with
conversion
costs
of
taking
this
data
in
P
Pro
for
JFR
and
then
converting
it
to
the
hotel
thing,
and
if
the
runtime
did,
that
natively
would
be
the
least
overhead
which
of
course,
people
are
very
interested
in
here
as
well.
B
I think that's kind of the summary. Did I miss anything? Does anybody here think something was left out that's important context?
B
Yeah, Michael, that's basically what this group is trying to do. And I guess the question for the Go project would be: if some new format came along, I don't think the Go runtime not supporting pprof anymore is an option, but is the Go runtime open to outputting profiling data in other formats in the future?
B
Or
would
you
prefer
this
group
to
not
come
up
with
ideas
like
that,
and
maybe
the
go
project
has
yeah
their
own
thoughts
and
a
succession
for
people
in
the
long
run.
D
I'm not sure, so I guess I'll unwind the stack. I don't think we have any plans to do anything different or extend pprof in any way in the near term; there are no plans for anything like that. We are sort of doubling down on it a little bit, because that is the format we plan to use for PGO, profile-guided optimization.
D
Another format might be better, but the problem is that getting CPU profile samples and shoving them into pprof is so ubiquitous that it basically outweighs all other options. There's also the fact that some of the information you really want for this you can't really get in a VM, like last branch records. I think eventually something like that would be great, but basically no virtualized environment supports getting last branch records.
D
So
we
are,
we
are
kind
of
going
all
in
on
it,
and
and
the
people
working
on
pgo
have
definitely
expressed
that
they
really
like
the
idea
that
a
profile
guided,
optimization
profile
is
just
a
p
Prof
profile.
The
exact
same
thing
that
you
get
when
you're
just
trying
to
debug
your
program,
because
people
are
familiar
with
it.
They
know
how
to
use
it.
We
have
a
lot
of
tooling
for
it,
and
people
have
built
a
lot
of
tooling
on
top
of
P
Prof.
D
That being said, I think emitting a new format would be kind of a hard sell. Go has always been slow and careful with things, and so I feel like, if this format existed and eventually became the thing that everyone is using, and nobody wants to use pprof anymore, then we would switch. But I don't think we would be the first to start emitting it, if I'm being completely frank.
D
I think that's where we're at. I don't think I have anything more intelligent to say on what would be a better alternative, but from the Go runtime perspective, that's where we're at. I realize I'm kind of speaking for the team here; I will see where they're at, but I don't think we'd be adopting a new format anytime soon. So yeah, it's not a great answer.
D
Unfortunately, I don't really know what a better alternative is, because I understand that there are limitations to the pprof format, like you said.
B
In that case, the Go runtime wouldn't have to worry about it. But it's really valuable for us in the group to understand what the view is on maybe adopting new formats in the runtime; that's important to hear. Alexei also has a question.
D
That is a good question. We have definitely gotten feature requests like that; this is not the first time I'm hearing about timestamps, for example. One way we're kind of working around that is that execution traces now include CPU profile samples, and so you can kind of get a timestamp that way. But it's definitely not great, because you can't actually parse those execution traces today; we might change that, but today that's where it is. Aside from that, I think the sort of scalability issues, the issues of having to send the entire profile every single time, or all of the stack information in the profile every single time...
D
That, to me, is somewhat new; I don't remember hearing anything like that. As far as other limitations in the format go, I honestly can't think of anything. It's worked pretty well for us. That being said, I don't think we've ever really had a very exotic use case for it.
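A minimal sketch of the workaround Michael mentions, assuming Go 1.19 or later, where CPU profile samples are recorded into the execution trace while both are active:

```go
package main

import (
	"os"
	"runtime/pprof"
	"runtime/trace"
	"time"
)

func main() {
	tf, _ := os.Create("trace.out")
	defer tf.Close()

	// Start the execution trace first...
	if err := trace.Start(tf); err != nil {
		panic(err)
	}
	defer trace.Stop()

	// ...then enable CPU profiling. While both are active, CPU
	// profile samples also land in the execution trace, which
	// gives each sample an (approximate) timestamp.
	pf, _ := os.Create("cpu.pprof")
	defer pf.Close()
	if err := pprof.StartCPUProfile(pf); err != nil {
		panic(err)
	}
	defer pprof.StopCPUProfile()

	// Do some work worth sampling.
	busy(500 * time.Millisecond)
}

func busy(d time.Duration) {
	deadline := time.Now().Add(d)
	for time.Now().Before(deadline) {
	}
}
```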
C
Yeah, timestamps are one thing that people mention sometimes. Another thing people mention is that if labels are used extensively, then having to duplicate the stack for every label combination can feel redundant, and maybe also storage-expensive and wire-size-expensive, even if compression is on. I wonder if you've ever heard something along those lines.
D
No, I don't think we've ever really come up against that, or had that feedback given to us. I feel like most of the time, the way we treat profiles is that it's more of a manual thing rather than an automated thing. Maybe that should be different; I'm not saying that's the right way to do it, but that's how it's been treated for a long time, and so I don't think we've really paid much attention to that.
D
As a result, because of the way we describe these tools to our users, in the end they kind of end up just, you know, opening the port, adding the HTTP handler for pprof, and then when they want to find out something about their server, they go and take a profile. And they don't really care what that looks like, because they're just downloading it to their laptop or whatever and then looking at it, which is a very different use case from "I'm periodically sending profile messages to a server". That is definitely not an angle we have been thinking about.
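The usual workflow Michael describes, as a minimal sketch using the standard net/http/pprof package:

```go
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof/* handlers on the default mux
)

func main() {
	// Expose the pprof endpoints; users then fetch profiles ad hoc, e.g.:
	//   go tool pprof http://localhost:6060/debug/pprof/profile?seconds=30
	log.Fatal(http.ListenAndServe("localhost:6060", nil))
}
```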
D
Maybe I should take a step back, because now that I say that, there are these two sort of separate use cases. For the manual debugging one, where I just want to see what the server is doing and get an idea of costs, or where time is going, I don't think we're going to move away from pprof. That doesn't mean we don't have leeway to do it;
D
it's just, I think, going to be kind of a hard sell, because we already have so much built on top of the pprof format. Unfortunately, I don't know if I have much more intelligent to say about it.
C
I think my hand is still up from earlier; I wish it would go down automatically.
B
My question is maybe around: if the Go project is not so keen on supporting a new format, which makes total sense I think, would there be a future where profiling data is exposed through structured APIs? I know that's also controversial, but then at least you don't have to deal with the format. If there's a low-overhead API to get the data, maybe a callback for each event or something, then it could be sent out in any way this group might come up with.
D
Honestly, I think the way things are going right now, that's less controversial than you might think. I think that is a much easier sell than supporting a different format; third-party tools can then just shove it into whatever format they want, and we can just expose whatever the format needs. It's still going to be quite a bit of work, because designing APIs is never easy, but I do think it would actually be easier than trying to say "oh, let's support another profile format" at this point in time.
C
It's not a question, it's more like a remark: what we found internally at Google, at least. I've been reviewing many of these; usually when someone wants to collect new kinds of profiling data, we ask that it be in the pprof format internally, so pprof is used a lot. And one thing about the early discussions on how the data is going to be represented in the pprof format: it's not always necessarily about the format itself.
C
One thing that is good about having a format is that it initiates some early discussions: okay, which metrics are cumulative? What do you actually want to break those metrics down by? What are the units for the metrics? I don't really have a question; it's more that a common format gives you a language, and that is often one thing that is very useful. I was thinking that if you have a runtime API, you would still have to remember to ask those questions. It's probably fine to ask them in some other way, but yeah.
D
I completely agree with everything you said. I think the direction we would want to move is toward having runtime APIs for things, and you're totally right: they have to ask all the same questions. We do have some experience with building these APIs; we've replaced some of the older runtime metrics APIs with the new runtime/metrics package, which is string-based and also forces everything to have a unit, and to specify whether it's cumulative and stuff like that, and makes all of that available. So I think you have a point that you almost have to redo that work, and that's kind of the work that goes into the API design.
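A minimal sketch of the string-based API Michael refers to, Go's runtime/metrics package, where each metric name embeds its unit and its description says whether it's cumulative:

```go
package main

import (
	"fmt"
	"runtime/metrics"
)

func main() {
	// Each metric is identified by a string name that embeds its
	// unit, e.g. "/gc/heap/allocs:bytes", and its description
	// declares whether the value is cumulative.
	for _, d := range metrics.All() {
		fmt.Printf("%-45s cumulative=%v\n", d.Name, d.Cumulative)
	}

	// Reading a specific metric.
	samples := []metrics.Sample{{Name: "/gc/heap/allocs:bytes"}}
	metrics.Read(samples)
	if samples[0].Value.Kind() == metrics.KindUint64 {
		fmt.Println("heap allocs:", samples[0].Value.Uint64(), "bytes")
	}
}
```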
D
Right, because you definitely don't want to miss these genuinely very important questions, and that's part of what makes designing an API for this hard. Honestly, I wouldn't expect this to just be "file a proposal and, great, it's done", unless it's something really, really simple. And I don't think it could possibly be simple, because there are so many questions here.
B
Okay, if there are no more questions, then Michael, you're free to stick around and listen to the rest of this, but you might also want to get on with your day. We really appreciate that you took the time to talk to us and give your thoughts on the future of the Go runtime when it comes to profiling. Super awesome, thank you so much.
B
All right then, the second thing on today's agenda would be to discuss the architecture proposal that I sent out. If anybody else has other things they want to get on the agenda, maybe just go to the Google Doc and add them. I guess I can also share it in the Zoom chat real quick for anybody who doesn't have it open. Is there anything before we move on?
B
All right then. A few people here have commented on the proposal. Maybe the biggest question we have is how anything in that proposal would be perceived by the OpenTelemetry SIG members. Morgan, I don't know if you've seen this proposal that I'm talking about, but maybe you have thoughts on how much complexity can be pushed into a Collector, or whether the Collector would be a place to do conversions?
F
No, but I've been listening in. Okay, so typically, I don't know if there's a hard and fast rule here, because, like, take metrics for example: we have receivers for OTLP which require no conversion, since that's the native format used inside the Collector, and then we have receivers for Prometheus which go and convert the format to something else.
B
One specific architectural choice we could make is that we could potentially buffer up data. We have multiple names for it, stateful or streaming data, where basically you don't get a whole profile in one blob that you convert and send on; you get individual events, and you have to buffer a lot of them up, because some of them are the stack traces and some of them are the symbols.
E
Okay, actually, sorry, this is Josh. Yeah, it's been a long time; hi Morgan. Anyway, the thing that I noticed was that there's actually a communication between the client and the Collector, right, where it negotiates what the most optimal format is: when the client connects to the Collector, it's supposed to tell it "here are my most optimal formats". That's new. That's the thing you want to push through, and the thing you might want to talk to the Collector SIG about. Oh, actually, my camera's not on, apologies.
E
OpenTelemetry, yeah, same thing. So that's the bit, that's the component that I would actually go talk to the Collector SIG about and see how they feel about it: that negotiation. I personally think it makes a hell of a lot of sense and it seems to fit well, but it would be new to the OTLP protocol, the fact that there's some kind of negotiation that happens about what the acceptable formats are, as opposed to just "we accept anything of format X", right.
E
So that would be new, and that's the bit that I would actually start shopping around and socializing, first with the Collector folks and then also the TC. My fear there is: what does this architecture look like if someone doesn't want to use a Collector? I think you should have an answer to that as well in the proposal. But that's my two cents.
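To make the negotiation idea concrete, here is a purely hypothetical sketch; these types and the handshake are illustrative only and are not part of OTLP. A client advertises the profile formats it can emit, in preference order, and the collector or backend picks one:

```go
package main

import "fmt"

// ProfileFormat is a hypothetical identifier for a wire format.
type ProfileFormat string

const (
	FormatPprof    ProfileFormat = "pprof"
	FormatJFR      ProfileFormat = "jfr"
	FormatStateful ProfileFormat = "otel-stateful" // hypothetical streaming format
)

// Hello is what a client might send on connect: formats in
// preference order, most optimal first.
type Hello struct{ Accepts []ProfileFormat }

// negotiate picks the first client format the server supports,
// falling back to a mandatory baseline so the handshake never fails.
func negotiate(h Hello, serverSupports map[ProfileFormat]bool, baseline ProfileFormat) ProfileFormat {
	for _, f := range h.Accepts {
		if serverSupports[f] {
			return f
		}
	}
	return baseline
}

func main() {
	server := map[ProfileFormat]bool{FormatPprof: true, FormatStateful: true}
	chosen := negotiate(Hello{Accepts: []ProfileFormat{FormatJFR, FormatPprof}}, server, FormatPprof)
	fmt.Println("negotiated format:", chosen) // negotiated format: pprof
}
```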
F
Right, but those exist, right? If you send metrics and traces to Lightstep or Splunk or Honeycomb or, I think, New Relic now, and various other companies, you can technically send them directly from your instrumentation via OTLP. I think a lot of people use the Collector as an intermediary because they're already using the Collector to capture host metrics or Kubernetes metrics and various other things; it's already there, and you can do some pre-processing on the Collector, which is nice. But no, there are plenty of people who just stream OTLP directly to a SaaS backend or other intermediaries. I know Cribl, I think, is a processor that can receive OTLP.
E
So I'm suggesting that this component, where you have the back and forth of "what protocols, what different formats do I support", would actually be encoded in OTLP, which means it impacts actual backends. Backends would have to do the same advertising, and it could be that in practice what happens is everyone uses a Collector to do protocol conversions and backends only say they support one type. Maybe that's what happens going forward.
E
But what I'm saying is that that aspect of your proposal is, I think, the most contentious, and the thing to go start talking about with folks in the Collector SIG and folks who own the protocol. I'm one of those folks, so I'm here, but I think there are more people who need to see this and shepherd it through.
B
Can you maybe drop information on how to talk to the rest of that group in the meeting notes? Awesome, thanks. Alexei? Yeah, good question.
C
Yeah, the practical question there is in how many languages the conversion needs to be implemented. If it's just the Collector, then it can be written once. But if you need to support multiple clients, then it would be more code to support, for example, a JFR-to-pprof conversion in, like, five different languages. I wonder how people deal with this today: if the client needs to do the conversions or export to different backends, does that mean there is an implementation in Ruby and Python and C++ and all the OTel languages, to deal with every possible backend, like Elastic? I mean, for monitoring metrics.
F
Yeah, so the SDKs use their own internal format that, I imagine, basically mirrors OTLP. Then the SDKs themselves and language agents have exporter interfaces. There's one by default that's always there for OTLP; for metrics, I think there's always one by default for Prometheus. But you can build, say, an Elastic exporter for Java, for example, and that's a bunch of Java code that gets linked to the Java SDK and would then export in your format.
E
Okay, thanks. The other thing about implementation: in terms of providing implementations, if you provide one in the Collector, that's good enough from the OpenTelemetry community's perspective. If a backend wishes to support this signal and wants to support the conversions, they can write them in their own language of choice, but if you provided them in the Collector, that should be good enough.
B
Yeah, I think that sounds reasonable, at least to me; that's something we can work with. Then, maybe to go back to the details of the proposal: right now, the initial proposal I made basically says that the minimum requirement for backends is to support pprof, because it's the only standardized format we have right now, and then the OTel Collector or the clients could also send something else.
B
If the backend says it supports it, and if needed, the OTel Collector does conversions. That keeps the backends very simple. Backends can add support for more complex protocols, like a stateful protocol where you're streaming the data and you do symbols and stack traces separately; you can also send JFR. But the minimum a backend would have to support is pprof. That has different advantages and disadvantages, and there are some other alternatives I've spelled out.
B
I'm trying to find my link here, in the alternatives section of the proposal. The other option is that all backends, instead of pprof, must actually support a stateful protocol. That would be something that looks similar to what Elastic is doing right now.
B
But we would probably standardize it as an OTel format. The trade-off here is that this gets more complex for vendors to support, because now you've got a streaming protocol where stuff arrives separately, stack traces arrive separately from symbols and so on, and you have to deal with that. The advantage is that we can do the conversion to that more easily in the Collector.
B
So the Collector can more easily convert pprof or JFR into that, and you still save the network bandwidth between the Collector and the backend. That, I think, is the one compelling potential change to the architecture proposal right now, and I'm curious if people have thoughts on it.
E
I'll say one thing real quick, just to bring in the OpenTelemetry component. If you look at the StatsD receiver that we already have in the OpenTelemetry Collector: it's a stateful component, where, the way StatsD metrics work, you report events, those get computed into a metric, and then it's reported on a time interval. So there is precedent for having a stateful protocol where you expect events to come in and then join them together in the Collector.
E
I don't think we've ever done something this large, though, so again, that's something I would talk to the Collector SIG about. I think there are a lot of compelling reasons why that protocol is efficient, and why having the thing that pieces together the stateful events locally, or in a server, makes sense. I would listen to the whole pprof conversation, and I would argue that if we don't support pprof in some way, we're probably making a large mistake. But a stateful protocol as an addition to pprof, an optimization, something we think the world can move to, seems like a good thing, especially given the notion of cloud profiling and streaming use cases.
B
Yeah, I think so. The main problem with pprof in the long run would be the bandwidth requirements, and if we do convert to something more efficient, it doesn't have to be at the producer/client level; the Collector level would be a good place, because that conversion can happen before the traffic has to potentially cross data centers, which in the cloud can be expensive.
G
Can I add a few comments based on what we just discussed? From the point of view of Elastic, our backend already operates in a stateful manner, so for us to add pprof support at the backend, so the backend can receive stateless messages, it wouldn't be a lot of work, because all the information is already there. The same would go for anybody else who also operates statefully at the backend.
G
So the comments that I added to the proposal are written with that in mind, in the sense that we're trying to move towards backends that ideally support both stateless and stateful protocols. That would be best, because then they can accept data from clients that only understand the pprof option, and also accept stateful data, either from the Collector or from clients themselves. And as far as the Collector goes, doing the protocol transformation from stateless to stateful, as we discussed yesterday, is trivial.
G
We don't need a full database layer; a minimal caching layer will do, as there is no significant state being kept there. So I do see this working out without any major issues. But doing the reverse, stateful to stateless, yes, that is highly complex, and it's something we need to avoid, basically.
B
Got it. So I guess, is it fair to say that most people here are happy with the idea that backends should, in the long run, be able to speak something very efficient like a stateful format, where data like stack traces, symbols, and all that stuff comes in separately, but also that backends should give clients the option to send data in the existing industry formats like pprof and JFR directly? That gives backends an advantage if they know what to do with these formats, and also gives clients an advantage because they don't have to convert. We want to support both of these ways of channeling data to the backends. Does that seem to have consensus, at a high level?
A
I have a question for the more experienced OTel members. Most of the OTLP service definitions right now are just a single service definition each, so you have a receive-metrics, if I'm not mistaken, or a receive-trace, which accepts a payload and returns a response, and it's unary. If we support both a stateful and a stateless protocol, we'd introduce the ability to support multiple services. Is that something that would be well received, or is it something that is discouraged? What are the...
E
Can you say that again? Sorry.
A
Basically, right now the traces, metrics, and logs services that the OTLP protocol exposes are just one possible implementation each, based on unary RPCs, if I'm not mistaken. And here we are proposing that backends implementing the next possible profiling protocol have two separate services they can implement, and they can offer one, or the other, or both. Is this something that is okay with the OTel philosophy in general?
E
This would be somewhat new. The context that I can give you is that there's a proposal in place, which has been a long, ongoing discussion, around using Apache Arrow to make a new kind of event-based signal.
E
The idea was basically that you could send events that, what do you call it, man, my brain is not working, that have the same set of attributes but different values. So columnar data instead of vector data, if you will: a columnar encoding for sending metrics. And that was something that, effectively, we would have treated as a different signal.
E
So one way you could approach this in OpenTelemetry is you could actually provide two profiling signals. You could have one signal, which is this holistic stateless thing, and another signal, which is stateful, and you could have two different paths in OTLP for them. The way you support them is you either have the path or you don't, right? So you would actually have two separate signals in OTLP, as opposed to pretending it's one signal, where one would be equivalent to what we do with metrics.
E
That one is like a big aggregation that dumps everything at once, and the other one would be more equivalent to what we have with, say, logs, where you send events. Users can then choose which of those protocols makes sense for them.
E
Given all the complexity of profiles, I don't know if that's right. But if I think as an OTLP person about what we've done previously and how I would approach this, I would think about that as an option: where I literally have a profiling signal that is stateful and a different profiling signal that is stateless. That's the approach we decided to take for the columnar encoding of metrics, sorry, multivariate time series, that's what we call them.
E
So that's the approach we took there; we just never actually implemented the multivariate part of the protocol. There's still a lot of debate going on, because doing that efficiently has a whole bunch of weird complications, similar to what you're running into here.
E
So that's kind of my initial reaction. I haven't really thought it through enough to say whether that's a good idea or a bad idea. When I looked at the proposal, though, I liked what you're trying to do, and I think it makes a hell of a lot of sense. It's just a little beyond what we've done in the past, and I think we need to make sure that the rest of the community is on board with "this is where we want to take our protocol".
B
Yeah, that makes sense. That's interesting, having multiple signals; that would kind of sidestep the content negotiation. But the question I have for that is: in the stateless signal you would send a pprof or a JFR, and those are very different. So you still need some way of saying "hey, which one do you support on the backend", I guess. Or would you have three signals, one for JFR, one for pprof, one for stateful? Do you have any thoughts on that?
E
Yeah, I guess that's where it starts: you'd have three signals. You would literally have "here's the JFR signal, here's the other one". Or you would do the JFR-to-pprof conversion client-side, and you would only support, like, a pprof signal and a stateful one.
B
Got it. No, I think for JFR we really want backends to have the ability to take it in without conversion, because conversion to pprof would be lossy; there's a lot of stuff in JFR that can't be put into a pprof, at least not reasonably. But it's an interesting thought. I guess another question is: is there OTel experience with shipping an opaque payload through the whole pipeline, like taking a big file with a JFR or pprof in it and sending that to a backend?
E
So, yes, kind of, though not necessarily opaque. Right now our logging signal allows you to attach basically arbitrary structured data as key-value pairs. One of those key-value pairs can actually be an array of bytes, I think they're adding that to it, so you can actually attach a log payload. I don't know if it's been used to basically transparently send a particular format of a different type yet.
E
I believe it was intended for things like "I want to log an image or a sound clip". But I know that people are sending logs with known structure kind of transparently through OTel, so there's precedent for that. In terms of OTLP, we've encoded "hey, here's just a blob of bytes that you can send", but we've never expressly said:
E
"this is an open format where you can put different types of encodings and you have to interpret it in various ways". That's not something we've really done in OTLP. We've tried to enforce enough rigor that people can interpret at least a minimum of things efficiently. We've always made room for extensions, but the idea would be that if I understand OTLP, I can at least get observability; I might not get some of the richer things that you're hiding within it, but that's fine.
B
Yeah, I think we would definitely have to at least specify some more details about what's inside of a pprof, because there are different profile types that can be shipped in pprof, and we probably need a mapping of what's the CPU profile, what's the memory profile, etc. We might also need to do the same for JFR if we want that to be shipped through. So I think we'll need some layer on top of these.
B
And I guess one thing we need to figure out is whether that metadata, of how to interpret this JFR, is something where we specify "this is how your JFR has to be", or whether we allow it to be sent along with the JFR, like "here's how you can find the CPU profile in this JFR". I think it's probably the latter, because in Java there are also user-space profilers putting data into JFR, like async-profiler, which is getting popular.
B
In practice they can put the same stuff, like CPU profiles, into JFR in different ways, so I think we probably have to allow that metadata to be supplied by the user.
B
So if I had to summarize at this point, it seems like the thing we need to talk to the Collector SIG about is whether content negotiation of some sort is palatable to the group and could be something we integrate into this proposal.
B
Do we have strong feelings on whether backends should support something simple like pprof, or whether backends should support something more stateful, potentially using the Collector to take pprofs and convert them to that? Personally, I was initially leaning towards backends that just support pprof.
B
That would be the simplest, but I've come around to thinking that maybe the stateful thing is what we should mandate on the backend to be OTel-compliant, just because, in the long run, even we at Datadog, where we send pprofs right now, don't like the bandwidth it creates. Getting that bandwidth down at the Collector level would probably still be very, very interesting for us, even if it means, for a vendor that currently takes just pprofs, a big re-architecting of their backend.
B
That's maybe the main downside of that decision, and the decision would also apply to open-source implementations that might want to provide backends in the future. So I don't know.
B
I don't think we have a good way of saying "hey, if you have a stack trace, just send an ID next time". I mean, we could do that and just pretend the ID is a stack, like the hash is a program counter or something horrible like that, and then all your stack traces have a single stack frame in them. But I don't think it makes sense to use pprof anymore at that point, if we're not really putting the stack traces in there.
C
I was just remembering: once there was a bug someone filed, or we were discussing it in pprof, where someone had a file that was basically a concatenation of profile protos; they would put multiple protos concatenated into the same file. I was surprised that it actually got parsed, but I think there was an error complaining that some field was present multiple times. I was basically thinking about that.
B
One random thought I had on pprof as well: you can, in theory, stream pprof, so you don't have to send it as one file. When you want to send a sample, you just need to make sure to send any references, in the string table and so on, before you send out that sample. So in theory you could build it as a stream rather than a file.
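A rough sketch of that idea; the chunk encoding here is simplified and hypothetical, but the ordering constraint (emit string-table entries before any sample that references them) is the point:

```go
package main

import "fmt"

// streamer incrementally emits a pprof-like profile: string-table
// entries are flushed before any sample that references them.
type streamer struct {
	strings map[string]int64 // interned string -> index
	table   []string
	flushed int // how many table entries have been emitted so far
}

func newStreamer() *streamer {
	// Index 0 must be the empty string in pprof's string table.
	return &streamer{strings: map[string]int64{"": 0}, table: []string{""}}
}

func (s *streamer) intern(str string) int64 {
	if i, ok := s.strings[str]; ok {
		return i
	}
	i := int64(len(s.table))
	s.strings[str] = i
	s.table = append(s.table, str)
	return i
}

// emitSample interns the function names, flushes any string-table
// entries the receiver hasn't seen yet, then emits the sample chunk.
func (s *streamer) emitSample(funcs []string, value int64) {
	ids := make([]int64, len(funcs))
	for i, f := range funcs {
		ids[i] = s.intern(f)
	}
	for ; s.flushed < len(s.table); s.flushed++ {
		fmt.Printf("chunk: string_table[%d] = %q\n", s.flushed, s.table[s.flushed])
	}
	fmt.Printf("chunk: sample value=%d funcs=%v\n", value, ids)
}

func main() {
	s := newStreamer()
	s.emitSample([]string{"main.work", "main.main"}, 100)
	s.emitSample([]string{"main.work", "main.main"}, 250) // no new strings to flush
}
```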
C
Another question I had, for the OTel negotiations, is the Collector part. In the proposal, or in the discussions, are we going to effectively ask whether it's okay to require the presence of the Collector, or do we accept the fact that clients may need to do the conversion themselves, which means implementing JFR-to-pprof, or JFR to this third, stateful protocol, in multiple languages? Or is that not decided yet?
B
No, I don't think we have a decision. It seems like for the OTel Collector SIG, or at least from what Josh said, the more we can make stuff work without the Collector, the better, generally. But I think it will also come down to what we do with the signals: if we have one signal for each, for pprof, for JFR, and for stateful, then we're sort of sidestepping that conversion problem. As a vendor, you just say which signals you support, right?
E
Yeah, and I'd actually suggest, even if you have one signal like that, and that's probably the route I would go, you should assume that the Collector might not be in the middle. Whatever you do to the protocol, it should be something a direct backend can also support. I am not suggesting that that means you have to do conversion logic in all languages at the client level.
E
I think there are a lot of reasons why you don't want to do that. But whatever you push to the Collector, you're also pushing to vendor backends. Just remember that, because there are a lot of people who just talk directly to a vendor backend, and that's a use case we need to support. It's also okay to have the Collector in there and recommend it, but you have to do both; basically, that's what I'm suggesting.
B
Okay, I think that's a great point to capture, and I'll resync the proposal based on that. So anything that we implement conceptually in the Collector, the backends would have to be willing to implement as well; that's what you said, right? Yep, exactly. Okay. I guess what we need to be careful about there, for us, is to make sure that supporting OTel on the backend is not so complicated that it's almost impossible to pull off in practice.
E
It's also possible, I guess the problem is that there's not an existing remote protocol for profiles, but to the extent that you can leverage one: for JFR, if there's a way that JFR already goes over the network, then instead of making a new JFR-over-OTLP protocol, you can just have the Collector support ingesting that existing protocol and converting it in the Collector.
E
That's what we do for StatsD, that's what we do for Prometheus, that's what we do for Jaeger and Zipkin for traces, for example; I think Fluent Forward does that for logging. So there's a lot of precedent for reusing an existing remote protocol. I think the problem, as far as I understand it, is that there's not really a standard out there right now for sending profiles in these different formats remotely.
E
Yeah, I'm semi-familiar with JFR from my days as a Scala person; anyway, fun times. So I guess in that case, this is where we should negotiate what this looks like. I do think that either conversion in the Collector, where we say we only expect backends to support this one protocol, or splitting the signals, so backends can say "I support this type of thing, this type of thing, and this type of thing", will help make it clear to users what's going on.
E
Today, when you look at OTLP support from vendors, they will actually explicitly list which signals are supported. So there is precedent for having multiple signals, and under profiling, I think considering something a sub-signal could be amenable to folks as well.
B
I think I like this idea of sub-signals, because it's also meaningful to the user to know whether the backend natively supports JFR. If it does, it can probably do more stuff with the data that's in there than a vendor that does not natively support JFR and would rely on the conversion in the Collector to get the data. So I think it's valuable even for users, to understand what's going on here.
C
The only thought is: if there are several signals, then would people start asking "oh, can we also add this fourth signal into this standard?", and then it essentially becomes the current situation of multiple potentially incompatible profiling formats.
G
If I can add something here: I think to some extent we can make allowances for the existing formats that are widely used, like pprof and JFR maybe, because as they're used currently they're the relevant standards, so those could be signals in themselves. But when it comes to stateful operation, because there is no precedent there, we shouldn't be afraid to define a format that works for all of us and make it extensible.
G
Going back to the previous point: sure, pprof could be flexible enough, using labels, to make it stateful, but that's a gross hack, and the same would be true for JFR. We have the flexibility and the opportunity here to maybe come up with something that really works well.
B
Yeah, and what I also like about the signals/sub-signals idea is that it gives us a good way to think about each format, maybe with a smaller subset of this group, and say: hey, what do we need to standardize on top of pprof, like how do we identify, again, the CPU profile in there? And then for JFR,
B
that's actually going to look slightly different, and might be more complex than what we need for pprof. And now somebody who wants to support the pprof signal doesn't have to deal with all the complexity of how you need to specify the metadata to take apart a JFR. So it might be nice to say "here's the pprof signal, here's the JFR signal" and then be very specific about what you need to send along when you make a request on that channel.
E
We see that, I see that a lot, specifically for tracing, where, you know, Jaeger directly supports OTLP ingestion, that's an example. Or Prometheus: they will prefer having the Prometheus server directly poll the client, as opposed to writing to a Collector and then having the Collector polled. Where people do use the Collector, they're usually using multiple different backends they want to control and route to, or they're adapting existing protocols into OTel, in addition to using OTel.
B
That's good. We're almost at time. Does anybody have any final thoughts or questions they want to raise? If not, I could try to summarize what I think the next actions might be.
B
No? So yeah, I think we learned a lot of interesting things today.
B
Based on this, I can try to update the proposal a little bit, to at least summarize where we are right now, and then I will try to figure out how to meet with the Collector SIG and send the information around for anybody else who wants to join that call, so we can get some more feedback on the questions we have.
B
I think the main question right now is: how does the Collector SIG feel about content negotiation being part of one signal, versus having separate signals for the different content formats? I guess that's the main question we're trying to answer. Anything else?
B
Awesome. Then yeah, thanks everybody for joining. Next time we'll be back with your regular host Ryan, but have a good local time, wherever you are. Thanks!