From YouTube: 2023-01-26 meeting
Description
OpenTelemetry Prometheus WG
A
B
B
C
D
I'm good, a bit busy, but I guess it's normal.
D
Just an FYI, I tried to build similar metrics using the Apollo server that manages GraphQL, and I'm still struggling, so I stopped doing it because it was taking too much of everything.
D
I was trying to identify...
D
E
Yeah, we just got our plug-in up and running fully for that to produce metrics.
E
F
I've been less sick... oh, not good! Hey Justin, hey Adriana, hey Rhys.
C
D
So, first of all, thanks everyone for joining. For those for whom it's the first time for this type of meeting, the idea here is: we have the honor of having Justin connected, and he has some feedback to share with the community to improve the usage of OpenTelemetry in general, so I'm pretty excited to have Justin here.
D
So thanks, Justin, for taking the time to be here with us for this hour. We are not going to harass you with a lot of questions, don't worry! We just want to get all the feedback and all the experience that you had utilizing OpenTelemetry in your projects. All right, so just to confirm with Justin and the other members of the group: are we able to record this session? If yes, we'll start; if not, then we will skip it.
F
Yes... I haven't noted... I think, well, shoot, Moon's not here. I think we just need to let Austin know, maybe. Okay.
B
D
So, hi Ted. How are you?
D
So maybe, I don't know if you know... if it's a problem, we won't record, but the idea is that we cannot publish this session on YouTube. So if we record, is it going to be published automatically on YouTube, or is it a manual process?
H
I'm trying to think... I think Sergey is the one who knows the most about the YouTube upload stuff. But yes, it will get published automatically, though not immediately, so someone would have to go disable it.
D
F
D
Right, so let's start recording and make a note that this session must not be published automatically on YouTube. Then, okay.
F
D
I see it's recording; it says "recording" at the top. All right, perfect. So just to explain the flow here for this session: we will have a couple of questions first. The first aspect is to get the actual project context from you, Justin, and then we will slowly jump into deeper questions about your experience, and we will probably also have some questions from the people connected today.
D
Right, so maybe before we start, we can do a small round of intros so everyone knows who is connected and who is asking questions. My name is Henrik Rexed, I am a Cloud Native Advocate at Dynatrace, and I'm trying to contribute to the OpenTelemetry project to collect feedback and also help people get onboarded on OpenTelemetry.
D
G
Hey, I'm Adriana Villela. I'm a developer advocate at Lightstep; I work with Ted and Austin, and I've been working with Rhys and Rin on the end user working group for OTel.
F
Hey everyone, I'm Rhys. I've spoken to you a little bit; like Adriana mentioned, I work primarily in the community on the end user working group, doing a lot of the things that Henrik mentioned. Oh, and my new job is as a DevRel at New Relic.
I
All right then, I'm Miku. I'm a Dynatrace product manager and, yeah, part of the demo app development.
A
D
All right, so Justin, maybe it's your turn. Could you introduce yourself, and also tell us what your role is within your company?
E
Yeah, I'm Justin. I'm a senior dev at Northwestern Mutual; I work mainly on our API and back-end systems. Almost all of our systems are now switching over to being GraphQL-based, and I help out across the board, from front end to back end. I also help out in the JS contrib repo, with OTel bug fixing and stuff like that.
E
To give some background, Northwestern Mutual is a mutual wealth company, so basically we give financial advisors and financial reps the tools needed to help clients with their financial journey through life. Most notably, we're known for life insurance, and the specific systems that I work on are for developing basically the PDFs that you give to clients that state: hey, your life insurance policy is going to do X, Y and Z. But we also work with disability insurance, long-term care, all types of products.
D
Could you also explain, within this, you mentioned GraphQL and a couple of applications: which applications have been instrumented with OpenTelemetry? Why did you specifically pick OpenTelemetry for this? And maybe give us some details: how much data are you collecting, anything that will explain a bit of the context of how, and the challenges of instrumenting, in this environment.
E
Yeah, so our system spans literally from old-school mainframes to AWS cloud. We even have some pieces in Azure cloud at the moment, our AD system. But really, the reason for telemetry is because, one, we don't all have the same telemetry platform inside the company: some people use New Relic.
E
Some people use Dynatrace; others just use straight-up Grafana and Jaeger and all those open source tools to do their tracing. So really, the reason why we started going down the path of OpenTelemetry was that our group recently migrated from New Relic to Dynatrace, and we still wanted to have our tracing all the way through the system.
E
E
The other big thing with telemetry systems, and companies that are doing telemetry, is that GraphQL is still a big blank when you look at telemetry data. I know Dynatrace just released in December, I think, a new agent that starts getting GraphQL, but I mean, we were starting with GraphQL four years ago, and this entire time we've not been able to see anything into our GraphQL systems. For those that don't know, GraphQL is a single endpoint, a single POST, and then everything is held in the body of the request and response.
E
Even errors: everything's supposed to basically be a 200. So all of those things basically left us in a blank, and we wanted to fill all that in. And to clarify, especially with the OTel GraphQL system, our traces can be three thousand, four thousand spans big.
E
We pass a lot of data in a single GraphQL response, because we have a main gateway that brings all of the different GraphQL endpoints into a single one, and so it all looks like one query to our end users. So really, we're talking about a vast amount of tech throughout our system, but we're also talking about huge blanks that we had to start filling.
E
Those we're filling in. I know, soon, coming up, we'll actually start with the distributed Amazon tracing, because that's another piece where we're kind of blind to whatever is happening with our Lambdas and ECS and everything like that. And I should say, that's another thing: we're on both Windows servers and Linux servers in our cloud system too, due to some proprietary stuff in there. So yeah, that's really what our tech landscape looks like, and then also what tech we've really started.
E
Instrumenting, it has been our GraphQL systems. We're starting to get into a lot of that; it's Node.js, but we also have GraphQL systems on .NET, so we are starting to instrument our .NET GraphQL systems too. Those are our two main big pieces of, I guess, language tech that we have.
E
There are other languages, yeah, not specifically in my group, but other groups have Java. The mainframe is PL/1, so that's a fun area of the space, and then we have C and C++ also mixed in, part of, again, the proprietary system that we have.
D
Now, could you maybe also walk us through how you are currently deploying those applications into production? Do you have a specific process for this?
E
E
That is the current state. We're actually in the process of moving to canary deploys, and we deploy mainly to an EKS cluster. Basically, we don't use Helm charts or anything like that; we just have a Kubernetes YAML file, and through our GitLab custom pipelines we push that out. We will be moving to cdk8s, using Amazon's construct system to push all that out, and then using Flagger to do canaries, but that is still in basically early stages at the moment.
D
One thing that you mentioned is mainly GraphQL, so GraphQL seems very important in your organization. What is it designed for? Is it the developers building the various queries, or does everyone in the organization have the ability to build their own queries? How does it work?
E
Yeah, so for those that know GraphQL, there are really, I should say, two systems for building gateways in GraphQL: one is Apollo Federation, and the other is what's known as schema stitching. We use what's known as schema stitching. To clarify, it's more open source; it's not as proprietary. Apollo is really starting to lock down their system. Basically, if we can stay as open source as possible, and basically as maneuverable as possible, due to a very changing landscape in GraphQL...
E
We're going to pick that tech, and that's the reason why we picked it. With GraphQL, we say: hey, here's a schema that has our system, but you can make your queries through it, you can make mutations, you can even do subscriptions in it. You can kind of think of GraphQL as sort of like an RDP... not RDP, an RPC system.
E
D
So
graphql
is
then
like
a
component
that
is
used
for
various
microservices
or
is
it
used
for
I?
Don't
know
if
someone
said
I
want
to
make
a
report
of
something
or
extract
some
data
and
build
some
some
analytics
out
of
that
or
how
is
it
used
mainly
for
both
okay,
both
okay,
yep
yep,
okay,
more
on
the
Telemetry,
Telemetry
side
or
or
observity
size?
So
you
mentioned
traces.
Do
you
also
collect
other
signals
and
how
are
you
collecting
them.
E
So we have traces. We are on Dynatrace, so we have the OneAgent that's installed on all of our nodes. In terms of the tracing side of it, basically it's just setting up the Node SDK, adding in the auto-instrumentation that we want, and it automatically gets picked up by our OneAgent. We are sending out metrics now, and so we've had a custom plug-in for getting certain GraphQL things that you still can't get through tracing: things like, we're looking for deprecated field usage, we're looking for...
E
What's our overall query usage, things like that; and we have that reporting to Dynatrace via the OTLP metric collector.
D
E
So a lot of it, all the tracing, is really diagnostic: if we have something going on in production, being able to easily figure out where the problem could be. As I said, with GraphQL, in most reporting systems everything's a 200, so everything will look like a success. By having the tracing in, we can actually clearly see if something actually is a problem.
E
The best example I can give is when we access a database and, all of a sudden, the database comes back and says: hey, you had a connection hang-up. But again, it gets to GraphQL, and we report back a 200. We can now clearly see where the problem lies.
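Because a GraphQL transport usually answers 200 even when the response body carries errors, span status has to be derived from the payload rather than the HTTP code. A minimal sketch of that check; the function name and the status shape here are illustrative stand-ins, not the real `@opentelemetry/api` types:

```javascript
// Decide a span status from a GraphQL response body rather than the
// HTTP status code, since GraphQL transports usually return 200.
// This enum mirrors OpenTelemetry's OK/ERROR codes in simplified form.
const SpanStatusCode = { OK: 1, ERROR: 2 };

function statusFromGraphqlResponse(body) {
  // Per the GraphQL spec, failures are reported in a top-level
  // `errors` array while the HTTP layer still says 200.
  if (Array.isArray(body.errors) && body.errors.length > 0) {
    return {
      code: SpanStatusCode.ERROR,
      message: body.errors.map((e) => e.message).join('; '),
    };
  }
  return { code: SpanStatusCode.OK };
}
```

With a check like this, a connection hang-up surfaced in the `errors` array marks the span as failed even though the transport reported success.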
E
D
E
To diagnose, we push everything to Elasticsearch and then have an ELK system that's running. I know we are in the process of doing a POC of our .NET logging, migrating it into the Dynatrace system, and I'm really pushing for us to have one system that's able to just hook everything together. Even the thing with the metrics: having the... were they the temporals, or the... no, the exemplars, being added into the spec, which will help us hook our metrics to our traces.
E
So, with all of those, one thing we have added was, in Node.js, the Bunyan plug-in that hooks the trace ID automatically for us into our ELK logs. So we have that capability, but we are trying to get into that whole system of having it in one place.
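What the Bunyan instrumentation does, roughly, is stamp the active span's identifiers onto every log record so log lines can be joined to traces in ELK. A simplified sketch of that injection step; the `trace_id`/`span_id`/`trace_flags` field names follow the convention the OTel log-correlation instrumentations use, but the function itself is illustrative, not the package's API:

```javascript
// Copy the active span's identifiers onto a log record, the way the
// OpenTelemetry Bunyan instrumentation correlates logs with traces.
// `spanContext` is the { traceId, spanId, traceFlags } shape that
// OTel's span.spanContext() returns.
function injectTraceContext(record, spanContext) {
  if (!spanContext) return record; // no active span: leave the record alone
  return {
    ...record,
    trace_id: spanContext.traceId,
    span_id: spanContext.spanId,
    trace_flags: `0${(spanContext.traceFlags || 0).toString(16)}`,
  };
}
```

Once every record carries these fields, searching ELK by a trace ID pulls up the exact log lines belonging to one distributed trace.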
D
You mentioned that you were using New Relic and all the other solutions. So, to propagate or to push the information to all those various solutions, do you rely on the Collector to do that, or how... oh yeah, okay. So can you share a bit of detail about the collectors? How many collectors do you use? I don't know if you have those numbers, whether you have several collectors or only one.
E
I don't... I would say, in terms of what I understand about our system, we're one of the primary groups that's using OTel, and I've been trying to help others in other groups get into the OTel landscape: get into not just whatever their platform is, using what the platform gives them out of the box, but also migrating them onto that.
E
So that will be in the future. Again, for ours, we use either the Collector (the OTLP metric collector), or we collect via the OneAgent.
D
So, let's jump more into the experience. First of all, when you started to use OpenTelemetry, how would you describe your experience? And also, from that experience, you mentioned that yours is the main team using OpenTelemetry: how is adoption going with the other teams?
E
Yes, so I will say initial adoption was super fast and easy. I mean, it was very simple to go from, let's say, our tracing needs, our telemetry needs, being 75% met by whatever the platform was, be it New Relic or Dynatrace; that next, what I would say, 20% gap was filled in for us with a couple of hours of POCing it, and we had it instantly up. To me, the initial OpenTelemetry tracing is just instantly easy. In terms of others adopting it...
E
We just got another group, a couple of other groups, moving onto it. We do have some contention, because some of the things that I'm also trying to watch out for are, as I said, proprietary systems. The one I will always harp on is Apollo Studio: a lot of the pieces that it ends up giving you, you can get through OpenTelemetry, or through your tracing or telemetry provider, and that's the big thing that I've been trying to push.
E
D
So at the moment, when you produce the telemetry data, is there someone who says: all right, let's use OpenTelemetry globally within the organization? Is there a process to validate the value that OpenTelemetry brings to your organization, or is this like a POC as of now, and then you will use it globally for other applications?
E
We just started that up this last week. We discussed it specifically with, first off, our OSS group, so open source software group; and then the other piece is our enterprise architecture group. Now that we're fully production-ready with the system, we can showcase to our enterprise architecture the benefits we're getting out of it, along with the maneuverability we have with it; not to denounce any telemetry provider.
E
But, as you've heard, we did a New Relic to Dynatrace migration. Let's say we went from Dynatrace to Lightstep or some other telemetry platform: being on OpenTelemetry allows us to make that migration very easily, whereas if we're in a proprietary system, it is very difficult. So those are the benefits that we've had to kind of prove out, so that we can go to enterprise architecture, talk with them, and get it established, basically, as the one telemetry platform. I mean, like I said, we just started that process.
D
And as of now, did you see... I mean, you mentioned the need, with GraphQL, of having more details produced out of that. Did you actually, as of now, see the benefits in your production environments?
E
Oh yeah. I talked about how we can have three, four thousand spans on a single service, and that's due to all the GraphQL resolvers that are sitting on our system.
E
We actually got such a benefit from it, because we noticed that one of the resolvers was acting weird, and this was something we didn't know. We kept questioning: hey, why is this system taking, let's say, 200 milliseconds? It shouldn't; it should resolve almost instantly. And we actually pretty much found it right away when we added the GraphQL OpenTelemetry plugin, because it showed exactly that there was one GraphQL resolver that was acting up.
D
A
D
You're using the auto-instrumentation of GraphQL in Node.js, and you've started on .NET. So could you maybe share your experience? Was the output produced by those instrumentation libraries useful and meaningful for you, or did you have to make some adjustments?
E
So on the Node.js side we instrument HTTP, Express, GraphQL and then, on some of our systems, the AWS SDK. We get probably the most benefit out of the GraphQL and the AWS ones. The ones that we don't really, excuse me, are the HTTP and the Express. Again, Express we use as a very minimal layer, adding just a couple of pieces in, and, to be honest, we're going to be migrating away from it to, like I said, GraphQL Yoga, which is a completely different server system.
E
It's not just a layer on top, so we will potentially be missing some stuff. Because we're in discussion with our OSS group, we'll potentially be writing a plug-in for that and trying to give it back to the OTel community.
E
But those are really our main areas. Again, the HTTP one really doesn't help us too much; I'll be honest, it actually adds a lot of noise that we don't need. The GraphQL one is really great, but there are still things that need to be done on it. I know I put out a PR to add an ability to basically ignore certain fields. I know there was an option that was like "ignore trivial spans", but it actually doesn't work...
E
...the way I personally expected it to, and so we're modifying it, but we're still trying to... yep.
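For reference, the Node GraphQL instrumentation takes an options object along these lines. The keys `depth`, `mergeItems`, `allowValues` and `ignoreTrivialResolveSpans` exist on `@opentelemetry/instrumentation-graphql` as of this writing (verify against the version you use); `ignoredFields` and `shouldInstrumentField` are hypothetical, sketching the per-field blacklist the PR asks for:

```javascript
// A sketch of the kind of configuration discussed above. The first
// four keys are real @opentelemetry/instrumentation-graphql options;
// the blacklist below is hypothetical, illustrating the proposed
// "ignore certain fields" behavior.
const graphqlInstrumentationConfig = {
  depth: 2,                        // cap resolver span nesting depth
  mergeItems: true,                // one span per list field, not per item
  allowValues: false,              // do not record query argument values
  ignoreTrivialResolveSpans: true, // skip default property-access resolvers
};

// Hypothetical blacklist: drop resolver spans for fields that only add noise.
const ignoredFields = new Set(['__typename', 'id']);

function shouldInstrumentField(fieldName) {
  return !ignoredFields.has(fieldName);
}
```

Options like `depth` and `mergeItems` are what keep a three-to-four-thousand-span trace from exploding further; the field blacklist would trim the remaining noise per schema.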
J
Yeah, so I have kind of an unrelated question, but I have to drop soon, so I wanted to make sure I asked it. You mentioned Lambdas today; we actually have an ongoing Lambda SIG effort, which was restarted recently. Could you share any particular feedback or frustrations you have with the Lambdas, or anything like that that may be missing?
E
So we haven't started instrumenting them. I will say, I think the frustration with Lambdas, and just AWS in general, is the constant promise: oh, we're going to add trace IDs, we're going to start actually following the W3C trace RFC; and they still haven't.
E
We've tried instrumenting via Dynatrace, and it actually ends up really slowing down our system; same thing with our ECS. And so, really, just...
B
E
All the different ways that you can potentially start Lambdas up, and the potential routes that you can go through: I mean, it can be the AWS API Gateway, which has the canary system built in; you can just have Lambdas with Step Functions and you're calling them directly; Lambdas now allow URLs. So all of those things make the tracing just almost impossible at the moment. We would need to see how the OTel Lambda system is, but I know, from our current aspect...
E
It is just absolutely horrible to try to get any type of tracing data out of there, other than just heading into CloudWatch or something and figuring out what happened.
E
J
J
Oh no, I was just going to say, that's really helpful to hear. We do have an OTel FaaS channel, and also two SIG meetings a week. So if you ever want to pop in and just provide some of that feedback, we'd definitely love to hear it. We have, you know, AWS folks, Google, Azure, Dynatrace, etc. So we're more than happy to listen to what's going on and try to fix it soon.
D
So, there was a question from Angelica, and I think it makes sense, because you mentioned that you use a mainframe. At the moment, I guess the mainframe is not part of the tracing strategy that you have; are you planning to instrument the mainframe?
E
We do actually have some of our mainframe traced, not on the OTel side, but on our Dynatrace side. We have our mainframes traced at the moment, and in our distributed tracing we do see our mainframes coming in, which is really nice.
D
Are you also basically hoping that the community provides mainframe support as well, like an OpenTelemetry tracing library for the mainframe?
E
It would be absolutely amazing if it happened. I also understand that that is such a, I feel like, minority issue: it's probably less than one percent of people that are doing tracing, and then I would say it's even less than one percent of those that are on OpenTelemetry that are going to want mainframe tracing. So it would be amazing, but I just understand how you're trying to allocate resources, and how people may not want to be working on a mainframe telemetry piece.
H
I mean, I do think we're open to it. We don't totally have the idea of, like, a community layer, but I do feel like these are the kinds of projects that something like that might benefit from, right? OpenTelemetry is a spec; we try to make things as standardized as possible. So, you know, if you do end up in a situation like mainframes, where you're having to DIY it, there probably are other organizations in the same spot; you probably don't need more than three or four to get something put together...
H
...that makes you all happy. And if an effort like that does get started, you know, we would be happy to potentially host it, or at least, you know, link to it from our website and stuff like that. So maybe it's the kind of thing that could grow organically, I guess.
E
I'll say, I think it would be absolutely amazing, if we had the developers that wanted to do it. I've got to be honest, that's the other piece of it: our developers are very much like, "I'm still writing PL/1; why am I writing PL/1?" I think it would be more that, if IBM decided to actually start trying to help on that effort, our company would follow suit. It's an interesting dynamic on that mainframe side of things. Cool.
B
E
B
E
Well, the Open Mainframe project that the CNCF has now... I think it was COBOL that they started the courses and everything for, so I could see COBOL plus OTel.
D
Nice. What is the biggest, I would say, challenge that you have faced using OpenTelemetry, or interacting with the community? Do you have anything that you would like to share about things that you tried to do and had trouble doing?
E
I will say, and I know this is the case for basically every project out there: documentation is just not great. There are very much, like, big gaps in the documentation.
E
I know that's one area that I plan on starting to help out with, because, I understand it: developers want to write code, they don't want to write docs, but it's definitely needed. I will also say, contributing back to the community has been pretty difficult.
E
From my point of view, like I said, I've been trying to help out in the JS contrib, and I just feel like it's very much an uphill battle trying to get contributions put in. I don't expect instant turnaround, but for a lot of plugins, it seems like they rely on whoever was the original owner to own that plug-in, and a lot of those original owners are just gone, or they're checking GitHub maybe once...
E
...every six days. And that's understandable: owners are going to come and go. But the processes, I think, of handing that off, or trying to get more contributors involved, making that very seamless, or as seamless as possible, is what's going to make the community grow.
E
Sort of. I know I've put it out there on different things like that we could be doing. Just to give an example: JS contrib is written in TypeScript. There's a project out called SWC, which is a Rust-based compiler to move TypeScript to JavaScript, and it's like 20 times, 30 times faster than Babel.
E
I've mentioned it multiple times and put up a feature request and said: oh, we would work on it, like NM potentially could work on it; and there was just no discussion on it. I've put in feature requests; like I said, I've done the GraphQL one, and I think it's still sitting out there from October, and it just has not moved.
H
E
I actually put out a PR, a full one; like, I implemented a feature that would allow us to blacklist certain fields, to say: just flat out, don't instrument this. And it's just sitting there. So it's very much, if you're wanting to contribute...
B
E
It is very much almost, I don't want to say a full-time job, but it's a part-time job of trying to keep up on those pieces. And I know everyone's busy, I understand that, and these are people that are taking time out of their day to do this work, but there needs to be, I feel like, a better system in place to help with it.
D
So you mentioned that you also got some metrics out of it. Your contribution was related to filtering the spans produced, and also producing out-of-the-box metrics from the GraphQL implementation; is that also part of the PR that you submitted?
E
No, this is a completely separate plug-in that I ended up writing, which will basically track deprecations and query usage.
E
The reason why it's not part of that feature is that a lot of it still seems like it's based off of kind of the platform that you're gearing it towards, and so I ended up writing it, but it still needs work to get it working with other pieces.
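A plugin like the one described, counting deprecated-field and overall query usage, boils down to walking the fields a query touches and bumping counters. A stripped-down sketch, with plain counters standing in for OpenTelemetry metric instruments; every name here is illustrative, not from the plugin itself:

```javascript
// Minimal stand-in for the usage-tracking plugin described above:
// count how often each field is used, and separately how often
// deprecated fields are hit. In a real plugin these would be
// OpenTelemetry counter instruments exported over OTLP.
function createUsageTracker(deprecatedFields) {
  const fieldUsage = new Map();
  const deprecatedUsage = new Map();
  return {
    recordField(fieldName) {
      fieldUsage.set(fieldName, (fieldUsage.get(fieldName) || 0) + 1);
      if (deprecatedFields.has(fieldName)) {
        deprecatedUsage.set(fieldName, (deprecatedUsage.get(fieldName) || 0) + 1);
      }
    },
    fieldUsage,
    deprecatedUsage,
  };
}
```

Hooked into the GraphQL execution path, counters like these answer the two questions the plugin targets: which fields are still queried at all, and which deprecated fields are still in use and so cannot be removed yet.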
E
On top of that, I have to go through my OSS group; and then, when you're working in a big enterprise, there's a lot of going through compliance and lawyers and all that fun stuff. So we're planning on opening it up to everyone; it's just that I have to go through a process to do that. Cool.
H
Could I ask you about the GraphQL semantic conventions? I'm posting a link in the chat for anyone who's not familiar with them. I'm curious: we've talked about the instrumentation packages, the implementations, but I'm curious whether you feel the conventions themselves, like the schema itself, are sufficient or good, or if you think they need improvement.
E
I think there are two different opinions on this. There's going to be the one that asks: why, in my trace, is it called graphql.execute, or something like that, instead of giving me the operation name on the trace, even though that's the value? People are going to say: well, no, I want to just see the name when I'm looking through my span log. I understand why it's not...
E
...but I think that's something that needs to be grappled with. I think the other interesting one is going to be the subscription piece. The operation.type of query, mutation or subscription makes sense; on the subscription note, I need to see how the tracing ends up working with subscriptions, because that's going to be, I feel like, an interesting area...
E
B
E
...to occur with GraphQL. Specifically on the document: just looking at it, the operation name is perfectly fine, the type is perfectly fine. graphql.document, that's another interesting one, I think, and the reason why I say that is that we have really huge queries and mutations that end up occurring; and by huge, I mean like 2,000 lines of a query.
E
I feel like it will end up slowing down the trace; it will definitely negatively affect it. So that's probably something that's going to eventually have to be looked at. I know that's something we've basically blocked on our side, because it's just too much data.
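One pragmatic way to keep `graphql.document` from bloating spans, of the kind blocked on their side, is to cap the attribute before it is recorded. The attribute keys below are the real OpenTelemetry GraphQL semantic-convention names; the truncation helper and its default limit are just a sketch:

```javascript
// Build GraphQL span attributes with the document capped at a fixed
// length, so a 2,000-line query does not bloat every span. Attribute
// keys follow the OpenTelemetry GraphQL semantic conventions; the
// limit and this helper are illustrative, not part of any spec.
function graphqlSpanAttributes(operation, document, maxDocumentLength = 1024) {
  const truncated =
    document.length > maxDocumentLength
      ? document.slice(0, maxDocumentLength) + ' ...[truncated]'
      : document;
  return {
    'graphql.operation.name': operation.name,
    'graphql.operation.type': operation.type, // query | mutation | subscription
    'graphql.document': truncated,
  };
}
```

The SDK's general attribute length limit (`OTEL_ATTRIBUTE_VALUE_LENGTH_LIMIT`) can achieve a similar cap globally; an instrumentation-level cutoff like this one just makes the trade-off explicit per attribute.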
H
It's too big, yeah. I ask because one of our goals for this year is to mark all of our existing semantic conventions as stable, but, you know, we don't want to just rubber-stamp them as they are. We want to do a pass through each one of them with end users and just double-check whether or not there need to be improvements; and for every single one we look at, we, you know, of course, find there need to be improvements, or there's stuff missing. For example, HTTP.
H
H
So, if we do get around to GraphQL, is that something you'd be willing to help us out with, in terms of feedback?
E
B
H
E
I think one thing to call out with this would be, again, if an error ended up occurring, so the errors array was populated: I think that's something that needs to be called out in the semantic convention.
H
Right, so you have errors within GraphQL, right? They won't be at the transport layer.
H
D
I just had a question, because I generated some traces, and the query was just in the span attributes, and I thought it was very difficult to look at two traces from two queries: you had to search and expand attributes to figure out which query a given trace is about. Did you resolve this by naming the spans differently, or do you have the same experience?
E
So really, when we're looking through traces, we're looking at distributed traces, and because of that we just go off of a trace ID; and then, even though the spans are named the same thing, because they're part of two different traces, it ends up being fine. I can see, though, that if you ended up running multiple queries or multiple mutations in the same trace under the same service, I could easily see how that could end up causing a problem.
H
Yeah, maybe following up on that: you mentioned you have some problems, so you're looking at individual traces. The other useful thing to do with traces is to look for correlations, right? Behavior that you don't like that's correlated with a particular attribute value or something. It feels to me that, if everything's bundled up in the same field, that would not work very well.
H
E
This is one of the areas where the field value is actually more interesting as the span name; but again, it's understandable why it's not. That's the reason why... I don't think there's a great way of doing it, to be honest; at least, in my thinking behind it, it's just not there.
E
H
I see, so they are there; they're just not... there's the name.
I
H
I
E
We have done some: we'll grab the context, and we'll grab a trace and add a couple of things to it. I know other teams that I've been working with are starting to add attributes themselves onto it. No one has done custom spans yet; I've kind of tried to deter people from doing their own custom spanning, just because working with the context is interesting, and if you've not worked in a lot of asynchronous programming, even people that have worked in a single-threaded way and have never worked with, let's say, Node.js callbacks...
E
You may not understand how the context is necessarily going to propagate, or how the context is going to act. And so I try to steer people away from doing that because of that; but I know getting attributes in, and all of that, we've already started.
H
I have a question, or maybe it's just a brief open discussion, getting back to what you said around contributing back. It's an acknowledged problem that we currently have in Open...
H
...Telemetry: like you said, we have language maintainers, but we don't have such a plethora of language maintainers that they can work on all the things. The feedback from them is that they mostly have the cycles to work on essentially the SDK and the API, and to keep up with the spec changes that affect those two core libraries.
H
They don't really have time to manage all of the contribs, and they often feel like they don't necessarily have... you know, they have OpenTelemetry expertise, but they don't necessarily have expertise in GraphQL or whatever it is; and instrumentation packages aren't the kind of things that attract long-term maintainers.
H
You know, a package got written at some point when somebody needed it, so they wrote it and, you know, contributed it back, but they have that couch-left-out-on-the-street quality to them. And we don't have a great solution to this. We know we need to come up with a better management strategy for these things, as part of going through improving all these semantic conventions this year and stabilizing them.
H
So I know this is the year where that's really going to come to the forefront. We're going to have to confront the fact that in some languages, like Java, it's very well maintained because it's fairly centralized, but there are other languages, like Node.js, where it just isn't; it's just kind of a pile of packages.
H
I don't have a good answer, but I wanted to bring this up again as a discussion point, because it's important to me. I'm curious if anyone on the call has any thoughts, especially you, Justin, about what might be a good way to do it, or what you see as the biggest pain point there.
E
Yeah, I really think... especially in the Node.js landscape, and I don't even want to call it the Node.js landscape, because it's the JS ecosystem, and JS to me is so broad. I'm going to pick on some languages right now. Java: you can do client-side work, right? You have thick applications; you can write Swing.
E
You can write... man, I'm dating myself... Java applets, whatever it may be. But really, Java is a server-side technology, and a lot of the plugins you're going to end up writing for Java are server side. There's not really a fragmented ecosystem there. There sort of is, because you have the Oracle JDK versus OpenJDK versus GraalVM, but essentially it will all function.
E
Similarly, as long as you're not messing with bytecode, and even if you are messing with bytecode, you're still mostly fine. C#, or anything on the CLR, is kind of the same way. I'm bundling everything CLR-based together, because yes, you could add telemetry to an F# component, but you could end up using it in a C# application, since it's all CLR based. Node.js is so weird... or I should just say JS, because you have the website versus the server side right from the start.
E
It's really weird in that aspect. And now, even on the server side, you have two runtimes that are similar enough that it looks like things will just work, but they may not. I know this specifically from JS with Node and Deno; I've run into it already. We did a PoC with Deno at one point, and it mostly worked, but not all the time and not the way we expected. Even the tooling... JavaScript in particular is just this environment that, I always say, is the wild west of development.
E
One is the front-end web group, and then you're going to have the back-end group. And even then, once you have those two core teams, maybe they decide, "Oh, we need someone specifically on the Deno engine and someone on Node," or on the front end. I can even see the core group doing the 80 or 90 percent, and then the web group saying, "Yeah...
E
...but now we've got to put five percent toward Chrome and five percent toward Firefox." I think in that particular group you're going to end up with little subgroups that need to start getting created, and I think once that happens, even if they're not particularly knowledgeable about a piece of technology, let's just say the back-end group, they'll get there. I'm bringing this up because we had a fun bug, trying to figure out the Mongo driver.
E
The driver just was not instrumenting correctly, and come to find out, it's that the async context just is not built in correctly at all, and that's not on the JS contrib; the way they instrumented it in JS contrib is perfectly valid. But I think when you have maintainers, or a subgroup, that is able to focus wholly on just the back-end components, it's going to make things that much easier. It's really a language where you're going to have to fragment that overall group, in my opinion; that's what I see making sense. I'm even thinking of the RUM work that's currently going on: once that core JS group gets the RUM system built in, that's a whole other layer of work they're going to have to keep track of.
H
Realistically, the browser and the server-side stuff is going to get almost completely bifurcated. Theoretically, yes, you can share a lot of components, which is how we're doing it now, but the browser environment is resource-constrained in such crazy ways that we're looking at it...
H
...and we're like, that's going to need to be its own vertical stack; it'll be much better if it's its own thing. And because there's kind of a RUM vendor community, I'm actually expecting that stuff, the stack of all the browser-level instrumentation, to get maintained better. The stuff I'm really worried about is, like you said, this ever-shifting landscape of contrib packages and libraries, and it feels to me like that takes just about a maintainer per package.
E
And I know the browser side: before this job I was a performance engineer on a browser engine, so yeah, it's not a fun landscape. Every single time we think we're converging on something, browser vendor X now wants to do something else, or now we have mobile, and so on. Yeah, I get it, trust me.
D
Just for the sake of time, we have four or five minutes left. So, Justin, before we wrap up, do you have any specific questions you would like to ask the community?
E
What are we expecting in terms of, I guess, what is considered the feature cadence for OTel? I can point to something very specific: there's the matrix on the OTel site that says this language now has X, Y, and Z support. Is there a specific cadence or roadmap, or do we just kind of not know, other than going through meeting notes, which is what I was starting to do?
E
I think the best example is Exemplars: that feature got frozen. Do we have a roadmap of, "OK, this feature froze, and now it's Q2 and language implementers can start"? Is that out there, or...
H
We are trying to put that together right now. This has been a problem with our spec development: we don't actually have it organized. We have a lot of people working on it, but it's not organized into a coherent backlog with timelines and expectations, which means, yes, sometimes things can just stall out because no one is paying attention to them. So that is a thing for 2023.
H
That's actually the thing I'm personally working on the most right now: trying to wrangle that. We do have the first part of it, which is a roadmap we've gotten from the community, but it's just very high level.
E
The last piece, and this is now with my performance-engineer hat on: is there ever going to be a specification for... I want to say you're starting to get into performance-testing functions. Is there going to be that level of specification, down to a function level or statement level, that you could technically add? I'm thinking of: I have a trace, and I know function X is a hot spot that I'm going to be working in. Other than writing my own custom spans, is there ever going to be work to specify, "OK, you can set this thing and this thing"? I'm even thinking of the performance timers: you'd just set point A and point B, and the OTel implementation would basically build all of that out for you. Cool, done.
D
I think this is something that is not being covered yet by the community, I'm not sure, but I agree. I also come from a performance-engineering background, and I'm pretty sure that, given that we have a common format for telemetry data, we can imagine tons of use cases for utilizing and analyzing that data: to build your tests, but also to analyze what comes out of the tests.
H
Yeah, I would say, if I'm interpreting what you're asking correctly, there's a profiling working group.

E
OK, did not know that.

H
It's very early days, but in my mind that's about gluing these two kinds of systems together: the kind of pprof profiling stuff you do, but trying to find a way to integrate it effectively into distributed tracing, which, it turns out, is tricky.
H
It is on our roadmap. When we talked to our end users, profiling was on about the third tier of what people wanted, and there are profiling companies coming in and wanting to work with us. But it's...
D
We've reached the top of the hour, so out of respect for everyone's agendas, I would suggest we finalize this interview. First of all, thanks, Justin, for your time and for sharing all your feedback, and I'm also eager to see your PR approved.
D
So then we can utilize the work that you've done. And thanks to everyone for being connected during this end-user session; I think it was, at least for me, very interesting.
D
All right, so thanks. Tomorrow there's the OTel in Practice session, with Justin again; we will probably talk about a similar topic, but with less detail, of course. So if you want, hop on tomorrow and stay tuned; we'll have two more sessions coming tomorrow on similar topics. All right, thanks, have a pleasant day, and see you soon. Bye, all.