From YouTube: 2021-11-04 meeting
A
So let's do as we usually do: let's give people two more minutes and then let's start at eight after.
A
We want that viewpoint represented too, so that we can make sure that whatever we come up with, whatever kind of semantic conventions we come up with, is open and extensible in terms of allowing intermediary instrumentation to hook in. I'm not sure what else you're doing; if you want to, say some words and introduce yourself.
C
Yeah, so, as Johann said, our primary interest is in an intermediary instrumentation for our broker. Our interest here is in making sure that the use cases we have in mind will be covered. I understand intermediaries are outside the scope of what we're talking about right now, but, as Johann said, we just want to make sure that what we come up with is extensible, and also to present some other use cases that may or may not have been considered so far, such as transactions, which probably share some things in common with the batch processing and receiving that are mentioned. So we'd want to look at some of those things as well. Jesse, do you have anything else to add?
D
No, no, we're excited to be here and interested. I would say that we're really interested in instrumenting our brokers, but we also have client, and sort of producer or receiver, API considerations to think about as well. So we'd like to sort of nail those down too.
C
What our broker implements is pretty much what is already standardized in the JMS API. We provide the same semantics in other APIs besides JMS, but it's just a reference point: for anything I mention, I'll try to use JMS terminology, for those who might be familiar with it, because it's a standard API as opposed to anything that we've done. So there are two types of transactions: session-based transactions, and global transactions, or XA transactions. Session transactions are the simpler concept.
C
Consumers can consume messages and then we say commit, and what that means is that, atomically on the broker and from the application's perspective, all of the produced messages are persisted and made available for delivery to other consumers, and all the consumed messages are removed; either all of that happens or none of it happens. That's a session-based transaction. And then there's global transactions.
C
There's a specification called XA transactions, and what that is, is really the same concept, but with a layer above it where you can have a two-phase commit. So you would have multiple resources participating in a transaction: perhaps, you know, a messaging client, and then maybe the application also writes something to a database.
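The session-transaction semantics described here can be sketched roughly as follows. This is a hedged, minimal illustration in Python; the `Broker` and `Session` types are hypothetical stand-ins, not any real JMS API, and a real broker would persist this work durably.

```python
# Hypothetical in-memory broker illustrating JMS-style session transactions:
# sends and receives are staged until commit(), then applied atomically;
# rollback() discards the staged work and returns consumed messages.
class Broker:
    def __init__(self):
        self.queues = {}

class Session:
    def __init__(self, broker):
        self.broker = broker
        self.staged_sends = []      # messages produced in this transaction
        self.staged_receives = []   # messages consumed in this transaction

    def send(self, queue, msg):
        self.staged_sends.append((queue, msg))

    def receive(self, queue):
        msg = self.broker.queues[queue].pop(0)
        self.staged_receives.append((queue, msg))
        return msg

    def commit(self):
        # Atomically: persist produced messages, discard consumed ones.
        for queue, msg in self.staged_sends:
            self.broker.queues.setdefault(queue, []).append(msg)
        self.staged_sends.clear()
        self.staged_receives.clear()

    def rollback(self):
        # Put consumed messages back at the head; drop produced ones.
        for queue, msg in reversed(self.staged_receives):
            self.broker.queues[queue].insert(0, msg)
        self.staged_sends.clear()
        self.staged_receives.clear()

broker = Broker()
broker.queues["in"] = ["m1"]
session = Session(broker)
msg = session.receive("in")
session.send("out", msg.upper())
session.commit()   # consume from "in" and produce to "out", all or nothing
```

The point of the sketch is only the all-or-nothing boundary: until `commit()` runs, neither the consume nor the produce is visible to anyone else.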
C
Isn't
that
useful?
I
think
if
you
kind
of
capture
what
a
transaction
is
to
a
client
and
a
a
resource,
I
think
global
transactions
just
becomes
a
natural
extension
of
that,
where
you
would
indicate
that
multiple
of
these
things
have
happened,
perhaps
linked
to
some
higher
span.
Quick
question:
do
we
do
we
still
think
that
2pc
works.
E
So, if you have a classic 1990s two-machine cluster, where the database and the coordinator and the queues are all living on one machine, and if the power goes out, the power goes out, then the illusion of 2PC works. But as soon as you drag that out into a large data center, where everything is moving, it doesn't, and that's why, in all the cloud PaaS systems, 2PC just doesn't exist anymore. So I would...
E
I would think that, yes, the transactions that are realistic are the JMS transactions where you can pull from a queue and then, in one transaction, settle in one queue and send to another queue. That's something where only the broker is involved; that's realistic. But beyond that, I just don't think we should.
C
I would be happy enough to basically agree with that. I mean, people do try to use it, but all the points you stated are definitely valid; I totally agree. And, like you said, when the coordinator crashes, when the transaction manager crashes, you've got these dangling prepared transactions, and in theory you can involve a human and figure out what to do by examining logs, but I just don't believe it's practical.
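The "dangling prepared transactions" failure mode mentioned here can be sketched in a few lines. This is a hedged toy model, not a real protocol implementation: the names `Participant` and `two_phase_commit` are illustrative, and the crash is simulated with a flag.

```python
# Minimal sketch of why a crashed 2PC coordinator leaves transactions
# dangling: once a participant votes yes in the prepare phase, it must hold
# its locks until the coordinator's decision arrives; if the coordinator is
# gone, no decision ever comes and the participant cannot resolve locally.
class Participant:
    def __init__(self, name):
        self.name = name
        self.state = "idle"   # idle -> prepared -> committed / aborted

    def prepare(self):
        self.state = "prepared"   # resources locked, awaiting the decision
        return True               # vote yes

    def decide(self, outcome):
        self.state = outcome

def two_phase_commit(participants, coordinator_survives=True):
    # Phase 1: prepare (collect votes).
    if not all(p.prepare() for p in participants):
        for p in participants:
            p.decide("aborted")
        return "aborted"
    # Simulated coordinator crash between the phases: nobody is left to
    # broadcast the decision.
    if not coordinator_survives:
        return "in-doubt"
    # Phase 2: commit.
    for p in participants:
        p.decide("committed")
    return "committed"

queue_mgr, database = Participant("queue"), Participant("db")
result = two_phase_commit([queue_mgr, database], coordinator_survives=False)
# Both participants are now stuck in "prepared" with their locks held.
```

Recovering from the `in-doubt` state is exactly the "involve a human and examine logs" situation described above.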
E
Yeah, we should not further the practice. Basically, we should say there's a set of transactions like that which you can do around a single resource, and that's fine; but 2PC is something that we should simply say doesn't work, because it doesn't. Yes, customers try; let's not encourage them.
E
And it is so that the illusion of 2PC cannot be upheld in cloud systems, and so therefore we should not give the customer some hope that it will be, because it can't, and neither you nor I can implement it correctly. What makes it worse is that we don't have a protocol for it, and we're trying to do interop here, right?
E
Absolutely. So I think session transactions are actually worth supporting, and that's what most JMS brokers do, because if you're in control of the resources, if you're controlling all the logs, then even in a distributed scenario, like with the hyperscale brokers that we run, you can always find a trick to create an illusion that you have consistency across the resources within that broker scope. Sometimes it's as simple as: you control the disk and all the logs are on one disk, or you find some other mechanism to deal with that.
A
That makes sense. Another question here, maybe for Clemens or Duane, about the session transactions: how would the flow there look, from the producer and the consumer side? Is it similar to a database transaction, where there's a commit and a rollback that can be triggered?
E
Yes, it basically is. We have these several points of how we start. We start the operation: we're doing send operations, we're starting to receive, and then we're getting messages from that receive. So basically there's a starting point, which is kind of the origin of where that transaction begins, and then all the subsequent operations are just children of that top operation.
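The span structure described here, an origin operation with every subsequent send and receive as its child, can be sketched like this. The `Span` class is a hypothetical stand-in for a tracing SDK span, kept deliberately tiny.

```python
# Sketch of the trace shape described above: the operation that begins the
# transaction is the origin span, and each subsequent operation inside the
# transaction becomes a child of that top-level span.
class Span:
    def __init__(self, name, parent=None):
        self.name, self.parent, self.children = name, parent, []
        if parent is not None:
            parent.children.append(self)

transaction = Span("transaction")           # origin of the transaction
send = Span("send", parent=transaction)     # operations inside it
receive = Span("receive", parent=transaction)
commit = Span("commit", parent=transaction)
```

A backend rendering this would show one top-level transaction span with the sends, receives, and the commit nested under it.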
A
Okay, that sounds good. I will look into that further and see if we need to update our output for that, to at least mention it there. Actually, I think we will not put it up for version 1.0, but basically add this as one of the open questions that will be answered by subsequent versions.
A
We're definitely putting it on the map, so that we can work on and think about it.
A
Awesome, thanks Duane. As the next point on the agenda I put the semantic conventions for cloud events. Over to Ludmila or Raul, if you want.
B
You can go now; I didn't have time to prepare the sample, so you can go.
H
So I have a couple of different clients that I can send events with. Sorry, it's too early in the morning, so let me start over. I have a picture; there's my terrible writing. So, the first scenario I'm going to show you: I have a producer.
H
Okay, so I have a producer. It creates a cloud event and sends it to the service called Event Grid. The client I use here, as I will show you, actually knows that it's a cloud event.
H
Then there is some magic that transforms this cloud event into an Event Hubs event, and then the consumer receives the Event Hubs event. At that point the user code knows that it's a cloud event, but the SDK I have at this layer doesn't know anything about it; it's just bytes.
H
So if we look into the code for this example, it will be quick. Basically I'm creating a cloud event and, to be fair, I can create a list of them; it doesn't have to be one event. This is the semantic convention for cloud events that I'm trying to implement here, a small part of it: I inject the context into the event, and then I send it using this Event Grid client. Okay, so on the other side, this is where I receive it.
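The producer step described here, injecting the current trace context into the event before handing it to the publishing client, can be sketched as below. This is a hedged illustration: the dict-based context, the `inject` helper, and the list standing in for the Event Grid client are all hypothetical, not a real SDK; the `traceparent` attribute follows the CloudEvents distributed tracing extension and W3C Trace Context format.

```python
# Sketch of the producer path: build a cloud event, inject the ambient
# trace context as a W3C-style "traceparent" attribute, then publish it.
def make_traceparent(trace_id, span_id):
    # "00-<trace-id>-<parent-id>-<flags>", per W3C Trace Context.
    return f"00-{trace_id}-{span_id}-01"

def inject(context, event):
    event["traceparent"] = make_traceparent(context["trace_id"],
                                            context["span_id"])
    return event

current_context = {"trace_id": "0af7651916cd43dd8448eb211c80319c",
                   "span_id": "b7ad6b7169203331"}
event = {"type": "com.example.order.created", "data": {"id": 42}}
inject(current_context, event)

published = []            # stand-in for the publishing client
published.append(event)   # e.g. publisher_client.send(event)
```

Because the context travels inside the event's own attributes, it survives any intermediate hops that know nothing about cloud events.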
H
So I receive a thing which, in this case, is not even an Event Hubs message, but it doesn't matter; it's just a Java thing. When I receive it, I create another span for cloud events, and then I have some problem here, which we'll discuss later, but I link all the contexts of the cloud events that were received.
H
So here I use my, by this time, cloud event, which I manually deserialized, you see. This is my application code; my SDK, which receives those, has no idea. So then I extract the context, I populate links on it, and then I process whatever happens next, and I also checkpoint.
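The consumer step described here, extracting each received event's context and attaching them all as links on one processing span, can be sketched as follows. A hedged illustration again: the `Span` class and `extract` helper are hypothetical stand-ins, and the traceparent values are dummies.

```python
# Sketch of the consumer path: deserialize each received cloud event, pull
# its trace context out of the "traceparent" attribute, and attach all of
# them as links on a single processing span.
class Span:
    def __init__(self, name):
        self.name, self.links = name, []

def extract(event):
    # traceparent format: "00-<trace_id>-<span_id>-<flags>"
    _version, trace_id, span_id, _flags = event["traceparent"].split("-")
    return {"trace_id": trace_id, "span_id": span_id}

received = [
    {"traceparent": "00-aaaa-1111-01", "data": "first"},
    {"traceparent": "00-bbbb-2222-01", "data": "second"},
]

process_span = Span("cloudevents.process")
for event in received:
    process_span.links.append(extract(event))
# ... process the events, then checkpoint ...
```

Links rather than a single parent are what make this work for a batch: each event may come from a different trace, and the one processing span can point back at all of them.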
H
Yeah, so this is my incoming request, and this is how I publish events: on this layer, the Event Grid publisher client, dot, publish cloud event. Well, this is funny: on this layer I still know this is a cloud event; on the layer below it, like the HTTP layer, I have no idea, right? It's just HTTP, it sends whatever. Okay, then, on the other side: this is actually the consumer side.
H
This is where... oh, sorry, I'm sorry, too early in the morning. This is the producer. This is the message, this is the cloud event created, and then I get the link to the process. This is process; I didn't name it properly, and I will explain why. So this is the processing of my cloud event. It's linked; the Azure portal shows it as a link. And then this is actually checkpointing.
H
Let's not pay too much attention to it. This is what we discussed last time, that we can always link; we'll talk about that later, but this is the implementation of that idea. We see again this pattern where we have a sibling: the event is a sibling of the Event Grid publisher.
H
The reason why I didn't name this thing properly is because of batches. In our small little semantic convention we mentioned that there should be a type there, but if we receive a batch and we try to process the batch, then there is no way to put a type in this event name, right? And there is potentially more than one type.
A
I'm just wondering if it would maybe be easier if you say: okay, you need kind of one span per process, per message. So for each message that is processed, you will have one span. And I think what you're saying is that in this example that's not the case, so there could be multiple messages processed during the one span.
H
Yeah, the interesting part about the consumption is that, basically, whatever you do, it's a user who writes this code, right? At least on our side, we don't have an SDK that actually processes cloud events. Maybe Clemens knows. Clemens, do you know if we have any offering for cloud events consumption?
E
I mean, there are... the way cloud events are consumed depends somewhat on the delivery path, right? A cloud event is just projected onto an underlying transport, so you can project them onto AMQP, MQTT, HTTP.
H
Right, but what I'm talking about is that when users write code, usually they receive something over HTTP and then they deserialize it to a cloud event, like in Functions, right? Yes; so essentially we don't have any offering that does it. But Joe, you have some case where...
H
I'm asking whether there is any SDK, let's say, or anything that helps users write the code where they receive a cloud event. So, for example...
E
The SDKs plug into transport mechanisms that are offered by existing libraries. The SDK is a set of helpers to do send and receive, and deserialization and serialization, of cloud events.
B
Yeah, so what I can tell you, Ludmila, is that I did a little bit of work on the Go SDK, and what happened there is that they already have this thing they call a client, and this type is used for any protocol (so there's HTTP, AMQP, all of those), but the user always uses this client. So it's a bit transparent.
B
So the user needs to create the transport, and then they pass this transport to the client, and in the Go one they have this abstraction they call the observability service, and this client, prior to actually invoking the protocol...
B
...it calls this observability service, and they had only one implementation there, for OpenCensus, and I did some work some time ago where I added an implementation of this interface for OpenTelemetry.
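The pattern described here, a client that calls a pluggable observability hook around every protocol invocation so that OpenCensus can be swapped for OpenTelemetry without touching user code, can be sketched like this. The names `RecordingObservability` and `Client` are illustrative, not the real Go SDK interface.

```python
# Sketch of the observability-service pattern: the client invokes the
# registered hook before the protocol call, so swapping telemetry backends
# only means registering a different hook implementation.
class RecordingObservability:
    def __init__(self):
        self.events = []

    def record_sending(self, event):
        # A real implementation would start a span here.
        self.events.append(("sending", event["type"]))

class Client:
    def __init__(self, transport, observability):
        self.transport = transport
        self.obs = observability

    def send(self, event):
        self.obs.record_sending(event)   # hook runs before the protocol call
        self.transport.append(event)     # stand-in for the actual send

transport = []
obs = RecordingObservability()
client = Client(transport, obs)
client.send({"type": "com.example.demo"})
```

This is why, as noted below, users of the Go SDK "probably don't need to do anything": the tracing concerns sit behind the hook, not in user code.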
B
So in that case, for the Go SDK, with my addition the users probably don't need to do anything. If they use the Go SDK receiver type, then when they receive the event they don't have to write the code that you mentioned, where they need to deserialize the cloud event and start the span and so on, because this observability service already does that, yeah.
B
It depends, yeah. So in the Go SDK, the way it's done there, if you use the receiver that they offer, when you receive the event, prior to calling your callback (your user code) it will start a span, so it will start the processing span automatically.
B
...are possible, yeah. And I think that the C# SDK doesn't do anything like that, because it's just normal C# sending HTTP messages and events, so I don't think they have any special client, like the Go one, that hooks in or makes it more transparent to users. In C#, I looked at the examples; it's just an API that, you know, reads the cloud event from the request body and deserializes it there.
H
Yeah, and the fact that, at least in many cases, users have to write this code makes me think that the best we can do is to use SHOULD everywhere, because users won't follow our semantic conventions.
B
Right, yeah, that will be a bit hard, yeah. The one in Go is quite nice because it's really transparent, but I'm not sure how it would be in the case of batching, like you have now, because I'm not sure if that one supports that case. But for the HTTP case, where I tested it (I put both in a sample for you, the one I did when I introduced the PR), it works quite nicely.
B
So
it's
and
with
your
new
with
the
the
new
proposal
that
you
did
also
it's
just.
I
just
need
to
change
it,
the
spay
name
in
in
some
small
things,
but
that
would
also
work
quite
nice,
so
it
would
be,
it
will
be
transparent
for
for
users,
the
only
thing
that
they
would
have
to
do.
Maybe
it's
I
don't
know
linking
or
something
if
they
want
to
do,
but.
H
Yeah, I can show you some other example where I found that linking is probably a good idea. We should probably say that we should link, and maybe even in this example I showed you before. So there are two different operations here, and this is the first one; there should be some indication somewhere. Yeah, so this is 8f, this guy; so this is the same trace, and this is the same trace.
H
This is the other trace. So here we don't have a receive stage, right? We don't receive; it's being pushed. If we look into the other example I have, we will also have a receive stage.
H
...that people can use to send cloud events: Azure Storage queues. Their specific quirk is that they don't have metadata like you have over HTTP; with Azure Storage queues you just pass a message, which is an opaque blob of data. So cloud events are the next thing you can use to actually propagate context, if you want, because at least you have the distributed tracing extension on the cloud event.
H
So here we have the same message creation, and this is the enqueue thing. Remember, for Event Grid, at this stage we knew that it's a cloud event; here we don't know, it's already a blob of data, and the HTTP layer cannot know it's a cloud event for sure. So, basically, this is where sibling relationships make perfect sense, because there is no other way. How else would you express it?
H
So actually, those first three are one trace, and the last five are another trace, right? And because we linked here, we naturally got this other parent from the consumption side. This is the Spring Boot instrumentation, which instruments the consumer call, and then this is a dequeue here from the queue (it's an HTTP request internally), and then, within that scope, this processing happens. So actually the process is a child of this consumer thing.
H
The portal prefers to show links, which is funny, but nice. So then this is a child of this one, and it's linked to this one, so we see the whole thing in one transaction, and we get this parent for free, right?
H
Well, yeah, this is the choice that the portal made for how to show it, right? It could be shown in any other way. I think it kind of makes sense.
B
It's hard to read that. I mean, it's a bit obvious in this case, but if you didn't know that it was the consumer, you wouldn't know where this came from, like in which component the cloud event processing happened, unless there is information in the span.
B
Yeah, I mean, we know that it's a child of that one, but just by looking at it, it's not clear, right? But yeah, it's just a visualization thing, I guess.
H
Yeah, but wait a second: if we had a batch, we would have multiple groups of this, right? Multiple events.
A
Yeah, but I mean those creates could happen in different traces, and, for example, I look only at one create. Basically, I look at the trace that originated one message, and this message might be created in one trace, but the batch that is received by the consumer might contain cloud events created in many different traces.
H
I guess there may be some scenario where the cloud events are batched together, but in these two examples I gave you, I believe that's not the case. You cannot do this, because you either send the whole batch... oh okay, so you can create a batch from different traces, combine it together at one stage, and then send it. Yeah, this is possible.
A
I think the use case that you made here basically made a batch of cloud events, but that batch is received just in one, I think, Event Hubs message.
H
I'm not sure this path would ever mix together cloud events from different Event Grid events, but yeah, totally, that can happen in some scenario, so yeah.
A
I mean, I had another idea here, because we have this question of whether we do linking or parenting, and yeah, I worked out some pros and cons for links. We can go to it afterwards, but I was thinking especially about this cloud events use case, where in some cases it looks better with parenting and, in other cases, yeah...
A
...we can only do links, like the case that you showed here with the batch receive. And just one idea I had, and I want to throw this out here, would be that it would also be possible in the spec to say: okay, between this process and create stage of the cloud event...
A
You may, yes. And the idea here was that, basically, you will always have the link; you can rely on that. That is basically for consistency: you can always follow the link between the create and process stages, and in some cases, where it makes sense and if possible, in addition to that, you have a parent-type relationship too, which provides you with, like, maybe better UI experiences.
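The convention proposed here, a link that is always present plus a parent relationship only when an ambient context makes one possible, can be sketched as follows. The `Span` type is again a hypothetical stand-in.

```python
# Sketch of the proposed convention: the process span always carries a link
# back to the create span (the relationship tooling can rely on), and may
# additionally be parented into the ambient trace when one exists.
class Span:
    def __init__(self, name, parent=None, links=()):
        self.name, self.parent, self.links = name, parent, list(links)

create = Span("cloudevents.create")

# Push delivery over HTTP: an ambient server span exists, so the process
# span gets both a parent and the mandatory link.
http_server = Span("HTTP POST")
process_push = Span("cloudevents.process", parent=http_server,
                    links=[create])

# Pull from a metadata-less queue: no ambient parent is available, but the
# link is still there, so the create-to-process relationship stays
# consistent across both scenarios.
process_pull = Span("cloudevents.process", links=[create])
```

Tooling that only understands the link still sees the same create-to-process edge in both cases; tooling that understands parents gets the nicer single-trace view when it exists.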
H
I was thinking about it, and if we look into the code that we have here: when I create a span, I don't provide a parent, and this is the beauty of it, right? It captures whatever ambient context is there. And then, when we say you may add a parent, there is a small chance that some people who don't understand the ambient context, or who basically struggle with learning OpenTelemetry so far, will override this ambient context and they will lose something.
B
The override is what we were discussing, I think, the other week or so, because most likely, if you have an active span already, it will most likely be from the auto-instrumentation, like the HTTP one or something, yeah. And we talked about wanting to use the one from the event, to not make the active one the parent, right? Because if we use the active one, for example in the cloud events example that I sent, there will be an active span when receiving.
A
But actually, I like this. I mean, anyway, from the beginning I liked this idea that Ludmila said, that we basically have this link as a sure relationship, and I agree that if you add this "you may add it as a parent, if possible", that might be too complicated, because it might involve checking.
A
So I like the idea of just having the span there, because in the HTTP use case that, I think, Raul, you mentioned, the parenting would basically be happening due to the HTTP context that gets passed. So basically you have the HTTP client and server spans, and those would be parent and child, and those would link it all together into one trace; and, in addition to that, you would then have the link from the create to the process spans, in a single trace, for the HTTP case. Now...
B
Yes, yeah, so this part, I think we need to get it more...
B
...I don't know, consistent, yeah. Because if they just send via HTTP (for example in the Go SDK, if they use the OpenTelemetry integration), they will have the create span. So the first thing that will happen is it will create the create span, and then they will send via HTTP and, because it's auto-instrumented...
B
...you will have this HTTP one, and then the receiver also creates an HTTP span and then creates a processing span. So the process is a child of the incoming HTTP span, and the create is the parent of the outgoing one. So it will not have the same structure as we've been discussing. In this way, yeah, if we use...
A
Yes. Basically, look at Ludmila here: she has the link from the cloud events create to the cloud events process, and I think we would have the same in the HTTP scenario. We will also have the link from the cloud events create to the cloud events process; there would be other spans in between those two, like that kind of HTTP stuff, right? But basically this link would be consistent between both scenarios, and I kind of like that solution.
B
I guess I can try, in the Go SDK, to add a link during the processing and see how it looks.
B
But I'm not sure which backend, because our backend doesn't support links at the moment, and then I think Jaeger also doesn't support links, I think.
B
Okay, I can try then. I'll do this tomorrow, just the same part with this, and then I can bring up what I wanted to do last week, which I didn't have enough cycles to do: I wanted to try to build at least one of the status quo examples from your document that we've been looking at, and then see how the real trace would look.
A
The link approach works for the HTTP scenario, and basically, if you're fine with how it looks, then we can go with that and just adapt the cloud event semantic conventions, so that they basically say you should create a link between the create and process spans.
B
But it will still be processing, right? So...
H
Yeah, I can go and make another pass on them, and then we can update later.
A
Okay, I will then, going to the last point, share my screen. I did some listing here of the pros and cons for using links; I said last week I would look into that. I also talked to somebody from Azure Monitor regarding the cost of links on the backend side, and tried to include that in here, and basically, I think, there's not much new there.
A
So basically, with spans with links you can model n-to-n relationships, whereas with parent-child relationships you can only do one-to-n relationships. So in that way there's more flexibility with links. And then we also have the ability to connect different traces: with a parent-type relationship it basically always has to be one trace, while with links you can have more modular, different traces that you connect together. And one other point is that it adds an additional relationship semantics, which I think makes the whole model more powerful.
A
So you could basically interpret parent-child relationships and link relationships in different ways, which gives you two kinds of relationship semantics that you can use, which also increases the modeling capacity that you have. The cons here are that links are harder to query and to visualize; we saw some of those challenges today in Ludmila's great example, when, for example, a link leads you to a span that is not a root span, and then this other trace...
A
You
see
it
split
up
in
the
view
which
is
kind
of
can
possibly
be
confusing,
so
there
is
challenging
there
on
the
visualization
side,
just
also
challenging
on
challenges
on
the
querying
side,
because
links
are
more
difficult
to
query.
If
you
have
to
write
manual
query
for
some,
maybe
a
minor
regulation
scenarios
that
involves
links
that
can
be
quite
tricky
to
come
up
with
a
query
that
also
creates
links.
A
So
those
are
basically
a
cons
on
the
from
the
user
point
of
view.
There
are
also
higher
performance
costs
for
links.
So
that's
about
the
I
got
confirmation
from
the
person
I
talked
to,
who
worked
on
azure,
monitor
on
link
support.
A
Basically,
you
need
multi-pass
querying
when
you
query
link,
so
you
basically
first
you
query
the
root
trace
and
then
you
have
to
look
at
all
the
links
and
then
do
another
pass
to
resolve
the
links
and
that
possibly
has
to
happen
steps
several
times
depending
on
how
deep
you
resolve
the
links.
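The multi-pass resolution described here can be sketched in a few lines. A hedged illustration: `store` is a hypothetical trace store keyed by trace id, and the span dicts are simplified; a real backend would do this against an index, not an in-memory dict.

```python
# Sketch of multi-pass link resolution: fetch a trace, collect the trace
# ids its links point to, fetch those, and repeat up to a depth limit.
# Each loop iteration is one extra pass over the store.
def resolve(store, trace_id, depth):
    seen = {trace_id}
    frontier = [trace_id]
    for _ in range(depth):                  # one store pass per level
        linked = set()
        for tid in frontier:
            for span in store[tid]:
                for link in span.get("links", []):
                    linked.add(link["trace_id"])
        frontier = [t for t in linked if t not in seen]
        seen.update(frontier)
    return seen

store = {
    "t1": [{"name": "process", "links": [{"trace_id": "t2"}]}],
    "t2": [{"name": "create", "links": [{"trace_id": "t3"}]}],
    "t3": [{"name": "origin", "links": []}],
}
```

Each additional level of links costs another round trip to the store, which is the extra querying expense being described.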
A
So basically, querying is more expensive, and ingestion is also more expensive, particularly the indexing step for backends, because when you index a trace, you then have to index not only all the spans in the trace but also the links that each span has; this can be kind of a recursive step that might be expensive. And I also added here, in parentheses, the lack of support for links in several backends, because that's currently a con that we have, but we said we will not...
A
...really make this a reason for not using links, because we just expect backends to catch up at some point. So basically, here we see what we kind of knew before anyway: the cons are mostly on the, how to say, consumption side.
A
So
basically
it's
it
will
be
harder
to
present
this
to
the
user.
It
will
be
harder
for
the
user
to
query
and
the
performance
for
the
user
or
the
backend
will
be
higher,
and
the
pros
are
mostly
basically
on
the
production
side
of
traces.
So,
basically
we
will.
We
will
be
able
to
basically
model
more
how
to
say
more
rich
or
richer
traces
when
using
links,
and
we
will
be
able
to
model
scenarios
that
we
could
not
model
with
parent-child
relationships.
A
So that is the gist of it, and I think the cons are mostly, as I said, challenges for the backend and for the visualizations, and I think the cons can definitely be mitigated. I mean, you can never completely eliminate all the cons I've listed here, but they can be mitigated by good support for links in backends.
A
Yes, and I did some homework here too. I didn't get far, but I can also show some Jaeger traces here that I came up with while I was thinking about that. In both traces I just tried a very simple example, using only parenting, and even in this simple example you see some...
A
If you see this as a single trace, you see something counter-intuitive, because here you basically have the producer and here you have the consumer, both in one trace, and in between there's nothing; there's a gap. And I think that is what users will probably see in most of those cases, because you produce, then the message is queued, and then later the message is dequeued; there's always a gap, and I also think this can be quite counter-intuitive.
A
If you look at this as a single trace, it's kind of hard to discern what the duration of this trace means. Yeah, I mean, technically it means from when the producer started until the message was completely consumed.
C
Johan, I'm interested: does this not kind of point to the fact that the intermediary involved in between doesn't have anything in this trace? Like saying: I received it, I queued it, I tried to deliver it to a consumer, and then the consumer gets it. I mean, if all of that was there, I think the gap disappears then, right? Well, not...
A
...not necessarily completely, because when you look at this second example, I tried to put some intermediary in here, and the gap might still be there. I think you cannot even generalize; it might really depend on the scenario. But also here, basically, the gap never completely closes, because this gap here is basically caused by the consumer pulling the message: the intermediary doesn't really have control over when the message is consumed, the consumer pulls, and also this...
C
Yeah, I would sort of think that when you pull it, though, you're still effectively invoking the intermediary, and they could indicate that they were pulled; and now there's going to be a gap between when the intermediary queues it and when it gets pulled, but that seems natural to me. Yeah, I mean... sorry to...
I
Sorry to interrupt, but we've got the Java meeting starting, and so we need to wrap up.
A
Oh, sorry. Thanks, thanks to all the messaging people for participating; let's all drop and let the Java people join. Sorry for that. Okay.
C
Do you have the power to kick me out? Because I honestly can't believe... no, okay. Oh, and I tried just closing the application, and that's not working either; it just brings the leave button back. Wow.
I
That's okay, we'll just let the recordings merge, that's fine. No one ever looks at them anyway. All right, I'll raise another escalation to see if there's any movement on getting us another Zoom channel, so we don't have this problem every week, because I know it's painful for the messaging SIG also.
I
Cool, let's see. Jason, you even added something to the agenda; awesome, yeah.
N
I just noticed that in various PRs they show up with the little red X, like the build's failing, but it's only the code coverage going down — or being below threshold, rather — and I'm just wondering what we think we want to do about it. Because it's just kind of distracting at a glance: we're all busy doing a hundred things, and you look at this and you're like, man, all these builds are broken — but it's just Codecov.
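One way to keep the coverage report without the blocking red X — sketched here against Codecov's documented config options; the repo's actual codecov.yml may differ — is to mark the status checks as informational, so Codecov still comments on PRs but never fails the check:

```yaml
# codecov.yml — informational checks report coverage but always pass,
# so a dip below threshold no longer shows up as a red X on the PR
coverage:
  status:
    project:
      default:
        informational: true
    patch:
      default:
        informational: true
```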
P
I think it's probably for metrics — for metrics I used it to find a bunch of test cases I'd forgotten to write, but that could be a scatter-brained Josh issue and not a general issue. Just as an aside, you can, if you have a real IDE — Josh, your IDE...
P
Oh my god, I switched to IntelliJ, is that not real? Oh, okay, good, good. I thought for a long time you were using VS Code or something. I am also using VS Code for Scala, because it's like three times better in Scala than it is for Java, but I use IntelliJ. I won't say I know how to, but I definitely use it.
P
Don't use the map, the map is awful. I agree — oh my god, yeah, that's an exercise in cool UI design that doesn't actually work, yeah. And there's this problem.
K
...limit to be exceeded, then that seems a little low.
P
Okay, I'm not muted now, right? Yes? Right, okay, cool — thanks, Zoom. Yeah, so this is my fault. OpenCensus is down at 70, the OpenTracing shim is down at 80, and then SDK extensions is at 77 percent, so those are the only ones that are somewhat low — the SDK itself...
P
Yeah, and I feel like the exporters — some of the lines of code that it wants you to test, because I looked at this as well, some of the lines of code it wants you to test are actually kind of hard to get a test for. So if we look at this one here...
P
Like this — this one's particularly interesting, there was nothing — great, well, never mind, we have no problem now, there's no coverage, all right. Yeah, everything else is at 100, so it's literally just that one. Come on, work this time, you can do it... no, okay, never mind. But I remember looking at some of these: the lines of code that it wants you to cover are either covered in an integration test somewhere that doesn't have code coverage dumped, or are incredibly hard to test in a unit test and probably aren't worth the while.
P
That's totally my bad. All right, I'm going to stop sharing, because I think we've exhausted what's useful in viewing there, yeah.
K
I mean, my philosophy here is: why do we have it? Is it actually helping us? Is it doing anything to make the actual project better? Is it making our lives easier? Josh says it made his life easier when he was doing metrics, to see coverage, although he can't actually — I —
K
Yep, it's true. Let's chat with Anuraag about this this evening.
I
Let's see — we put out a patch release yesterday in the instrumentation repo, and there was a patch release, I think the day before, in the SDK repo.
I
So this was what was fixed in the SDK and then pulled into the 1.7.2 agent release. So this was an important regression that was fixed — I'm not sure — oh well, this was sort of a regression: in the metrics, loggers are using one of the new java.util.logging methods that take a Supplier, and we weren't, in the Java agent —
P
— no, it's not gonna be a problem, you'll love the PR. So we have a test that we cover all of the java.util.logging methods, except we had this exclusion in our test to explicitly say: oh yeah, we don't really need to cover these Supplier methods. Yes, Jason, thank you.
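For context, the java.util.logging Supplier overloads in question look like this — a minimal sketch of the JDK API being discussed, not the agent's actual instrumentation:

```java
import java.util.function.Supplier;
import java.util.logging.Level;
import java.util.logging.Logger;

public class SupplierLogging {
    private static final Logger logger = Logger.getLogger(SupplierLogging.class.getName());

    public static void main(String[] args) {
        // Classic overload: the message string is built eagerly,
        // even when the FINE level is disabled.
        logger.fine("expensive: " + buildMessage());

        // Supplier overloads (since JDK 8): buildMessage() only runs
        // if FINE is actually enabled. An agent test that only checks
        // the String overloads would miss calls shaped like these.
        logger.fine(() -> "expensive: " + buildMessage());
        logger.log(Level.FINE, () -> "also lazy: " + buildMessage());
    }

    static String buildMessage() {
        return "computed";
    }
}
```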
I
Yeah — and this was at least a regression if you were using the new HTTP request-header capture, so not —
I
Got it, yeah, so that's an important regression also. And I don't think this was — Nikita, I think you fixed this — I don't think this was a regression, but it was still, like —
I
The bad news was that we've changed how we were using the Nebula Gradle plugin to do automatic versioning based on the tags in the repo, and that has been giving us problems for other purposes. So we removed that recently and went to hard-coded versions, which is great, except that the patch release GitHub workflows —
I
— when you run them manually, they run against main. So we had already updated the patch release — I had already updated the patch release workflow to do it the new way — but in the branch it was still using the Nebula plugin, and so it basically ended up pushing a 1.8.0 release to Maven Central, which of course we can't revoke.
F
Personally, I would actually vote for not doing a 1.8.1, and actually just abandoning the 1.8 numbering altogether and making the next release 1.9.0.
P
If you're friendly with them — like, if you know Joel — you can ask. I've had to do that in the past; they'll give you the "please never ever do this again" spiel.
N
Can we move the SDK to 1.9 and then be in sync?
K
Yeah, let's, you know, chat with Anuraag about it this evening. Also, honestly, I don't think anyone will care very much if we skip 1.8 — they may be a little confused. We could also just generate a 1.8 release and then immediately release 1.9 after it in the SDK, just if people are confused — I think they'd assume they're just auto-incrementing and gaps look broken.
I
Right, yep, yeah. And my thought on this is: I feel like I would reserve doing something like this for if it was more of, like, a security kind of a problem, not just because we messed up.
O
So one thing that can be done is to create, like, a staging repo, so you can just play with it and just release it when everything is ready. We have done that in the project that I work on, and it's a way just to make sure that you don't mess up — not trying to blame anyone, right — but the staging repo actually works great in that sense, because it still has this manual step, which you can automate.
O
You can automate it, right — Sonatype's Nexus has a REST API. I'm not sure if there is a particular plugin that allows you to do that, but you can definitely just call the REST API to promote the repo if you're happy with it. So it can be the final step, after all the pushes to the staging repo, to just call the REST API to publish, and that's it.
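As a rough illustration of that final step — the host, endpoint path, profile ID, and payload shape here are placeholders based on Nexus's staging workflow, not our actual Sonatype setup — promoting a closed staging repository boils down to one authenticated POST:

```java
import java.net.URI;
import java.net.http.HttpRequest;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class StagingPromote {
    // Builds the promote request; actually sending it (HttpClient.send) is
    // left to the release workflow. All identifiers here are illustrative.
    static HttpRequest promoteRequest(String baseUrl, String profileId,
                                      String stagedRepoId, String user, String password) {
        String auth = Base64.getEncoder()
            .encodeToString((user + ":" + password).getBytes(StandardCharsets.UTF_8));
        String body = "{\"data\":{\"stagedRepositoryId\":\"" + stagedRepoId + "\"}}";
        return HttpRequest.newBuilder()
            .uri(URI.create(baseUrl + "/service/local/staging/profiles/" + profileId + "/promote"))
            .header("Authorization", "Basic " + auth)
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(body))
            .build();
    }

    public static void main(String[] args) {
        HttpRequest req = promoteRequest("https://oss.sonatype.org", "profile-id",
                                         "staged-repo-id", "user", "secret");
        System.out.println(req.method() + " " + req.uri());
    }
}
```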
I
I think I can just change it to not auto-close the staging repo — I think that's a simple enough change — and then we can manually check it, or I can figure out another way to run it locally to verify. Just something, because I knew that the next major release was probably not going to go out clean. I didn't think it would go out the — I didn't think it would succeed with the wrong version number, but I have a feeling that —
I
But that might give us confidence there. Cool, I'll look into that. All right, let's move on to an important topic: Josh, do you want to set the stage here?
P
The http.url attribute is kind of hard, so they do this thing where they want the http.scheme, the http.target, the http.host — and if you can't detect host you use, you know, the net.host.name or net.host.ip — to try to do your best to construct this scrubbed http.url. That's the ideal target, and then for clients it's also supposed to be the URL with everything scrubbed. Additionally, in the client, we had an issue with high cardinality.
P
Now, what I want to understand is how much of that high cardinality was because we were including the net.peer.ip address everywhere versus the URL itself — but we will have an issue with URL cardinality over time, especially if it includes all of the parameters.
P
Just because that's somewhat expensive. When it comes to doing duration tracking of HTTP, though, if you don't have anything denoting the URL, you basically don't know what on your server is slow, right? So you get: I have a latency alert — crap, what's that latency alert slow from? Is it, like, the whole server? Well, that's basically all you get if you don't have any notion of URL or target. So we have a decision to make on this PR; there was some concern of, like: do —
P
— we include the URL or the target and have this potential cardinality issue now, or do we change duration metrics to not have URL and target? My current suggestion is that we try to get rid of the other things that are known to cause cardinality explosions, and we keep URL.
P
And possibly craft an automatic scrubber of URLs that does various things. That would be my preference here, because I'd prefer to get a useful duration metric. That scrubber could be something you could turn on and off via some configuration parameter.
P
That's, like, in the longer term. In the short term, I'd like to propose that we actually just use URL directly and see how bad it is — to see if we run into the same issue that we had when we also had the net.peer.ip address in all cases.
U
I'm sorry, Josh — would you expect... I know that some runtimes already do templated URLs, for example. So are you effectively duplicating what some of those frameworks would do, so that you end up with a templated URL, or — I'm sorry, when you say scrubber, I'm trying to relate that to what I already do, or what Spring already does, in terms of trying to get to templated URLs to use for the metrics.
P
So, from a client perspective, do you scrub on client connections, like an HTTP client?
P
I want — that is the thing that I want. And so, yeah, we have this thing called http.target, we have this other thing called http.url. One thing we could do is say, if somebody — there's also http.route. I'm guessing maybe a better suggestion for what I have is: we prefer when we have http.route, and we have this annotation-based templating thing that someone provides us in that attribute.
T
If I recall the specification correctly, it says that for client spans you should use http.url, as that's the only thing which is readily available, and you should not parse that URL to extract host and path and route separately. And I'm historically very pessimistic about normalizing or scrubbing your URL in the agent, especially for client spans, because, well, you call S3 buckets with millions of objects, and — I don't know — you actually have no idea what shape that URL is for clients, because you have zero idea.
P
If there's any question-mark parameters — and again, everything we're doing here is a trade-off to deal with high-cardinality concerns, right? We're trying to avoid high cardinality. So at a minimum you get rid of parameters, and at a maximum you also try to detect identifiers, or just numbers, and replace them with some known string to reduce the cardinality. So at least you get — you might be aggregating too much, but at least you get a more robust metric than you would.
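A minimal sketch of the kind of scrubbing being described — dropping the query string and stemming purely numeric path segments. Real instrumentation would need to be more careful (encodings, non-numeric IDs), so treat this as an illustration only:

```java
import java.util.regex.Pattern;

public class UrlScrubber {
    private static final Pattern NUMERIC_SEGMENT = Pattern.compile("/\\d+(?=/|$)");

    // Drop everything after '?' and replace numeric path segments with a
    // placeholder, so /users/42/orders/7 and /users/43/orders/9 collapse
    // into a single metric series.
    static String scrub(String url) {
        int q = url.indexOf('?');
        String noQuery = q >= 0 ? url.substring(0, q) : url;
        return NUMERIC_SEGMENT.matcher(noQuery).replaceAll("/{id}");
    }

    public static void main(String[] args) {
        System.out.println(scrub("https://api.example.com/users/42/orders/7?token=abc"));
        // https://api.example.com/users/{id}/orders/{id}
    }
}
```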
U
As a fallback — my fallback case, and you can tell me that this is dumb, I'm happy for that to be stupid — I allow people to specify regular expressions in configuration: if you see something that matches this, use that instead. Which lets them say: these segments, this URL pattern, whatever it is, is actually this — use that string instead. It's dangerous, right, but it allows people to specify what that URL is, so that I'm not guessing, because —
T
Any, like, recognition or scrubbing or whatnot, I think, is too expensive. Throwing away parameters after the first question mark — that's okay, that's quite cheap. When you try to, like, tokenize the URL in some way and replace something with something, that's — yeah.
U
But I'm just saying that's how I try to do this, because that way — like with REST clients or whatever — if something doesn't quite match, people then have the thing: yeah, it didn't work; if it's that, use that instead. They could specify whatever it is, and if I did it wrong, it doesn't matter — they can get what they want.
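The config-driven fallback described above could be sketched like this — pattern/replacement pairs supplied by the user, first match wins, unmatched URLs pass through unchanged. The rule syntax and class names here are illustrative, not an existing agent feature:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Pattern;

public class UrlTemplateRules {
    // Ordered pattern -> replacement rules, e.g. loaded from user config.
    private final Map<Pattern, String> rules = new LinkedHashMap<>();

    void addRule(String regex, String replacement) {
        rules.put(Pattern.compile(regex), replacement);
    }

    // First matching rule wins; unmatched URLs pass through unchanged,
    // so a bad rule set degrades gracefully instead of guessing.
    String template(String url) {
        for (Map.Entry<Pattern, String> e : rules.entrySet()) {
            if (e.getKey().matcher(url).matches()) {
                return e.getKey().matcher(url).replaceAll(e.getValue());
            }
        }
        return url;
    }

    public static void main(String[] args) {
        UrlTemplateRules r = new UrlTemplateRules();
        r.addRule("https://api\\.example\\.com/users/[^/]+",
                  "https://api.example.com/users/{userId}");
        System.out.println(r.template("https://api.example.com/users/jane"));
        // https://api.example.com/users/{userId}
    }
}
```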
P
Right, all right — let me throw out a straw man, then, because I think we want something that works across clients and servers. There's this thing called http.path, which is meant to be that abstract path for a client where it's basically like a REST client, or some kind of thing like JAX-RS, where there's almost a known pathing that you could extract via annotations or something, right?
P
We would fill out http.path, and in the instance where we have a path, we will always report an http.scheme, an http.host — name, IP address, whatever we're talking to — and then that... sorry, sorry, George.
P
Okay — so if you're not already pulling those attributes out for traces — yes — then that's a different story. Okay, so what you're saying is the http.route is never pulled for client spans, exactly.
I
And it's true, there's a couple of Java HTTP clients that have that nice route syntax, like Aaron was mentioning, but the majority of the, like, 25 HTTP clients we instrument currently don't have anything like that.
I
All right, I see. And I like the — I mean, I think this is a great option, as an option, right, because users — from looking at a lot of, you know, apps and URL patterns —
P
Right — technically we need to treat all of our attributes that way, but yeah, I agree with you, I agree with you.
I
Yeah, but unfortunately the path also contains a lot of parameters, and I've even seen actual credit card numbers in the past.
P
Okay, so, straw man: let's split it up between client and server. For client we use http.url, we scrub the query string off of it, and that would be the preferred thing for the client span. There's, like, a fallback thing according to the specification — what I'm asking is, can we just not implement the fallbacks and say: if you don't provide http.url, that's basically an issue with the auto-instrumentation? Like, do we get http.url for all client instrumentation today?
P
We should, yeah — okay, okay, beautiful, because that simplifies everything. All right, for server, I think basically what I'm suggesting is: maybe we'll do http.route if it's available. So we do http.scheme, http.route, http.host as the three that we use, and if route is not available then we'll use target, and there'll be some sort of configuration parameter to turn it off.
P
Yeah, we can — so you're talking about sampled spans. Imagine if, when you make the decision to sample a span, we still calculate a metric on the span; we just drop the span itself. That's basically what we're doing in the instrumentation agent.
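A toy sketch of that idea — every finished span feeds a duration metric, and only sampled spans are exported. The class and record names here are illustrative, not the agent's actual implementation:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MetricsFirstProcessor {
    record FinishedSpan(String route, long durationMillis, boolean sampled) {}

    final Map<String, List<Long>> durationsByRoute = new HashMap<>();
    final List<FinishedSpan> exported = new ArrayList<>();

    // Metrics are recorded for every span; the sampling decision only
    // controls whether the span itself is exported.
    void onEnd(FinishedSpan span) {
        durationsByRoute.computeIfAbsent(span.route(), r -> new ArrayList<>())
                        .add(span.durationMillis());
        if (span.sampled()) {
            exported.add(span);
        }
    }

    public static void main(String[] args) {
        MetricsFirstProcessor p = new MetricsFirstProcessor();
        p.onEnd(new FinishedSpan("/users/{id}", 12, true));
        p.onEnd(new FinishedSpan("/users/{id}", 30, false)); // dropped span, still measured
        System.out.println(p.durationsByRoute.get("/users/{id}").size()
                           + " measurements, " + p.exported.size() + " exported");
    }
}
```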
J
So, Josh, I think that until the metrics SDK has defenses against high cardinality, and until the tooling exists that allows you to drop target, we should just be defensive in the agent and not collect any attributes on metrics that are prone to high cardinality. We should look forward to this — we should, like, aspire to do this in the future — but I think until that tooling's in place, somebody's going to run into this situation.
P
Well, I mean, anyone can run into the situation now, especially if they instrument themselves and follow the rules that we set forth for what HTTP instrumentation is meant to do. So I totally understand, and I do think that defenses are the next number-one priority for the metrics SDK.
P
Now that good error messages are in place, that's the next thing that I want to work on. Yeah, okay, so I hear you.
I
If we were not in alpha for metrics, I would fully agree here. Since we're in alpha, I'm not — this would be my weak preference, but I'm okay with, like, a little bit more experimentation for the purpose of getting more feedback.
J
Well, so, you know, speaking of, like, the defenses — I think this point is moot if we can have those defenses in before the next release, right? So, .NET — I was talking to the maintainers for that, and they have an upper limit on cardinality of 2000 for each instrument. And if we can just have something simple — it doesn't have to be, you know, an elaborate defense scheme right now — but just, like, cap the cardinality, and then after the cardinality limit is reached, just drop data.
F
I mean, in general, I think the question about defenses and how you deal with it is one that we should try to tackle holistically. So from the JVM metrics side, this is something that I know we're going to run into, just from some of the work that we did at New Relic.
F
You know, there's things like allocation metrics and per-thread network traffic, which potentially have high cardinality.
F
I have some thoughts about how you handle that, and basically it boils down to: there's a kind of grouping strategy where, instead of recording the individual thread names, or all the individual cardinalities, you group things together so that the sets remain relatively small. But it feels to me like a problem which should be solved in a much more general way, rather than just trying to do it ad hoc in the JVM metrics case. So maybe let's try to get together and have a proper conversation about that soon.
K
Yeah, I think what Jack has said is that the metrics SDK itself would just have, like, a hard limit that would be implemented, so that at least there was some sort of, you know —
P
So we do have the ability to add and remove attributes, and removing attributes is one of the easy ways to reduce cardinality.
P
We actually have a thing in place where we can modify labels as they come in right now in the SDK. So in terms of the high-level API, if that's not the right thing to have exposed, then I wish we'd exposed something less powerful that solves the issue — but I think that, from a high-level standpoint, that should be the way that we do it. The weird thing here is we're talking about, like, a dynamic cardinality fixer, right?
F
I hadn't actually thought about how dynamic it needed to be. I think my initial sketch of this was: there's a regular-expression match, and basically you do some kind of stemming, so that in the case of the per-thread stuff, if you've got, you know, main-pool-thread-7 or something, that just gets stemmed back to which pool it is.
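That stemming step can be sketched in a few lines — stripping a trailing numeric suffix so every worker in a pool maps to one label value. This is a minimal illustration of the idea, not an existing SDK feature:

```java
import java.util.regex.Pattern;

public class ThreadNameStemmer {
    // Strips a trailing numeric suffix so every worker in a pool maps to
    // one label value: "main-pool-thread-7" -> "main-pool-thread".
    private static final Pattern TRAILING_INDEX = Pattern.compile("[-_]\\d+$");

    static String stem(String threadName) {
        return TRAILING_INDEX.matcher(threadName).replaceFirst("");
    }

    public static void main(String[] args) {
        System.out.println(stem("main-pool-thread-7"));  // main-pool-thread
        System.out.println(stem("GC Thread"));           // GC Thread (unchanged)
    }
}
```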
J
I think it would have to be ahead of time, you know, for what Ben's describing and what Aaron said earlier with the regular-expression matching against these URLs, right?
J
So if it's not, then the attributes are going to change shape at some point. You'll be having more and more attributes, and then you'll reach the cardinality limit, and then a grouping will occur, and you won't be able to query them in a useful way, because what used to belong to one group now belongs to a different group.
F
But it does raise a point, Jack, about — you know, potentially something which does do dynamic clustering of higher cardinality is more advanced, and it's probably not something we want to do in the first release. But should we at least warn, or have some way of detecting: hey, you weren't grouping this, but this cardinality looks like it suddenly exploded?
J
Oh, totally, yeah. I think we've discussed that — I don't think there's been any work on that — but, you know, if there are cardinality limits in place, it would be great if we could emit metrics that allow you to diagnose when those cardinality limits are reached and what the offending keys are.
P
In any case, in terms of dynamic grouping and reducing: if all we're doing is regular-expression matching and stemming of the problem, we have what we need now in the SDK. We might not have exposed everything in the public API, but in terms of the ability for you to define a view that takes a look at attributes and reduces the set of attributes, or, you know, modifies attributes on the fly —
P
— we have the ability to do that, that's baked in. All we need to do is expose the right set of — we call them attribute processors. If we expose the right set of attribute processors, we should be able to solve this specific problem for users. What we don't have is the ability to do that kind of generically in instrumentation.
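Conceptually, an attribute processor in a view is just a function over the attribute set applied before aggregation. The sketch below is self-contained and does not use the real SDK's classes — the names and shapes are illustrative only:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;
import java.util.function.UnaryOperator;

public class AttributeProcessorSketch {
    // A processor rewrites the attribute set of each measurement before
    // aggregation — here, dropping every key outside an allow-list, which
    // is the simplest way to remove high-cardinality attributes.
    static UnaryOperator<Map<String, String>> keepOnly(Set<String> allowedKeys) {
        return attrs -> {
            Map<String, String> out = new HashMap<>();
            attrs.forEach((k, v) -> { if (allowedKeys.contains(k)) out.put(k, v); });
            return out;
        };
    }

    public static void main(String[] args) {
        UnaryOperator<Map<String, String>> view =
            keepOnly(Set.of("http.method", "http.route"));
        Map<String, String> in = new HashMap<>();
        in.put("http.method", "GET");
        in.put("http.route", "/users/{id}");
        in.put("http.target", "/users/42?verbose=true"); // high cardinality, dropped
        System.out.println(view.apply(in));
    }
}
```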
P
We have to make the decision in our baseline instrumentation how much cardinality to expose — that's, I guess, what I'm getting at. Going forward, though, it sounds like — if I can recap this discussion — we should have some hard limits in place around cardinality. That's in line with what everyone else is doing, and we absolutely need to get that out the door. If no one else, like, I'm happy to work on that and get something out by next week — that's realistic for me.
P
If somebody else has time to get that done earlier, please do — doesn't matter. Second thing is, for this particular PR in instrumentation: it sounds like using the URL with no query —
P
— parameters for the clients is what we're going to do on the client. For server, we're just going to drop http.target for now, until we have a hard cardinality limit — is that correct?
P
Okay, yeah, so for server we'll use the http.scheme, host and target, which we think we have for everything, and all the weird fallback code we just get rid of, because we know in this library, this instrumentation, we have those three things or we don't have anything. Is that correct?
M
Several instrumentations — not many, I think maybe two or three — expose route at the beginning, so it might be useful to include http.route instead of target if it's present.
I
Yeah — and we will get in route; may not be for this release, but we do capture route, essentially, for everywhere. We just use it for the server span name, essentially, to have a low-cardinality server span name.
I
Cool — so, did I summarize that well? Is everybody good with that plan? This is a great call — yeah, this would make me feel comfortable, because I know for sure there's so many apps that have unbounded URL paths. But yeah.
J
Yeah, so, Josh, I can take a crack at that, if you want. So just to clarify, so we're all on the same page — because I think this matters — what a naive implementation of a hard cardinality limit might look like is just hardcoding a value, for instance 2000, just because, you know — I don't know, that's what .NET does — and when that limit is reached —
J
I believe their behavior is just to stop recording new series — so, new unique sets of attributes — but if a recording happens for an attribute set that already has recordings associated with it, it continues. So, yeah, that's how you would do that.
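The capped behavior described above — existing series keep recording, brand-new attribute sets are dropped once the limit is hit — can be sketched like this. The class name and the flattened string key are illustrative simplifications:

```java
import java.util.HashMap;
import java.util.Map;

public class CardinalityCappedCounter {
    private final int maxSeries;
    private final Map<String, Long> series = new HashMap<>();

    CardinalityCappedCounter(int maxSeries) {
        this.maxSeries = maxSeries;
    }

    // Existing series keep recording; once the cap is hit, recordings for
    // brand-new attribute sets are dropped instead of creating a series.
    boolean add(String attributeSetKey, long value) {
        if (!series.containsKey(attributeSetKey) && series.size() >= maxSeries) {
            return false; // dropped
        }
        series.merge(attributeSetKey, value, Long::sum);
        return true;
    }

    public static void main(String[] args) {
        CardinalityCappedCounter c = new CardinalityCappedCounter(2);
        c.add("route=/a", 1);
        c.add("route=/b", 1);
        System.out.println(c.add("route=/c", 1)); // false: cap reached, new series dropped
        System.out.println(c.add("route=/a", 1)); // true: existing series still records
    }
}
```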
P
That's fair. Here's where things get fun: you have two data stores you have to adapt, right? There's the delta, which gets cleared every single collection cycle and will grow and shrink, and grow and shrink, and grow and shrink. Then there's the cumulative.
P
For the cumulative, what I'd recommend there is basically: if there's a particular bit of data that hasn't had a point recently, drop it — so there'd be a clean-up cycle on the cumulative stores. If you want, you can tackle one of them and I can tackle the other, or you can do both; either one's fine.
I
Cool, all right. Yes, we are coming up on time. So, briefly: Instrumenter API, 96 out of 99.
K
That's starting next week. Oh, awesome, yeah — I mean, I think we would be on schedule for a release next week; I don't know any reason why not. Well, I did not do that, so —
K
Is it 1.9, and do we wait for the cardinality limit stuff? Anyway, that's all — no, nothing important. I would vote wait for the cardinality limit, and we will take you up on your bargain of fixing Android for bumping the SDK to 1.9 — perfect, yeah. And I just —
K
— wanted to give note: I'll probably have to take off at half past, starting next week.
I
All right, cool — we will try to get topics that are interesting to you in early. Oh yeah, call-out on this: I need to do this, I need to vote also.
I
For meeting times — I think Josh talked about this last week — but this sounds like a really cool working group for Java folks. And then, Jack, is this — should we move, can we move this to next week?
K
Throwing in an idea we had: with @WithSpan, I wonder whether it would be super cool to have an annotation for timing as well.