From YouTube: 2022-05-13 meeting
A
A
B
B
C
D
B
D
Five-minute trailer... yeah, it was so weird, what's going on? Like, I don't... I guess maybe you don't call it a trailer. It's...
D
A
Well, after my glowing review of everything yesterday, everything has gone to crap today. I'm trying to track down a very tricky context propagation issue internally, inside a homegrown futures framework that also involves gRPC and...
A
What else... oh well, it's got OpenTracing instrumentation. I found at least one context that wasn't getting closed, but I still can't figure out how to actually get things to propagate so we actually can get JDBC calls.
B
A
B
A
It's unclear. There's a lot of... it looks like there are multiple generations and then layers of things going on, so I'll... I'll take... again, I guess I can throw the dice and see what's happening.
A
A cool feature, an idea that I didn't know about in the debugger: if you have a breakpoint, you can choose not to suspend at the breakpoint, but instead make it log information about hitting the breakpoint, including a custom message you want to log. Which is super useful in this case, where I don't want to put a suspending breakpoint at the context close or scope close, but I do want to be able to actually, like, count.
A
E
B
A
D
A
E
A
Yeah, no, it's a fair question. Well, I started on the golang side, and ripping it out would have required me to learn a lot more Go than I felt like I wanted to.
A
On the Java side, I was hoping that the shim would just work and we could move on, and then eventually work on replacing it. I mean, I'm hoping not to spend an enormous amount of time on rewriting all of the instrumentation. There's not that much instrumentation, honestly: all there is, basically, is gRPC client/server, HTTP client/server, and then trying to interact with this internal homegrown futures framework.
A
So I've wrapped every executor I can find in the task-wrapping executor, hoping that would do it, including the one when you're creating your gRPC channel, the ManagedChannel.
A
If you don't give it an executor, it'll just internally use, like, a bare ForkJoinPool or something like that. So I had to... I found that one and I thought, oh, this would be the last one, it'd all be fine. But no, it didn't help at all. I'm sure it's something inside the homegrown futures framework now. So yeah.
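The executor-wrapping approach described here can be sketched in plain Java. This is a minimal illustration of the general technique (capture the context when a task is submitted, restore it on the worker thread), with a plain `ThreadLocal<String>` standing in for OpenTelemetry's real context object; the class and method names are invented for the sketch.

```java
import java.util.concurrent.Executor;

// Minimal sketch of a context-propagating executor wrapper.
// A real implementation would capture the OpenTelemetry Context;
// here a plain ThreadLocal<String> stands in for it.
class ContextPropagatingExecutor {
    static final ThreadLocal<String> CONTEXT = new ThreadLocal<>();

    static Executor wrap(Executor delegate) {
        return task -> {
            String captured = CONTEXT.get();    // capture at submit time
            delegate.execute(() -> {
                String previous = CONTEXT.get();
                CONTEXT.set(captured);          // restore on the worker thread
                try {
                    task.run();
                } finally {
                    CONTEXT.set(previous);      // don't leak into pooled threads
                }
            });
        };
    }
}
```

Wrapping every executor handed to the gRPC channel builder (and anywhere else work crosses threads) with something like this is what makes the calling context visible on pooled threads.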
E
Anyway, yeah, I'm really interested to hear more, as you find out more about the OpenTracing/OpenTelemetry bridging, because I'm still trying to get a feel for how much of a pain, or how feasible, the bridge with Micrometer Tracing/Sleuth will be in the future.
E
Like, kind of... my theory is that bridging tracing is a lost cause, as opposed to bridging metrics, which seems much more doable. But I would love to... that's just based on almost nothing.
A
E
B
For bridging, okay: OpenTracing has a context provider. They have interfaces for implementing context, similar to what we have, while Brave doesn't. So that's one big difference.
E
E
And asking if, on the SDK autoconfigure side, it made sense to have a disabled flag, basically to give you the no-op.
E
And Jack kind of mentioned that... I think they had found these already, and I guess I had one follow-up question, Jack: once you do this, do you get no-op spans?
C
I would have to check the implementation of that. Metrics has this little optimization that says if there are no readers present, always return a no-op meter provider, and I'm not sure off the top of my head if tracing has that same optimization. If it did, then you would have no-op spans.
B
B
A
Well, if we installed a zero percent sampler, would that get the same functionality? Because even with a zero percent sampler, we still generate span IDs. Yeah, I guess so. So that might be a way to hack around it. It wouldn't be pure no-op, but it would be maybe slightly more no-op-y.
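The zero-percent-sampler hack could be expressed through the SDK autoconfigure properties. A sketch, assuming the standard `traceidratio` sampler names: spans still get valid IDs, but none are recorded or exported.

```properties
# Keep the SDK installed but sample nothing: spans still carry
# trace/span IDs for propagation, they are just never exported.
otel.traces.sampler=traceidratio
otel.traces.sampler.arg=0.0
```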
A
B
C
B
A
E
Yeah, Evan said as much. It was sort of just an idea based on... he had seen the Java agent's enabled flag. Yep, yeah, yeah.
C
Yeah, it would be nice. If it's at the autoconfigure level, then, you know, operators of applications can just add an environment variable, disable the SDK, and see if it was impacting performance, if they knew their users were using autoconfigure, yeah.
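A sketch of what that operator workflow could look like; the exact variable name is an assumption here, since the flag was still under discussion at this meeting.

```shell
# Hypothetical: flip one environment variable to get the no-op SDK
# and re-measure, without touching application code.
export OTEL_SDK_DISABLED=true
java -jar my-app.jar
```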
C
I think that's something, you know, long term, that we should really encourage everyone to use. Those environment variables are specified, and it's the standard way that you configure kind of simplistic configurations, and so I think that's a safe assumption.
A
D
A
If autoconfigure automatically configured the Jaeger exporter to work with whatever was provided... got it. Now, it wouldn't work 100%, because we don't have the local UDP exporter, I mean, we haven't implemented that, so it wouldn't be 100% reliable. But if you're using the Jaeger collector, it could work. I mean, yeah, it could easily work.
A
E
Will you remind me... the no-op... we have a no-op SDK? That's different than... I thought there was, like, a no-op SDK in an incubator or extension.
E
E
B
C
That lives in extensions: no-op API, for reference.
E
E
Carlos never made it, sadly, but I did repeat his PRs.
B
C
Oh yeah, I think there are kind of two related conversations. So I was trying to understand what this person Justin was doing with Micrometer and why the Prometheus HTTP server doesn't work for his use case, and it's something like this: he has Spring Boot with Spring Boot Actuator, which sets up Micrometer, and he's interested in getting the trace data from OpenTelemetry, and he's interested in a couple of, like, the internal metrics that we're collecting in the batch span processor.
C
So some OpenTelemetry metrics, but for most of the metrics he's interested in, he just wants to keep using Micrometer and Spring for those. And so, yeah, there's this last response that he had, where he kind of explains...
C
I think, why he's reluctant to use the Prometheus HTTP server. Like, I doubt it wouldn't work, but it's something like this: Spring Boot Actuator provides these discovery endpoints, so you can discover where things live, for example where Prometheus metrics are exposed on this application. And so, if he were to use our Prometheus HTTP server in his Spring Boot Micrometer setup, which has its own Prometheus endpoint, there'd be two endpoints, and the one coming from OpenTelemetry wouldn't be discoverable through this Spring Boot Actuator mechanism.
C
And, you know, presumably he would have to change his infrastructure setup, whatever is discovering the location of that endpoint, to be aware of this additional OpenTelemetry Prometheus endpoint.
B
I think contrib is a fine location for the other-direction bridge, if it does get completed. Like, we used to have this functionality in the SDK itself, and I do think we would rather have users use OTel as infrastructure than Micrometer, at least as far as the SDK is concerned, and that's why having the other version in contrib seems like a good separation. Whereas we do want the Micrometer shim to work well; we need users to use it to point out all the bugs so that we can fix them.
B
C
So we could, yeah. So then, he did open a PR to contrib, and I'm inclined to accept it as long as he's willing to be a code owner for it. And that's kind of the other thing that we were talking about: do we have any kind of standards or criteria for when we accept something into contrib?
C
B
E
E
Yeah, I think my feeling right now is it's still early days and we don't... we don't really have a problem.
A
E
As long as it's something that makes sense, I think it's okay to try it and see what kind of adoption it gets.
E
Oh sorry, oh: requirements for pulling components into contrib.
B
E
D
But, like, we would prefer them to use the recommended one.
B
That works better with our instrumentation, so it's sort of more strategic that people use the Micrometer version, but people will probably need this other one as well, so it's fine. To that point, I don't know about adoption.
B
B
B
E
Right, yeah, yeah, it might make sense. I could add something to the readme once it lands, yeah.
C
Speaking of the Micrometer shim, I don't know if you saw, but I added an example for the Micrometer shim to the examples repository.
E
Oh, this was the other thing Evan raised, which was just a good find. We all agreed that we should fix it: we weren't populating something in our gRPC request object that we pass around in the instrumenter.
E
E
So I thought that was cool: a real-world library user who wanted to use the attributes extractor customization hooks.
D
D
E
We talked a bit about this. It was based on Jason's comment in the picari PR, and we sort of just discussed how some other languages, especially ones that don't have as many instrumentation maintainers, have been more proactively pushing instrumentation into native or upstream libraries, and that this is probably something that should be on our radar, at least.
E
B
E
Yeah, so I guess it depends. There's "native" that's really embedded, right, a transitive dependency of the core project, and then there's things like how Couchbase or the Azure SDK have implemented theirs, where it's still an alpha artifact: their module is still alpha, and it's an additional thing that their users can bring in.
A
E
...that distinction, or are you just thinking about the overall...
E
B
Like, if that distinction exists, it's okay. I just don't know how we would identify a library that has the distinction; at least I don't know how I would, I guess. But if we could, then that does seem like an okay approach here. But if a library is used to just publishing stable stuff, then they probably really wouldn't be interested in adding an alpha artifact to what they publish.
B
E
E
B
Instrumentations, yeah. I guess, maybe, like, gRPC doesn't do this. If you were to go to gRPC and ask, can we add an alpha artifact that's OpenTelemetry-native instrumentation, that would be an awkward conversation, in my opinion. So there might be projects where it isn't awkward, but I think a lot of them would be.
E
B
E
B
I want HTTP and net... well, it should be stable soon, I hope, and then once that happens, then moving some... like, it shouldn't be a year from now, I think, so hopefully it's sooner than that. So that's probably the time to then think about this, but I personally wouldn't think about it too much before that.
E
And then Emily joined from the MicroProfile team, and so they are, I guess, just internally struggling with, like, what if users... I guess, because they were... oh yeah, because they were using OpenTracing.
E
Sadly, they had adopted OpenTracing, and so now they're adopting OpenTelemetry and trying to understand, if users use both, what kind of bad things can happen. And we shared, John, your recent positive words about the OpenTracing shim, but she was still... I think it sounded like she was less concerned about "could users do that and bring in the shim"; I think she understood that was a good migration path. But it sounded like the sticking point for them was really, like...
A
A
They seemed to interop just fine from my perspective. Everything I've tried, the interop works great: if there's an existing OpenTracing shim span in the context, the SDK picks it up the same as anything else, and it works if you just use regular OTel instrumentation as well. Are you okay?
E
Yeah, it sounded like the problem, or the concern, was actually having OpenTracing instrumentation plus instrumenting the same thing with OpenTelemetry. Yeah, yeah. Well...
E
B
D
B
E
Yeah, it was not super clear, and we were over time, so I asked if she could bring a specific example next week, yeah.
E
Though she seemed to be semi-satisfied with the idea that using two was worse than using one.
E
I didn't let my hidden demons show. It was a good job that you had just given a thumbs up about that.
A
E
Well, yeah, also I shared that it's actually a spec'd component, and I didn't realize... actually, now I remember seeing when it happened, but I was thinking it was not stable yet. But yeah, it's even stable. So, you know, it's something real that we would stand behind.
B
B
Yeah, I think people ask us to make our artifact stable, but we still hope for a bit more active maintenance of it before doing that, I think. But it's supposed to be stable, I guess, not alpha, so that might be a concern for me. So we would have to get it out of alpha if we started talking to people like MicroProfile about "this is what you're supposed to do for your OpenTracing users"; we'd be saying that without it having really been stable. So that's honest, I guess.
D
A
B
A
B
A
I think there are issues with baggage: we don't really handle OpenTracing baggage, hardly at all, I think. That's probably where the issues are.
C
I hadn't dove into this before, but it's not that complicated. I like it.
E
Yeah, well, by now your bar is at the metrics level of complexity.
C
E
C
I'm trying to simplify it. There's a PR I have out that gets rid of a lot of... I think, where the apparent complexity was from. And the complexity was, you know, just making it incomprehensible, and actually, you know, there was a bug as well, and so I'm hoping that for the next person who has to dive in, it's a little easier to understand.
E
I have a metrics question: the time window, the timestamp of the metric collection. Do they all... like, say you're collecting 10 instruments?
C
So that's actually, like, a question that I was just... I was actually just changing the answer to in a PR that I'm working on. And so, at this moment in time, each meter is given its own timestamp, and so if you're doing a collection and you have three different meters, each one will have its own unique timestamp, but all the instruments within those meters will be consistent.
C
I didn't really like that answer, and I thought about, you know, having each individual instrument have its own timestamp, but that didn't sit well with me either, just having a bunch of timestamps that vary within a collection. And so I think that we should have a consistent timestamp across the collection, but, you know, it's not entirely correct.
C
So what could happen is, you know, say you mark the timestamp before you start the collection; then you can have measurements that happen after that timestamp was recorded, and that's the problem with that approach.
E
And do you know if the spec has something? Because the reason I asked was also that my teammate on the Python SDK was talking about something similar today, and I need to follow up with them, because I didn't quite understand if it was this same question or a different question.
C
Yeah, I don't know. I don't think the spec has anything to say on this, but, you know, I can look, and ask a couple of the other maintainers that are metrics-oriented.
A
Well, happy days: while we were chatting, I got the last little bit working with the internal futures framework. Turned out that it was internally using completion stages and async stuff from a CompletableFuture. So I needed to save the calling context and then wrap the functions that were being submitted to the CompletableFuture with that context, and then it all worked. But I had to go deep, deep into where it was actually doing...
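The fix described here (save the calling context, then wrap the functions handed to the CompletableFuture) can be sketched with a plain `ThreadLocal` standing in for the real context; the class and names below are invented for illustration.

```java
import java.util.function.Supplier;

// Sketch: capture the calling context when the work is submitted,
// restore it inside the function that the CompletableFuture later
// runs on some pool thread.
class FutureContext {
    static final ThreadLocal<String> CONTEXT = new ThreadLocal<>();

    static <T> Supplier<T> wrap(Supplier<T> fn) {
        String captured = CONTEXT.get();       // save the calling context now
        return () -> {
            String previous = CONTEXT.get();
            CONTEXT.set(captured);             // restore on the executing thread
            try {
                return fn.get();
            } finally {
                CONTEXT.set(previous);
            }
        };
    }
}
```

Then something like `CompletableFuture.supplyAsync(FutureContext.wrap(() -> doWork()))` sees the submitter's context even though the supplier runs on the common ForkJoinPool.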
A
E
B
Any time I use CompletableFuture, I just use the version that takes my executor, out of principle. So then I just mark that as the context-propagating executor and don't have to do it for all the functions themselves. Yeah, the problem... oh, well, the whatever-async... whatever. So it accepts an executor.
A
Oh, the async call, like this, yeah. What is the call anyway that's happening here? thenCombineAsync, or thenApplyAsync, thenRunAsync here, yeah. The funny thing was, I was passing in the executor everywhere and it still wasn't getting the context propagated properly. So I'm not sure... I have a feeling that what was happening was, by the time it hit that code, the calling code, I don't know, had already finished, and so there was nothing in the context.
B
B
B
C
So is your one executor then just, like, a shared thread pool across your whole app, or whatever?
B
Normally it'll be just a direct executor that is wrapped with context. If you want to use a thread pool, then you would use a thread pool, but if you're only concerned with context propagation, a direct one works okay.
B
Before the instrumentation release, I had my one PR to update dependencies, which ran into a mysterious muzzle failure that I probably still won't be able to debug today. But the only dependency that would affect the agent is just a small patch on Byte Buddy, so I don't think that blocks the release, even though it would be picked up.
E
B
E
Yeah, so this one we were going to merge, but wanted to check if you wanted to look at it or not. Only because of that... you could merge, that's what I think.
B
E
E
Yeah, for good and bad, everyone has been burned by Micrometer embedding metrics.
E
C
B
I don't think anyone's going to do it, which is fine. It's all about the usefulness of the instrumentation. So examples are the easy one to talk about; hopefully there are other features than just examples. But, like, if you use OpenTelemetry, you get those two instrumentations interacting with each other, and that's quite enabling, I think. Even if you don't get free APIs, you still have to implement your own interceptor API or whatever; it's not that bad, I guess.
E
A
A
E
To add it, or, you know, as opposed to this... there was, I think, for some languages... the dream may still be alive in OpenTelemetry, to get OpenTelemetry natively, really baked into, you know, some of these libraries, where it's just always there and it's no...
B
A
Well, I have to run, but hopefully see you next week.
A
E
E
I think we're good. Was there anything else you wanted to chat about, Jack?
C
I do have one refactor PR that I've been working on, and there's a few other things that are kind of stuck behind it. It's not critically important to review it, but if I had to choose one place to draw your attention to, it would be that one.
C
And I talked to Josh about it, and he's aligned. So, you know, he's been playing around with a similar refactor locally that does the same types of things that I do here, and then he has this additional goal of trying to see if he can reduce allocations on the hot path.
C
I don't know what his thoughts are on that, but presumably they can work in conjunction with this simplification, not against it. So yeah, anyways, you know, if you don't get to it today, it's no problem, it'll be back next week, but if you do, that's great.
E
B
E
Oh yes, thank you for working on that clearing thing.
B
E
B
A
B
E
We're going to hold... we did discuss it today. It's...
E
No, so this is about, in, like, distros, where based on one setting you want to configure some other settings.
E
And the way that both Splunk and, as I was mentioning, we were just comparing notes on this this morning, the way that today we're both solving this is via the config property source: we just read the property we want to base our decision on, directly from system properties or the system environment.
B
B
E
B
E
Yeah, what do we call it in ours... just Config? Oh right, right.
E
Yeah, I will dig and see if there's an issue where we had maybe discussed this and summarize it there, and if not, open a new issue, because, yeah, we should have that.
C
I like those get-or-default methods as well. I wouldn't mind having those in the autoconfigure at all.
B
E
Yeah, okay, yeah: I will push that conversation back to the top.
E
B
D
E
Okay, I will start... kick that conversation off today, so that matteish can weigh in tomorrow and we can decide if that changes our RC designation or not.
C
B
E
Good, I'm glad you thought of that, yeah. In the instrumenter API? Yes, even worse.