From YouTube: 2022-06-16 meeting
Description
Open Telemetry Meeting 1's Personal Meeting Room
A
B
C
Hello, are you here for the Python meeting? Yep? All right, great. I'm just checking, because we had some trouble. It was weird.
C
All right, welcome everybody. Who are we missing? Aaron.
C
Yeah, welcome everybody to another edition of the Python SIG meeting. As usual, please add yourselves to the attendees list. I guess we can start now, since we are a few minutes late already.
C
Leighton, do you want to drive — track and drive? Thank you; I'd appreciate you doing that, because I only have one monitor. Yeah, no worries.
D
B
D
I think this was Jeremy's first time joining the SIG today. Jeremy, do you want to just introduce yourself a little bit? We usually just go through introductions with new people. Yeah.
E
D
Yeah, so yes, Jeremy, you already know me. Diego and Srikanth are our other Python maintainers alongside me; feel free to bother them with questions if I'm not there.
C
Yeah, feel free to ask. Welcome to the project. Oh, by the way, Aaron is also an approver. That's correct, yeah. How many approvers do we have? We also have Nathaniel — smart folks.
D
We
have
always
nathaniel
alex
all
of
them
who
don't
join
the
sig
anymore
rough
life.
D
Yeah, I think these days it's pretty much the people that you see here in the chat who have been pretty active. Python has always had a lack of resources, because people just don't like us. So yes, okay, now that we've got that through, there's a couple of topics I wanted to talk about.
D
First one: move the metrics RC2 project back to the repo project. This is just my personal preference; there are some convenience factors to it. Diego, I think I briefly talked about this in Slack.
D
My reasoning is just that we have all of our stuff in the repo project from before, and I was used to doing things this way. Oh, you already did this? Yep. Yeah.
A
C
Sorry, I didn't want to interrupt. No, I think GitHub added that feature that allows you to add a project to this project very recently, or so I noticed.
B
C
This feature was there pretty much yesterday, so yeah, it's easy to do now. The main reason why I prefer to use the new beta project — the new GitHub feature — is that it allows you to add issues from many repos in the OpenTelemetry organization. So we can make a project that also involves the core and the contrib repos, so it's easier to organize, and it also has some other nice features, like views that you can use, and such.
D
All right, cool — easy peasy, then. Yeah, adding the contrib stuff is pretty useful, too. Awesome, all right. Moving right along: it has come to my attention — I think Trask pointed it out — that I'm not fully sure whether our current RC tags are following semver.
D
I believe they're adding a dash in between the versions. Is that confirmed? Do you guys know if that's true, in terms of whether we're being compliant or not? If not, should we add it to the—
C
—next RC2? Compliant with what? Which document defines the format?
C
F
Okay, cool. So the way that Python — and pip — respects version numbers is slightly different. There's a PEP; I don't remember the number for it, but the format is slightly different. In some ways, I think our beta and alpha ones are also slightly different: there's no separator, so it would be like 1.10a0 or 1.10b0.
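The distinction being discussed can be sketched with two simplified patterns. These are rough approximations of the PEP 440 and SemVer pre-release grammars, written only for illustration — neither regex is the full specification:

```python
import re

# Simplified sketches of the two schemes (not the full grammars):
# PEP 440 pre-releases attach directly to the release: 1.12.0rc2, 1.10a0, 1.10b0
PEP440_PRE = re.compile(r"^\d+(\.\d+)*((a|b|rc)\d+)?$")
# SemVer pre-releases use a dash separator: 1.12.0-rc.2
SEMVER_PRE = re.compile(r"^\d+\.\d+\.\d+(-[0-9A-Za-z.]+)?$")

for tag in ["1.12.0rc2", "1.12.0-rc.2", "1.10a0"]:
    print(tag, bool(PEP440_PRE.match(tag)), bool(SEMVER_PRE.match(tag)))
```

SemVer spells a release candidate `1.12.0-rc.2`, while the PEP 440 normalized form is `1.12.0rc2` — which is why a tag that pip considers perfectly valid can read as non-compliant SemVer.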
F
Yeah, I guess my point is that it's not necessarily supposed to follow semver. I think it sort of works with semver, but I think we are following the PEP. I can try to find it right now.
A
F
Yes, yes — just wanted to verify. I just put it in the chat, if you don't mind opening that, Leighton, but I'll put it in the doc too. Sorry — that works. So I think, if you look at what's there, I think we're matching that, right? We have — yeah, yeah.
D
C
Yeah, I have one, sure, but it's pretty much just a minor thing. Sorry — is it time to release RC2? Yes; I am okay with doing a release right now.
D
C
I added the issues that had the RC2 tag to the project and then deleted the tag. I think those were it.
D
C
I guess it could be good to make a release now, yeah, and we can continue working on the issues, because in one month we can have either a stable, hopefully — or, if not, another RC. So yeah, that works for me. Okay, yeah!
D
C
D
Also, I suggest Diego and I try to get this automated release workflow reviewed. I've already tested it, and it's working on my fork — barring a few bugs — but this really simplifies our release process. It's literally just kicking it off.
D
C
Yeah, just a question — Aaron, oh sorry, Leighton: can you check the project, please? Just one second.
C
Okay, yeah — the issue that is in progress, can you open that, please? Oh, sorry, are we done with the items in the agenda already?
D
A
So the only question I had — I can address the other comments — the only question I had was this thing here: how do we deal with the attribute set for each of them?
A
So are we going to say: include only the ones which will always have values? Or are we going to be fine with picking the set based on the availability of the attributes in the request?
D
So this might be kind of biased, but the way that we're — what's it called — taking in metric signals, or any other signals like spans, is that we're assuming that if an attribute doesn't exist, something happened, or it wasn't available. So, at least for our exporters, we don't like empty strings or zeros, because they bypass our null checks, and then we would just have a weird value for certain things.
F
Leighton, are you talking about Microsoft — your Azure backend, specifically? Correct, yeah. Yeah, so Prometheus is actually the opposite, right? If you look at the spec I linked there, it's basically expecting — how do I put this — that all the label sets should have all the same labels, right?
F
This is probably a bit confusing. Basically, the way this manifests is: if you look at the text format, there's a comment that gives, you know, the HELP description for this counter or whatever, and then you'll have one line for each unique label set, and each of those lines should have every single label key in the specified labels — even if there's an empty value, you shouldn't not have one.
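A hand-assembled sketch of what that looks like in the Prometheus text exposition format — the metric and label names here are made up for illustration:

```python
# A hand-assembled sketch of Prometheus' text exposition format.
# Every sample line of http_requests_total carries the same label keys;
# a label whose value is unknown is emitted as an empty string rather
# than being dropped from the line. (Metric and label names are made up.)
samples = [
    {"method": "GET", "status": "200", "route": "/api"},
    {"method": "POST", "status": "500", "route": ""},  # route unknown -> empty, not omitted
]
label_keys = ["method", "status", "route"]  # one fixed schema for the whole metric

lines = ["# HELP http_requests_total Total HTTP requests.",
         "# TYPE http_requests_total counter"]
for i, labels in enumerate(samples, start=1):
    label_str = ",".join(f'{k}="{labels[k]}"' for k in label_keys)
    lines.append(f"http_requests_total{{{label_str}}} {i}")

print("\n".join(lines))
```

The point being made in the discussion is the fixed `label_keys` schema: every sample line has the same keys, with `route=""` rather than a missing `route`.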
F
I see, right. And I think potentially we're using the Prometheus client library to handle this right now, so I think it would actually handle this automatically for us — I'm not sure, I haven't checked. But the Google backend is kind of the opposite of what you said, Leighton: it will automatically backfill — it will automatically put an empty value for things you don't pass. So I suppose all of them handle it on their own.
F
But I don't know — I guess it's open for discussion.
F
I don't know — Prometheus is like the main open-source metric backend, yeah. I think it's just prometheus.io.
F
Sorry, sorry — so that's not completely true. There's also — I'd have to look at the protobufs, but they are introducing optional field presence, which would allow you to distinguish between the two. But if you look at the spec there for the attributes, it says not to do that.
C
Yeah, because I was thinking that maybe we could define a value that meant what we want to say — but at the same time, I don't know how to do that, right? I mean, because it has to be a string or an integer, right? Or a float. Yes.
A
Yeah, I mean, that's something we can do: empty strings — basically, zero values — are valid, and all exporters should be considering them; that's a requirement, right? That's one thing that we can do. Or we can, you know, ignore the labels which don't have values.
F
Sorry — I was just going to say: I want to hear more opinions, if anybody thinks this is wrong and we should just omit the attributes. But what were you going to say?
A
Yeah, I was going to say: there are a very limited number of attributes that are guaranteed to be there in the environment. We do not know under what circumstances the other attributes will not be available. So that's why I was trying to be more defensive in adding the values — because some of them could be None in the WSGI environ that we get.
A
So that's the reason I was trying to include the ones which are present — because otherwise I will only have the HTTP method, or, you know, a couple of them, and that's not really useful, I think. Yeah, I'm open to it — what do you think?
D
Yeah, now that I think about it — in terms of telemetry principles, if we drop the keys, we kind of prevent anyone else from making a decision on them, right? We're kind of just omitting information. So, at least if we provide an actual default value, any user has the choice of whether or not they want to keep it or drop it, right?
F
A
D
Right, right. So we would have to be — I don't know if this is a catch-all kind of thing. I'm just taking a look at the attributes right now, and currently they seem to make sense: if it's empty or if it's zero, this is clearly the null case, you know.
A
F
A
This is not something we can do in general — we cannot make a general principle, because zero values could be actual values. So we should have some generally applicable procedure that we follow when we do this.
F
Yeah, I think maybe I'm coming at this from more of a monitoring perspective, right? You're using your backend, you're trying to write alerts or something like that, and you're going to see essentially one line per label set, right? Yeah. And if it's missing some keys — like, okay, if this attribute is equal to this, but the attribute's not there — then your alert, or your metric backend, has to do the same thing anyway.
F
Right — it has to handle the case where it's not there. So the way that at least Cloud Monitoring does it — yeah, we just backfill it, so we don't have to worry about that, right?
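The backfilling idea described here can be sketched as follows. `backfill_attribute_sets` is a hypothetical helper written for illustration, not an actual OpenTelemetry SDK function:

```python
def backfill_attribute_sets(points):
    """Pad each point's attributes so every point carries the same keys.

    `points` is a list of dicts mapping attribute keys to values; missing
    keys are filled with "" so downstream null checks see a uniform schema.
    (Illustrative only -- not an actual OpenTelemetry SDK API.)
    """
    all_keys = set()
    for attrs in points:
        all_keys.update(attrs)
    return [{key: attrs.get(key, "") for key in sorted(all_keys)} for attrs in points]

points = [{"method": "GET", "status": "200"}, {"method": "POST"}]
print(backfill_attribute_sets(points))
# Every point now carries both "method" and "status"; the missing status is "".
```

This is the backend-side behavior being described: the exporter never sees a label set with a missing key, only one with an empty value.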
F
E
Is this about whether the value of any piece of a metric can be zero, or a specific field?
F
I mean, I think most systems probably will handle it gracefully — like I said, Cloud Monitoring will automatically do this. But if you think about a metric as having a schema, or you think about the attribute sets of a metric, it doesn't really make sense to leave them out sometimes and have them in at other times, right? I don't know — maybe I'm biased.
C
Saying that — expressing that by not including it — could work if the subsequent process can compare that to a schema and realize, okay, this is missing. So that's what they were trying to tell me.
A
D
Right, right, exactly, yeah. And it looks like this whole discussion kind of just stemmed also from what Prometheus expects, and from the blowing up of label sets. But this is kind of an error scenario, and it even says "should", right? So I'm wondering: if we have to guard against all of these kinds of things for the sake of, maybe, the integrity of the data, and also possible future breaking changes — is it worth it, right?
D
Like, in favor of not setting default values and just dropping the labels.
F
Okay, I think so. Our instrumentations aren't stable anyway, and this is right now an instrumentation discussion, so, right — I'm fine to move forward with what we have in this PR. I do think there's a broader discussion; there were definitely some issues brought up in the spec about, hey, should we have a schema?
F
I think it actually says in the spec that you can have a strongly typed object for your attribute set. I don't know if anybody's actually implemented that, though. But yeah, maybe there's a broader discussion to have here, I guess. Yeah, maybe we should do a little more research, but just move forward with this PR, yeah.
F
Yeah — sorry, go ahead.
A
That's not necessarily an issue on the tracing side, because each time you miss a label pair, it's going to create a new, different time series — which might be something of a concern for the users — but for spans, I mean, there's nothing to worry about there, right?
F
A
F
B
D
Nice. Diego, I think that's all of our PRs and issues, so you can move forward with what we want to see. Thanks.
C
Can you take a look at the project and the open in-progress issue? That's the question for Aaron. Scroll down a little bit, please. You mentioned that for cumulative readers we should keep returning the already-returned values if we make subsequent collections and, in between collections, there are no measurements read. Is that what you mean?
F
C
That's good, but what is…
F
A cumulative reader — if your metric reader is configured to be cumulative.
F
No, it's in the metric reader, right?
C
Yeah — which is a mapping between the instrument types and the aggregation temporality.
F
Yeah, yeah — I guess what I mean is: for ones that are all cumulative, like Prometheus — but I think this is generally true for any time series that's cumulative, right? If I'm saying the value 10 seconds ago was 10, and I haven't seen anything since then, the cumulative value is still 10, right?
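That behavior can be illustrated with two toy aggregations. These are illustrative stand-ins written for this discussion, not the actual OpenTelemetry SDK classes:

```python
class CumulativeSum:
    """Toy cumulative-temporality sum: collect() always reports the
    running total since the start, even if nothing new was measured.
    (Illustrative only -- not the actual OpenTelemetry SDK class.)"""
    def __init__(self):
        self._running_total = 0

    def measure(self, value):
        self._running_total += value

    def collect(self):
        # No reset: an idle interval still yields the last running value.
        return self._running_total


class DeltaSum:
    """Toy delta-temporality sum: collect() reports only what arrived
    since the previous collection, then resets."""
    def __init__(self):
        self._delta = 0

    def measure(self, value):
        self._delta += value

    def collect(self):
        value, self._delta = self._delta, 0
        return value


cum, delta = CumulativeSum(), DeltaSum()
cum.measure(10); delta.measure(10)
print(cum.collect(), delta.collect())  # 10 10
print(cum.collect(), delta.collect())  # 10 0 <- cumulative repeats, delta resets
```

The second `collect()` call is the "no new measurements" case from the question: the cumulative sum still yields 10, while the delta sum yields 0.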
C
Yeah, but the way of determining that is what I don't understand. Because, if we call collect twice for a metric reader, how do we know that we need to yield the previous value if no new values were created?
F
Created — well, measured; no new values were measured, you know. I'm saying the opposite: if new values weren't measured, then what should we do? Then you should return the previous — the current cumulative running value. Correct, yeah.
C
Yeah, yeah — sorry, that's what I mean. Yes, so my question is this: do we need to do that for some readers, but not for others?
F
So I think this would be handled in the aggregation class, right? The aggregation knows what temporality it's supposed to output. If it's supposed to output a cumulative temporality, it should still return it.
F
C
F
Yeah — so I guess I'm just talking about it from the user's perspective: they configure it on the metric reader, and that's the behavior, yeah. Does that make sense?
C
Yeah, okay — but imagine the situation where the user configures the metric reader and says that one kind of instrument will be cumulative and the other one will be delta, right? And then creates one instance of those two instruments, creates a measurement — a measurement is produced for both — a collection cycle happens, and then a collection cycle happens again.
C
I think — well, I'll obviously be submitting a PR here shortly. Cool, okay.
G
C
Is this different for forked processes, like Gunicorn, as when we were working with tracing, right?
C
What's different? Yeah, we have this — I put that link in the chat.
C
F
Right, okay — so it's implemented in the metrics, in the periodic reader. I'm pretty sure that it has the same post-fork hook. It does, yeah.
A
Yeah, it does, so that's there; we did that part. But yeah, I was going to say: we discussed this uWSGI thing, right, where the fork happens at a different stage — so that will still be an issue, I think.
A
Yeah, give me a second — I did something a few days back for the tracing and logs.
D
That was specifically because uWSGI didn't work for certain environments, right?
A
You have these records coming in to the exporter, and then you check, hey — you know, I have received this record; is this exporter in a good state? If it's not — if that's not the case — then re-initialize it, and then add the events. But with the metrics exporter, the metrics reader is collecting the metrics.
A
So there is no point in the workflow where I can hook in this setup of comparing — like looking at the current PID and the original PID. Yeah, I was going to, you know, ask people if we had any ideas around it, but now I forgot. I can create an issue for that.
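The PID comparison mentioned here is roughly this pattern — `ForkAwareReader` is a made-up name for illustration, not the SDK's actual reader:

```python
import os

class ForkAwareReader:
    """Toy reader that re-initializes its state after a fork.

    It remembers the PID it was set up in; collect() compares the current
    PID against it and rebuilds the state in the child if they differ.
    (Illustrative only -- not the actual OpenTelemetry metric reader.)
    """

    def __init__(self):
        self._setup()

    def _setup(self):
        self._pid = os.getpid()
        self._points = []  # stands in for locks/queues that go stale across a fork

    def collect(self):
        if os.getpid() != self._pid:
            # We are in a forked child: the inherited state is stale.
            self._setup()
        return list(self._points)
```

Because `collect()` is the natural entry point, the check could live there — which is one possible answer to "where do we hook this" in a reader that otherwise has no fork hook.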
A
Probably not — I think this doesn't have to be part of the RC. We can do it later, because we didn't do it for the tracing either; we eventually did it, and right now it works with many other setups except uWSGI. So yeah, I think we're fine. Okay.
D
Hey, Srikanth — what was the reason why uWSGI didn't work again? It didn't have that—
A
Yeah, so the fork hook that we are using — register-at-fork, after-in-child — will never get invoked with uWSGI, because it's the C process that's doing the forking. I see. Yes — it's forking the processes.
D
B
F
D
A
D
There's not a point at which we can actually check this and re-initialize it.
F
Okay — I mean, we could do it when collect is called, right?
A
F
That's the point, right — yeah, I understand. I guess my question is about the documentation: should we remove the post-fork thing for trace, or should we leave it around, since it's sort of maybe less hacky?
A
Yeah, that's a good point. We can get this removed for the tracing, and it should be completely removed for the metrics as well. For uWSGI, until we figure out some solution, I think we can just keep that part.
D
A question: if users were to do this, would this create a duplicate span processor?
A
Yes, that's what it does: it creates a new processor. Gunicorn gets a new processor for handling — say you specify the number of workers as four: it creates a new processor, and then each of the processes has its own setup.
A
No, it's still the same behavior, right? So when the fork has happened, it copies the whole address space, and now you have the copied address space — but the locks that were held in the parent process no longer work in the child process, and the state is not correct anymore.
A
We are re-initializing that. So now, with this, what it does is initialize after the forking has happened; in the previous case, we initialized before the forking. After the fork has happened, it has copied the whole address space, but that has some corrupt state, and we are re-initializing it. You will still have the multiple tracing pipeline setups, one in each process.
F
A
Yeah — this document, it does not, because it does not set up any tracing pipeline before the fork. There is nothing — no tracing pipeline exists there.
F
So, for instance — you know, in the metrics SDK, or in the tracer, or something like that.
A
Yeah, that's a good question. We'll have to see the behavior now. So if something — if there's a chance that a lock was held by the parent while the forking happened — there is, you know, a chance that it's not going to work in the child process, and then we will have to handle that on our own. But yeah, I'd love to try it out before confirming anything.
F
Yeah, it sounds like there are a lot of ways that we could break this and accidentally deadlock. Maybe we should try to come up with some robust testing.
A
F
A
There is this other processor, right — the one which wraps the multiple span processors — that also still has the issue. You know, there's the sequential one, and then there are the parallel span processors. It's still an issue with the — well, I don't remember the name, but yeah, we should have some, you know, robust mechanism for all of this.
D
Sorry — maybe I misunderstood earlier, but let's say a user has the latest OpenTelemetry, right, and they add this post-fork hook, right?
D
A
This code never gets executed to begin with unless — so, let's say you have added it — unless you construct the batch span processor, that hook is never, you know, added, right? The batch processor has to be initialized; only then are we registering the at-fork after-in-child hook.
A
Right — if you look at the code, the batch span processor has to be initialized, yeah. If you go a little above, in the init definition — you have, yeah, here: here we are registering the at-fork hook, right? So that means the batch span processor has to be initialized with some setup; only then is this hook added, right? But in the post-fork case, you did not even call that — until the fork happens, this setup is not invoked — so you do not have any problem there.
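The registration pattern being described — the hook only exists once a processor has been constructed — looks roughly like this. `BatchProcessorSketch` is a made-up stand-in, not the real `BatchSpanProcessor`:

```python
import os

class BatchProcessorSketch:
    """Toy processor that registers an after-fork hook in its __init__.

    The hook only exists once a processor has been constructed, and it
    runs in the forked child to rebuild worker state. (Illustrative only,
    not the real OpenTelemetry BatchSpanProcessor.)
    """

    def __init__(self):
        self._reinit_count = 0
        # Registered only because the processor was instantiated; a user
        # who never constructs one never gets the hook. POSIX-only API.
        if hasattr(os, "register_at_fork"):
            os.register_at_fork(after_in_child=self._at_fork_reinit)

    def _at_fork_reinit(self):
        # Runs in the forked child: recreate locks/worker threads here.
        self._reinit_count += 1
```

Note that with uWSGI the fork happens in C before any Python-level hook can fire, which — as discussed above — is exactly why this pattern does not cover that case.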
D
A
No, they don't. Because if they do — let's say they're using Gunicorn, and they don't have this special setup — what happens is: if they have the tracing pipeline set up in their code, they must be initializing the batch span processor, right? If they do that, this hook is attached, and because the hook is attached, it's going to get invoked — after-in-child — and this will take care of everything.
B
D
B
D
If Gunicorn creates another process, right — and this was already registered at the original instantiation of the original batch span processor — so when it copies the address space, I would assume this gets executed, right?
D
A
It is initialized — after the fork, the hook is registered, but the hook is never invoked. Understood. So the condition that you are describing will only — how do I explain this? I can add a comprehensive comment. Sure.