From YouTube: 2023-02-09 meeting
Description
cncf-opentelemetry@cncf.io's Personal Meeting Room
B: Good evening. I don't know — Jack, are you in France?
B: Welcome to the other Jack as well. While you were gone, we replaced you with Jack Shirazi.
B: Speaking of — yeah, so John won't be here today. He put it in Slack.
E: I threw these in here — I think yesterday, or the day before. I think I saw some chatter. Mateusz, has the first thing already happened?
C: The idea was — we did, and he wanted to remain. So, Ryan Fitzpatrick doesn't have enough time to support the JMX exporter in contrib, and he asked to be, kind of, removed from the project. But when I talked to him, he said that he's still going to be a component owner — so, I mean, he told me that we should leave him here.
C: I mean, he wasn't a maintainer before, so he probably doesn't want to be either maintainer or approver. Got it.
E: Okay, okay — I had forgotten about that. That's on me. But really, Ryan won't have cycles — or doesn't currently have cycles — to really even... I mean, I'm happy to help out and be a component owner, if that makes sense.
B: Have we had any PRs recently for — I think we have some very old PRs, like Carlos had sent. Yeah, there's two of them.
C: We should probably connect with the Collector people, or try to use this JAR. Yeah.
C: Be aware, if we want to do anything like that, that it changes the syntax of the metric definition files from Groovy to YAML.
B: Yeah, that'll be great. Jason, open an issue and I'll ping — let's ping — or you can go ahead and ping the other component owners for JMX to discuss. Because it does make sense to me, as opposed to investing more time in these issues, to invest time in moving us over to the new JMX Insight stuff.
E: Tuesday and Wednesday, on those two Client calls — which are generally JavaScript-focused, but I mean, we can't ignore mobile — we recently hit 1.0 on our SDK for Android, and we've expressed the intent to donate that to OpenTelemetry, and the community has been responsive and wants to help see that happen.
E: Then we were talking about a repo — where would this repo belong? And there were some people on the call — and because I'm new in that community, I don't remember names offhand, and I apologize — but it was suggested that maybe the Android SDK, because it is Java-based instrumentation, belongs in the existing Java instrumentation repo. And I threw up in my mouth a little bit and said I would talk to you all about it.
E: Yeah — but okay, just to jump in: just because it lives in the repo doesn't mean it has to share a Gradle build, right? Yeah.
B: Can you describe what it is, the Android SDK?
E: Sure. So — I mean, Watson wrote the bulk of this initially — it is a series of APIs and instrumentation for Android. "Instrumentation" being some of the Android internals: doing crash detection, and slow-rendering detection, and network-change detection. All of those kinds of mobile concerns are considered instrumentation. But then there are also some APIs which users can manually use, and it's backed by the Java SDK.
C: It's complicated. I mean, yeah — you use it as a dependency; you have to manually set up the main class, and each instrumentation that we have uses a slightly different way of registering itself. It is either a lifecycle listener, or an Application callback, or there are some network-specific listeners that Android allows you to register. Or, in the case of crash detection, we just set up the default uncaught exception handler.
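The crash-detection hookup C describes — chaining onto the JVM's default uncaught exception handler — can be sketched with plain JDK types. `CrashReporter` and its string-based event list are hypothetical stand-ins for illustration, not the real Android SDK's API:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for a telemetry sink; the real SDK records
// richer crash events, not strings.
class CrashReporter {
    final List<String> reported = new ArrayList<>();

    // Chain onto any existing handler so we don't swallow its behavior.
    void install() {
        Thread.UncaughtExceptionHandler previous =
            Thread.getDefaultUncaughtExceptionHandler();
        Thread.setDefaultUncaughtExceptionHandler((thread, throwable) -> {
            reported.add(thread.getName() + ": " + throwable.getMessage());
            if (previous != null) {
                previous.uncaughtException(thread, throwable);
            }
        });
    }
}
```

Any thread that later dies with an uncaught exception runs the handler before terminating, so the crash is captured without per-thread wiring.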
C: Then the thing we built, like, first constructs the OpenTelemetry instance underneath — inside the API, that we're gonna — we're gonna...
D: So is it fair to say that it takes a dependency on the SDK because, as it currently stands, the project instantiates the SDK itself, rather than accepting a pre-configured instance of the SDK? Yes. Is there anything about it that requires the SDK to be configured in a particular way, or is it just a convenience mechanism right now, that the project configures the SDK?
C: I think it's more of the convenience thing. So this thing that we built — we wanted it to be as easy to set up for our users as possible. And the part of the code that we are planning to donate will be much more configurable: you'll be able to set up the SDK however you want, very similarly to how all the configuration works, with functions that modify any of the builders.
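The "functions that modify the builders" configuration style C sketches might look roughly like this. Every name here (`AndroidRumBuilder`, `SdkBuilder`, the endpoint default) is an assumption for illustration, not the donated code's actual API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.UnaryOperator;

// Hypothetical minimal SDK builder; the real one has many more knobs.
class SdkBuilder {
    String endpoint = "http://localhost:4317"; // assumed default
    SdkBuilder setEndpoint(String e) { this.endpoint = e; return this; }
    String build() { return "sdk(" + endpoint + ")"; }
}

class AndroidRumBuilder {
    private final List<UnaryOperator<SdkBuilder>> customizers = new ArrayList<>();

    // Callers hand in functions that modify the underlying SDK builder,
    // instead of the library hard-coding the SDK configuration.
    AndroidRumBuilder addSdkCustomizer(UnaryOperator<SdkBuilder> customizer) {
        customizers.add(customizer);
        return this;
    }

    String install() {
        SdkBuilder sdk = new SdkBuilder();
        for (UnaryOperator<SdkBuilder> c : customizers) {
            sdk = c.apply(sdk);
        }
        return sdk.build();
    }
}
```

With no customizers the defaults apply; with one, the caller fully controls the SDK setup — which is the configurability being promised for the donation.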
D: Yeah — and I guess I was kind of riffing on what I think Trask was suggesting, which is that it seems more appropriate in instrumentation if it doesn't depend on the SDK — if it only depends on the API — because that matches what other instrumentations do.
B: Although we do have — and I was gonna kind of compare it to, or ask Mateusz to compare it to, the Spring Boot starter that we have, which does configure the SDK. It is kind of similar.
E: So I will, as always, ring the "the repo is too damn big" bell and suggest that this really ought to live in its own repo. But I would — I mean, I have to defer to the people that actually get to make that decision.
A: What was the third instrumentation? We've written down crash detection and slow rendering; you mentioned a third.
B: So I don't think it has to live in this repo, because it is really a completely different user base.
B
There's
no
overlap
compared
to
like
I
do
think
the
spring
boot
Auto
configure
should
live
here,
I
think,
eventually,
the
ahead
of
time.
Instrumentation
should
live
here,
sort
of
the
ways
that
that
people
generally
on
board
to
the
the
different
options
to
onboard
but
Android,
that's
not
like
a
different
option.
That's
like
a
different
universe.
E: Yeah — and I guess one notable difference, too, is that the use of a Collector is kind of assumed to be not really relevant. Like, I guess vendors could host Collectors for people, but mostly you have a million handsets out there and they're talking to something, right? So I guess the exporters need to be configured for every instance of the SDK.
D: Right. So, have you given contrib any thought? We've talked recently about adding other components to contrib, like the OpAMP implementation for Java — which hasn't really gotten off the ground, but that's kind of its own separate thing — and we opted to recommend that that live in contrib versus a dedicated repository.
B: No — one build; but we can split off Android. It's fine.
C: I mean, it's probably a better place than instrumentation — and it's like, we don't get a separate repo, and probably we'll have to settle for contrib.
B: A little bit — there's definitely a perception problem with the contrib repos: that they are just dumping bins where things are not maintained as well.
C: Also, the instrumentation repo has only one issue that's really about Android, and it was created by Watson. So I don't think that any of the users that we have would think of visiting the instrumentation repo.
B: Does the Android SDK reuse any of the instrumentation — like OkHttp?
C: There's a library instrumentation, though — I mean, yeah, yeah, okay. So the Android SDK is using the OkHttp instrumentation, but I don't think that the OpenTelemetry Android solution should do anything with OkHttp by default, because there are some users that probably don't use it. And it's still manual instrumentation — it's still the library instrumentation that requires a manual step — so we can't really magically introduce it so that it wires up automatically on Android.
C: There needs to be user code that makes it happen.
B: So an Android user would go to the Android repo to get the SDK, and would go to the instrumentation repo to get...
B: So the proposal would be opentelemetry-android. Oh —
B: Yeah, okay. One thing I like about that is the issues: Android users have very specific Android issues, like "how do I get this whole contorted build thing to work," and so having a slightly narrower community there for issues could be helpful.
E: I mean — another thing; it's tangential, maybe — but we've also noted that it's almost entirely written in Java today, and we think that Android users coming in would expect to find more Kotlin. And so we're probably going to be doing — supporting that as well, and/or the community will help us do porting. We'll see.
C: Also, I would also expect the set of maintainers will be entirely different. Because — I mean, we have been maintaining this for a year, or a year and a half, something like that — and the Android problems that we run into are very hard to understand when you're strictly a backend Java developer.
C: It's like a completely different world. So to me it makes sense to put it in a separate repo, just because you will have different people maintaining it.
B: I mean, that part is sort of solved by contrib with component owners. But it's not —
D: Yeah. So, while I was on leave, I used some free time to go on a witch hunt for memory allocation in the metrics SDK, and I made a lot of progress. I got the allocations down by probably 75 to 80 percent, depending on a few factors — so a big improvement there. But this PR, which is a draft, is like the final step. And so, yeah — if we were to go forward with this approach, then we could get memory allocations down by like 99.8 percent, so very close to zero.
D: It's kind of wild. And so the approach that I'm outlining here is basically to reuse MetricData instances. So, you know, every time you do a collection, you pass a list of MetricData to your exporter, and the exporter serializes those and sends them to some network location. And there's this thing we can take advantage of, which is that concurrent reads are not allowed.
D: So it's not safe if your exporter holds on to a reference to one of these MetricData after it returns — but if it doesn't do that, then you can safely reuse these and reap these memory performance improvements. So yeah, I wanted to get folks' thoughts on this. It seems kind of crazy, but maybe too much of a performance improvement to ignore.
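A toy version of the reuse pattern under discussion — one mutable point per metric storage, overwritten on every collection. These are minimal stand-ins, not the SDK's real MetricData/PointData types:

```java
// Minimal stand-ins; real point data carries attributes, exemplars, etc.
class MutableLongPoint {
    long value;
    long epochNanos;
}

class LongSumStorage {
    private long sum;
    // One reusable point per storage: no per-collection allocation.
    private final MutableLongPoint reusablePoint = new MutableLongPoint();

    void record(long delta) { sum += delta; }

    // Callers may read the returned point only until the next collect();
    // a retained reference silently observes the overwrite.
    MutableLongPoint collect(long nowNanos) {
        reusablePoint.value = sum;
        reusablePoint.epochNanos = nowNanos;
        return reusablePoint;
    }
}
```

This is exactly why an exporter holding a reference past export() is unsafe: the next collection overwrites the same instance in place.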
B: So I know that, historically, object pooling like that has gone out of favor, due to garbage collector performance improving a lot, and the cost of synchronization — whether it's making a field volatile or actual synchronized locks — being more expensive. So yeah, that's the only background that I have on this; initial comment.
D: This isn't really object pooling — there is some object pooling that's already been merged — but this is more like: every single one of your metric storages, which is responsible for aggregating measurements for a particular point, for a particular set of attributes — every one of those would have its own MetricData instance that is mutable. Not the MetricData instance, but a metric data point instance, which it updates.
E: Yeah — for what it's worth, JFR I think also supports this: they reuse object instances, and there's a flag to turn that off. And I remember at New Relic we ended up turning that off, because it caused some headache.
D: So one thing I proposed, kind of related to that flag, Jason, was potentially having MetricExporter have an ability to specify the memory mode that it's operating in. Yeah.
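The memory-mode knob D floats could be sketched as below; the enum values and the default-method placement are assumptions for illustration, not a settled API:

```java
// Hypothetical sketch of the proposed knob; names are not the final API.
enum MemoryMode {
    // Fresh immutable data each collection; always safe to retain.
    IMMUTABLE_DATA,
    // The SDK reuses mutable instances; the exporter must finish reading
    // before export() returns and must not retain references.
    REUSABLE_DATA
}

interface MetricExporterSketch {
    // The metric reader would consult this before deciding how to hand
    // off data; the safest behavior is the default, per the discussion.
    default MemoryMode getMemoryMode() {
        return MemoryMode.IMMUTABLE_DATA;
    }
}
```

An exporter that serializes synchronously could override this to opt into reuse, while every existing exporter keeps the safe behavior without code changes.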
D: Yeah — so, one other thing: I don't think we have to worry about synchronization.
D: Because — and I might be misspeaking here — the same thread that updates the state of these mutable objects will also be responsible for reading those updates and serializing them on the wire. And so it's not like one thread writes and another thread reads, and you have to worry about volatile fields or anything like that.
E: Is there any concern about a slow or stalled export delaying metric acquisition, or measurement-taking?
D: No. So right now, the way that collection works — and this doesn't change in this PR — is that when a collection happens, you have to grab the aggregated state from one of your metric storages (basically, the aggregation of all your measurements) and do something with it. And what we do currently is: we turn it into these immutable PointData objects and then pass them on. And so there's very little contention, because the hot path, which is updating the aggregated state — it only has to compete...
D
So
in
this
PR
that
Paradigm
doesn't
change,
we
just
read
the
state
and
update
a
mutable
object
from
it
and,
furthermore,
there's
only
actually
two
different
types
of
aggregations
which
have
any
lock
or
which
have
contention
at
all
or
actually
have
like
synchronization
at
all.
There's
the
the
two
different
flavors
of
histograms
are
the
only
ones
that
synchronize
the
sums
and
the
last
value
aggregators
those
all
use
things
like
long,
adder
or
double
adder
or
atomics,
which,
which
you
know
use
various
techniques
under
the
covers
to
reduce
contention.
B: So — just to explore a little bit more of the same thread being used to read and write — yeah, or write it and then read it.
D: Yeah — so the exporter can't hold on to a reference; it has to — so, the control flow is: the periodic metric reader periodically collects metrics and calls export with those metrics. And so, if the exporter, when it's called with those metrics, synchronously does the serialization before returning its CompletableResultCode, then it's all happening on the same thread.
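The safe control flow described here — serialize synchronously on the collection thread, hand only the resulting byte copy to the async sender — as a sketch with illustrative names (not the real exporter interface):

```java
import java.nio.charset.StandardCharsets;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executor;

// Sketch of the safe pattern: serialize on the caller's thread, do only
// the network send asynchronously. Names are illustrative.
class SerializingExporter {
    private final Executor sender;
    volatile byte[] lastPayload; // captured for testing; a real exporter sends it

    SerializingExporter(Executor sender) { this.sender = sender; }

    CompletableFuture<Void> export(List<String> metrics) {
        // Serialization happens synchronously, before export() returns,
        // so no reference to the (possibly reused) metric data escapes.
        byte[] payload = String.join("\n", metrics).getBytes(StandardCharsets.UTF_8);
        CompletableFuture<Void> result = new CompletableFuture<>();
        sender.execute(() -> { // async part touches only the byte copy
            lastPayload = payload;
            result.complete(null);
        });
        return result;
    }
}
```

Only the immutable byte array crosses the thread boundary; the metric data itself never leaves the collection thread.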
D: And yeah — so that does happen, and...
D: It does, yeah — so you can't really... This doesn't change that, per se.
B: So, Jack, I understand — if the metric exporter serializes the metric data immediately, in the same thread, and then sends that to the server async, right? Because then you're done — you've done the reading in the same thread.
D: It would do that — unless, you know, unless we added this memory-mode setting that I suggested, where an exporter could opt into immutable metric data points, and then, you know, it can do whatever it wants with those.
D: Yeah — and actually, I agree with that. So if we were to do a memory mode, I think it would have to default to, like, the most safe option — assuming immutable everything — and you'd have to explicitly opt in to, you know, this mutable state.
D: So the one other thing I wanted to mention: this was, like, one of the impetuses for me to do this — there have been these conversations. This person has brought up a use case for Apache Pulsar, where they want to use OpenTelemetry to monitor it, and they want to be able to support a million topics — it's kind of like a message broker system, similar to Kafka.
D: So they want to be able to support a million topics, and monitor each of those topics with about 20 different instruments — so they want to be able to support about 20 million different series. And, you know, if you have immutable data points for each one of those on every single collection, then you're allocating just, like, a staggering amount of memory that has to be garbage-collected each time.
D: So, you know, the person who's been talking about this requirement, who asked for Apache Pulsar — he's suggesting a more invasive solution, where we adjust the MetricExporter contract to use a visitor pattern: you know, you visit each one of the storages and read their data out and serialize them. And so that would change the API contract of MetricExporter, in a way that wouldn't be symmetric to the SpanExporter or LogRecordExporter.
D
So
I'm,
not
in
favor
of
that
I'd
rather
see.
If
we
can
push
the
bounds
with
the
existing
metric
exporter
API,
we
have
today
and
minimize
allocations
as
much
as
possible
and
that's
kind
of
what
what
I've
been
driving
at
here
and
I
guess
other
people
benefit.
At
the
same
time,
too,.
D: The metric data points — the lifetime of them, like — so, you know, it's variable. It depends on what you do with them in your metric exporter, and how often you do this — that you export.
D
So
you
know
the
simplest
answer
would
probably
be
using
the
defaults,
where
you're,
using
the
periodic
metric
metric
reader
with
an
otlp
metric
exporter
and
the
default
for
that
is
to
read
metrics
every
30
seconds,
take
those
metrics
and
pass
them
to
the
otlp
metric
exporter
to
serialize
and
send
off
to
a
network
location,
and
so
that,
in
that
situation,
the
lifespan
of
these
immutable.
These
immutable
objects
is
just
like,
however
long
it
takes
to
collect
and
send
to
the
network
location
which,
under
normal
circumstances,
is
just
like.
C: By the way, I just looked at the Prometheus exporter sources, and I think it is not prepared to handle, like, mutable data — because it's an HTTP server. It uses five threads by default, and each thread reads metrics, then converts them to text. But you could have several requests happening at the same time, and they will probably all run into some timing issues.
D: Yeah — we'd have to put some sort of lock in place to protect against that. There's a lock in the SDK itself that prevents concurrent reads — concurrent collections — but that lock is moot if the Prometheus HTTP server...
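The lock D mentions could be as simple as funneling every scrape through one mutex, so concurrent Prometheus worker threads wait on each other instead of racing over reused data. A sketch, not the actual exporter code:

```java
import java.util.concurrent.locks.ReentrantLock;
import java.util.function.Supplier;

// Sketch: serialize scrapes so reused metric data isn't read concurrently.
class ScrapeGuard {
    private final ReentrantLock lock = new ReentrantLock();

    // Each HTTP worker thread funnels its collect-and-convert through
    // here; while one scrape is reading the reused data, others wait.
    <T> T scrape(Supplier<T> collectAndConvert) {
        lock.lock();
        try {
            return collectAndConvert.get();
        } finally {
            lock.unlock();
        }
    }
}
```

The conversion to text must finish inside the lock — handing the reused data out of `scrape` would reintroduce the race.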
B: I really like the idea of, you know, optimizing for OTLP. We've done, you know, a lot of that hand-writing of protobuf and other things to really optimize for that.
B: Maybe I would add, like, a comment — Javadoc, you know, sort of — that would also help us in reviewing: like, "this method is only called in the same thread," or "the user of this method must be done in the same thread as the caller." Something like that, to help follow that flow.
D: Yeah, that sounds good. I just was throwing this out as a draft, and, you know, I was assuming it was going to take some conversation with folks. So yeah — just floating the idea to you all; and now that I've done so, I'll go mark up this PR with a bunch of additional explanatory comments.
E: And after you have tackled the memory overhead of the 20 million metrics, you can tackle the computational overhead.
B: Cool. Before we go, let's chat about — Jack, are you planning on 1.23 tomorrow? — Yes, I am. — Anything you need reviewed before then?
B: 1.2.11 — okay, so we can version-manage that. I'll send a PR to version-manage that, but I'm not worried; we can also release with this, it's fine.
D: Donnerbart has a PR to add resource assertions, so that you can make assertions, you know, about parts of a resource's attributes without necessarily needing to know all of them. And so I need to give this another once-over to confirm that everything's good, but I think it'd be good if we could get this in.
B: I support simplifying assertions.
C: Oh — and also, the packaging PR that I did: I will post a comment with all the changes tomorrow, so that...
C: It's a lot, but yeah — better to make this now than later, I guess.
B: And yeah — just the library ones to document. Yeah.
B: Oh, I wanted to ask — so, we are, in our — we had a user report this in our distro.
C: So, are these jobs — like, one of the jobs, one of the tasks it runs — repeatable? Because I think this Spring scheduling instrumentation instruments them, which kind of contradicts what I wrote in the spec recently: that we don't pass context to long-running repeated jobs.
C: So maybe, instead, we should just exclude the scheduled-at-fixed-rate jobs, and that will fix the — with fixedRate and fixedDelay, yeah. And the second one: there's a Spring scheduler instrumentation module, and there's a TaskScheduler instrumentation.
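The fixed-rate behavior at issue can be seen with the plain-JDK analogue of Spring's `@Scheduled(fixedRate = ...)`: a single submission produces an open-ended series of runs, which is why propagating one parent context into every run is questionable. A sketch:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// One submission, unbounded runs: each tick would be a separate "job",
// so linking every tick to the single submission context is dubious.
class RepeatingJobDemo {
    static int runRepeatedly(int runs) throws InterruptedException {
        ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
        CountDownLatch latch = new CountDownLatch(runs);
        int[] count = {0};
        ScheduledFuture<?> future = scheduler.scheduleAtFixedRate(() -> {
            count[0]++;       // each tick is an independent execution
            latch.countDown();
        }, 0, 5, TimeUnit.MILLISECONDS);
        latch.await();        // wait for at least `runs` executions
        future.cancel(false);
        scheduler.shutdown();
        return count[0];
    }
}
```

`scheduleWithFixedDelay` has the same shape; in both cases the run count is open-ended, unlike a one-shot task whose context naturally scopes a single execution.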
B: Cool — we will try that with our user. Yeah, awesome. Thank you.
B: If you see anything else that you want specifically for the next release, either tag it with the milestone, if you have those permissions, or just post on the PR, or in chat, or anywhere else.
G: If you have a minute, maybe I can bring up the PR that I published a while ago, specifically around adding caching limits.
G: So, does anyone have any particular concerns with this? I feel like having caches that rely — having weak caches that rely on the garbage collector to clear things out — can lead to a lot bigger swings in memory usage.
E: This might be a larger discussion, Tyler — I'm wondering how important it is to consider this for the release. — Okay, that's fair. Can we talk about it next time?
B: Yeah, yeah — let's throw this on the agenda for next week. I agree it's complicated; makes my head hurt — which is — and I'm sorry I haven't — I think that's partly why we haven't reviewed it yet: because it is a head-hurting topic.
B: Okay, cool — I will — let's go here. Oh, wrong copy buffer. Let's try again.
B: All right — well, we are two minutes ahead of schedule, so let's end today, and see y'all online and next time.