From YouTube: 2020-07-14 Spec SIG
B: Any more items? Were you good to go? I think you're good to go. Okay, hi everybody! Welcome to the afternoon meeting. I'm just going to share my screen and go through a review of what's been discussed and the current state. The main focus of the spec working group has been GA, in particular Andrew from Lightstep and...
B: So, if you haven't seen it yet, there's a "release required for GA" tag that's now on everything, and we've got 83 open issues. That came up as a concern last week, just pointing out that at our velocity it seemed untenable to GA this year with 83 open issues at our past rate of closure. If we closed an issue a day and didn't get any new ones, that would be three months.
B: So that was a concern raised last week, and we did some further triaging to try to divide the issues up into sub-SIGs, and the Technical Committee has been encouraged to be much more aggressive about deciding issues. Most of these issues are open because either they got stale and discussion just fell off, or they were somewhat contentious and someone has to pick a direction. As we get closer to the finish line, there's been a request for the TC to do that.
B: We've been looking for champions to help with some of this work. Three areas were identified that have a lot of issues: metrics, errors, and sampling. All three of those SIGs have a group that meets once a week, and luckily we got a volunteer champion from each group. The champion's goal is not to do all the work but to cat-herd and organize, to make sure these issues get closed and resolved effectively.
B: Andrew is still going to try to drive a regular triage process and make sure there's better triage of new issues being created. Carlos volunteered to be on point with that, with Bogdan and Riley as backup, and Andrew observing and bothering people if it's not happening. Those are really the main deals. We'd like to get priority labels on all of the remaining issues, so that we can identify which might be the best to cut, the P2s, and in general I've been feeling satisfied about a lot of this.
B: So that's giving me some confidence that we can push through this, but I do think it's going to be tight, and it is going to require the TC and spec approvers to kind of slam through what's remaining or cut scope. So that's sort of the state of OTel: we really, really, really want to GA something this year, and there's a lot of interest in making sure that happens.
B: I'm organizing OpenTelemetry Community Days in November. This will be a free, multi-day online event near KubeCon. In the past I've run a conference called the Observability Summit at KubeCon, but that was too expensive: the community couldn't show up to it because you had to buy a KubeCon ticket to get into the thing, so it was a minimum of about $1500.
B: So we're going to try and organize something like that and make it multi-day, so there's time to have a day for workshops, a day for unconference birds-of-a-feather and discussion, and maybe a day for a more structured talk track or something like that. Details TBD. That's mostly what was going on. There are a couple of specs that we'd like to close right now: exception reporting.
B: We have a very simple approach we'd like to take to recording exceptions, so we'd like to get this one closed rather quickly. If you care about exceptions and what those semantic conventions look like, have a look at this issue. The errors working group is tasked with coming up with an explanation for how errors are supposed to be recorded. This has become a big debate in OpenTelemetry.
B: It's one thing to capture an exception; whether something is an error or not is another, and that's somewhat subjective, so it's a little bit tricky. However, we do want to identify errors, because more advanced sampling systems are able, if something is marked as an error, to ensure that it gets recorded. So there's real value in being able to identify errors at runtime. It's just a question of how, especially if you're a framework author.
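The policy question being debated here can be made concrete with a small sketch. This is not the OpenTelemetry spec, just one plausible policy an instrumentation author might adopt; the function name and the cutoffs are assumptions for illustration:

```python
# Hypothetical sketch of the decision an auto-instrumentation author faces:
# which HTTP status codes should mark a span as an error? The cutoffs below
# are illustrative assumptions, not the OpenTelemetry specification.

def is_error(status_code: int, kind: str) -> bool:
    """Classify an HTTP status code as an error for telemetry purposes.

    kind: "server" or "client". The same code can mean different things
    depending on which side recorded the span: a 404 is arguably normal
    behavior for a server, but usually a failure for the client asking.
    """
    if status_code >= 500:
        return True   # 5xx: a failure on either side
    if kind == "client" and status_code >= 400:
        return True   # 4xx: the client's own request failed
    return False      # everything else is not an error by default

print(is_error(503, "server"))  # True
print(is_error(404, "server"))  # False
print(is_error(404, "client"))  # True
```

The point of the sketch is that the classification needs an input beyond the status code itself, which is exactly why the decision is subjective and worth settling in a spec.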
B: If you're writing the auto-instrumentation code that's going to get plugged in, how do you decide which HTTP status codes or endpoints count as errors? If you try to open a file and the file is not there, is that an error or not? That kind of thing. I think that's the last nugget for the errors and exceptions working group to get through; everything else is in this exception reporting spec, so please check that one out. Likewise, we've also got an environment variable spec.
B: This is pretty basic, but we've started to grow environment variables in a couple of SIGs, and it's been somewhat organic, so they don't have any unified conventions. For example, we have environment variables for setting resources, and it's a different environment variable name and structure in Java from Python. So this is just an attempt to say: can we get a basic structure for these and make sure they're all the same?
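A sketch of what one unified convention could look like. `OTEL_RESOURCE_ATTRIBUTES` with a `key=value,key=value` structure is used as the example variable name here, and the parser is illustrative, not any SIG's actual implementation:

```python
import os

# Illustrative sketch: one OTEL_-prefixed variable with the same name and
# the same key=value,key=value structure in every language SIG, so Java
# and Python read resources identically.

def parse_resource_attributes(raw: str) -> dict:
    """Parse 'key=value,key=value' into a dict, skipping malformed pairs."""
    attrs = {}
    for pair in raw.split(","):
        key, sep, value = pair.partition("=")
        if sep and key.strip():
            attrs[key.strip()] = value.strip()
    return attrs

os.environ["OTEL_RESOURCE_ATTRIBUTES"] = "service.name=checkout,region=us-east-1"
print(parse_resource_attributes(os.environ["OTEL_RESOURCE_ATTRIBUTES"]))
# {'service.name': 'checkout', 'region': 'us-east-1'}
```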
B: We have to get a hustle on this summer. Once we get over the GA finish line, especially for the spec, it can maybe relax a little bit, but everyone's got ants in their pants to get the GA out. I would suggest the TC consider potentially expanding membership at this point, to bring that comment to you, Sergey, and I think that would potentially help parallelize some of this effort, but yeah.
B: It's mostly just elbow grease and getting the work in, but it's good to have Andrew around as an engineering manager helping with some of this, because I think that's providing a little bit of oversight. At least if things are getting ignored, he's tracking and identifying them all, so we can knock them down. So that's...
C: ...sort of where it's at. Yeah, there's lots of really good stuff there. I think that wasn't me, but yeah, definitely it is pretty noticeable that a lot of spec PRs have been sitting stagnant for quite a while. If there's a possibility to expand the tech committee, it seems like there are not enough people there able to focus.
B: Honestly, the core OpenTelemetry crew probably hit a bit of burnout and fatigue this year, because we've been going hard for so long. I did notice that as well, myself included: I became so overcommitted that my ability to close and comment on specs has evaporated. Yep.
C: Other people too, and me, because at Google we sort of switched the team that works on OpenTelemetry. It's a totally different set of people now, so we don't really have anyone with a huge amount of experience yet. We could look at trying to groom people for that role, which will help, but it will also take a while before we get anyone.
B: But there's a bit of: yes, we do need the TC to close these issues, but there's plenty of room. If people want to be dedicated to responding to spec issues, I do think we can fast-track people getting approver status. Another way to do it is to just get involved with lots of comments on issues and stuff, and we'll get you made approvers, if that's what you want.
B: And we have also divided approvers into these sort of by-signal approvers, people who are specialists for metrics, for example. We've actually subdivided some of the approving: if you look in that list, there are extra approvers for logs, metrics, and tracing, because the expertise is somewhat diffuse. In particular, all the tracing people didn't really have a great handle on metrics, so that's a different crew, and then the logging people have all rolled in recently and they're a whole new crew.
B: So that's the one thing I'm just mentioning: there are these subgroups, they're all on the calendar, and there's metrics, errors, and sampling. Those are three areas that we just want to get done. So if anyone from your crew can attend those meetings or wants to help push those things over the finish line, we'd appreciate it.
B: She's been particularly helpful, but yeah, I get the impression she's coming in as a manager and trying to get a lay of the landscape and figure out what has to happen to get the thing pushed over the finish line. I think their goal is to get it pushed over the finish line. In particular, I do know Amazon initially got involved because they're really interested in all this logging stuff, like Google: they really wanted to get rid of whatever it is you all are currently using as a logging agent.
B: So they want to kill that thing off and replace it with a more efficient logging agent, and they want the OpenTelemetry Collector to be that. So they want to push logging into OpenTelemetry, and we said we're interested in that, but we have to GA tracing and metrics before we can really dig into logging in a super meaningful way. In the meantime: go experiment, write specs, figure out what you want to do, and do some work in the Collector.
B: If you look at lines of code, there are orders of magnitude more lines of code written against the instrumentation APIs than against the SDK APIs or anything else. So a breaking change to the instrumentation API, like the tracing API or something like that, anything that would create even a version conflict, gets really dicey. So those APIs become very, very sensitive to instability once they GA.
B: We also want the SDK to be stable, but it's less of a concern, because if we do have to make some kind of improvement, which surely we will (you look at frameworks and they're always improving their hooks and observer things), the amount of code that has to get written if you break those is a lot smaller, and it's a lot easier to upgrade as an end user. The people who maintain those things are more like the core maintainers of OpenTelemetry, because it's people writing span processors and exporters and stuff.
C: That's good, yeah. Yes, I brought that one up quickly. It's been around for a while, and I've had a few different discussions with a few different people, but it leaves me with a question around what direction I should take this in in order to make progress now. The reason it's kind of important to us at the moment (and I think it would probably be important to AWS as well) is that at Google we have all the interns working on OpenTelemetry at the moment.
C: One of the key things that they're trying to do is make resource detection work out of the box, so that OpenTelemetry can get resource detection on GCE in a standard way, and then we can export that to Cloud Monitoring, or other people can do the same for their backends. But yes, it's a little bit interesting: there was a fairly reasonable amount of discussion on it around what kind of approach we should take, but there wasn't, I don't know...
C: No one really seemed to have a really strong view on the way that we want to do it. From my point of view, the two biggest open questions, which I think I put in a comment on that proposal, are: one, do we want to include vendor-specific resource detection, so GCP and AWS, in the SDK, or do we want to have that in separate packages?
C: And secondly, do we want some of that detection to run by default, or do we expect users to have to write a couple of lines of code to configure it? I think I've heard different opinions from different people, and I'm not too sure which way to go right now.
C: And Morgan was looking for it to run by default. The other thing is that Christian made a comment near the end of that proposal that was a fairly radical suggestion, I would say: maybe none of this stuff needs to be in the SDK. We don't even need to add anything at all; literally all of this could be achieved by just having vendor packages that will return you a Resource object, and then users can use the existing API.
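Christian's "separate packages" option can be sketched in plain Python. The `Resource` class and `detect_gce_resource` function are hypothetical stand-ins, and the metadata lookup is faked rather than calling a real metadata server:

```python
# Sketch of the "separate package" option: a vendor detector lives outside
# the SDK, returns a plain Resource object, and the user merges it with the
# existing resource API. All names here are hypothetical.

class Resource:
    def __init__(self, attributes: dict):
        self.attributes = dict(attributes)

    def merge(self, other: "Resource") -> "Resource":
        # The later resource wins on key conflicts, a typical merge rule.
        merged = dict(self.attributes)
        merged.update(other.attributes)
        return Resource(merged)

def detect_gce_resource() -> Resource:
    """Stand-in for a vendor package: sniff the environment (faked here)
    and return whatever it found, or an empty Resource off-platform."""
    metadata = {"cloud.provider": "gcp", "cloud.zone": "us-central1-a"}
    return Resource(metadata)

# The user wires it up in a couple of lines against the existing API:
base = Resource({"service.name": "checkout"})
resource = base.merge(detect_gce_resource())
print(resource.attributes)
```

Nothing in the sketch requires SDK changes, which is the substance of the suggestion; the trade-off is that the user has to trigger the detector manually.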
B: Yeah, for sure. I think a related concept that's been developing, and that I'm trying to promote, is the concept of an OpenTelemetry distro. I'm not sure if I pitched that in this meeting before, but we are seeing, you know, AWS and Azure (I don't know about Stackdriver), and many backends don't or won't be able to use the vanilla OpenTelemetry stuff. They need their own custom sampler and their own exporter in order to work.
B: I think the only rules about a distro are that it shouldn't prevent a user from being able to get under the hood and do the regular configuration after it runs, and it can't fork the SDK. You're allowed to fork the SDK, you can make your own SDK, but that's a separate thing: you've made your own SDK, and that's not an OpenTelemetry distro.
B: Second one: yes, for each language installation there's some concern, right? If you need a certain kind of sampling and a certain kind of exporter, you can give instructions to your end user to clunk all that together. But what if they don't? What if they forget to install the sampler? We built a really flexible system, which is really good, but I think that flexible system now has to get encapsulated in various wrappers that are a little bit simpler.
B: We're actually proposing one at Lightstep. We're trying to be as OpenTelemetry-native as possible, and we've managed to achieve it, but there are just some little squiggles, like you have to set a gRPC header and some other stuff. We were thinking it would be nice if users could use a YAML file or something, so we've started to create just a very basic wrapper that doesn't have anything Lightstep-specific, but it cleaned up the config code so much that it just looked great.
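A minimal sketch of the wrapper idea, assuming a hypothetical `configure` helper and made-up setting names; in practice the mapping might be loaded from a YAML file, as suggested above:

```python
# Sketch of a distro-style wrapper: collapse the flexible-but-verbose SDK
# setup into one configure() call driven by a small config mapping. The
# keys and defaults are hypothetical, chosen only for illustration.

DEFAULTS = {
    "exporter": "otlp",
    "endpoint": "localhost:4317",
    "headers": {},
}

def configure(user_config: dict) -> dict:
    """Merge user settings over the distro's defaults, so the distro picks
    the sampler, exporter, and headers and the end user doesn't wire them
    together by hand (but can still override any of them)."""
    config = dict(DEFAULTS)
    config.update(user_config)
    return config

# A vendor distro only needs to override the squiggly bits, e.g. a gRPC header:
pipeline = configure({
    "endpoint": "ingest.example.com:443",
    "headers": {"vendor-access-token": "abc123"},
})
print(pipeline["exporter"], pipeline["endpoint"])
```

The key property, matching the "rules about a distro" above, is that every default remains overridable: the wrapper adds convenience without taking configuration away.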
B: And you can see, yes, if you want to make a distro that's got your sampler, your exporter, your resource detectors, the whole suite of stuff to make it work on Amazon, it's like: install the Amazon flavor and then continue to configure it with whatever extra junk you want to do. Yeah.
D: So I'm currently focused on Java, and I've managed to make this distro thing for Java, but it required adding in a lot of hooks, and Nikita is also like: we haven't defined these hooks yet, we're just sort of doing it ad hoc. He's never happy about that sort of thing, but we probably will need to formalize what exactly a distro is and sort of what customization points need to be there.
D: One point is the auto versus manual distinction. Manual will be easier, because then someone just has to run the code, but auto tends to not have configuration knobs, and so that also has a different aspect. I would say that's probably part of the distro question: what's an auto distro versus what's a manual distro.
B: And do we want some kind of configuration file, like a default? The Collector has a configuration file, apparently, and you could think about picking a standard format and a standard configuration file, and then that would kind of allow the auto-instrumentation stuff to be more sanely configurable. But yeah, I agree, auto-instrumentation has to become more configurable than it currently is.
B: ...that you don't need to, because all you're doing is setting resources. You don't need to put it into the SDK, right? But the user would have to trigger it manually, so there's just a question of whether it gets triggered automatically or not, whether it's all bundled together. For me, it comes down to: does that create dependency issues? If there are no extra dependencies for detecting all these different resources across all these different systems...
B: I don't know if that's true or not; I don't know the actual list of things everyone's trying to check. But if it doesn't require hauling in proprietary API packages and stuff like that to sniff this information out, then I would say that's a good candidate for going in the core and running by default, with the user allowed to turn it off if they don't like it. I think that would be a great experience for end users, actually. Yeah.
A: Well, I think the question was where this code will be located: will it be in a cloud-provider-owned repository or in the contrib repository of OpenTelemetry? I think it's a similar question to how we were discussing samplers, when we said that samplers may belong to the core repository even though they're vendor-specific: they're so small and tiny, and people will probably want to use them, that we want to centralize them, put them inside our place. I think maybe detectors specifically would go there as well.
C: I'm okay with merging. There are a couple of open questions, so what I would probably do is message the people that have approved and just check if they're okay with the direction I want to take this in before I write it up in big strokes; otherwise it'll be a drastically different spec depending on what kind of decisions I make. From listening to this discussion, I'm erring towards saying that the resource detectors should all be vendor-specific and be packaged separately to the SDK.
B: I personally would encourage it all getting baked into core. I mean, you can say AWS is a vendor and Google's a vendor, but there are only so many clouds, and to me the only question is more of a technical-feasibility question that I don't know the answer to. For me personally it's not a philosophical question, if that makes sense.
C: Yeah, it does. If we did want to look at running some of that stuff by default, though, I feel like that would kind of raise a bunch of other questions. At the moment it's not that many cloud providers, but you could see in the future that more major cloud providers might appear, and there are a lot of smaller ones as well. Do we want to run all of those by default? Should they all go in the SDK? What kind of rules do we place on it?
B: This actually pertains to auto-instrumentation as well, because there's a question of which packages automatically get installed. Is it just the core ones? Is it our contrib? Or is it just anything anyone on the internet wrote? The last one seems kind of dangerous, but it's the most useful one, assuming you're not accidentally downloading spyware or something horrible.
B: So that's another example of distros, right? Do you want the everything distro or just the core distro? You could see there just being an open-source kitchen-sink distro that threw in all the auto-instrumentation and all the resource auto-detection, and it's sort of like: step one, try the kitchen sink, get used to that, and then once you get your feet wet and understand what you need, go get a streamlined version of the thing. Yeah.
A: I didn't want to spend a lot of time on zPages. zPages and remote configuration, which is the next topic, are intern projects here at Google. It started because zPages is a well-known concept, and many companies and many projects are already using it, so we just want to enable progress on zPages as an experiment.
A: So we have the experimental folder in the specification, and I wanted to start pushing the first specifications about zPages there. I wanted to warn everybody that those are, again, experimental. It's not even an alpha version, it's an experiment, so treat it as such. The main purpose of the experiment is to make sure that interns, or whoever else wants to participate, can get very fast to a proof of concept and demo, and then we can take all this work and start merging it as a real implementation.
B: What if you replace it with C++? And of course the answer is blazing performance, great, way better performance. So they did set up some performance work, and it's left over for them. I don't have that intern anymore; he's going to come back as an employee, but he's not coming back till September, and Brian, since he's a contractor with limited hours, I don't know if I can get him to champion this. So I was curious...
B: ...if performance testing could even potentially be an intern project, or if people at Google were interested in this sort of stuff. The main thing we need is just a proposal for how to do performance benchmarking. It can be artificial benchmarks; it's not like "see how well a Spring app works" or something like that. But performance testing is always tricky to set up and get accurate results from, right? So there are tricks, and so I'm...
B: ...looking for someone with a performance-testing background to just write a basic proposal of what benchmarks we should have for each language, and what a proper test harness and testing environment look like to ensure you get accurate results. So I'm just curious, Sergei, or anyone on the call: are you interested in this, or do you have resources you could put at this particular issue?
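One shape such a proposal could take, sketched with Python's standard-library `timeit`. The no-op span workload is a stand-in for a real tracer, and taking the minimum across repeats is one of the usual tricks for getting stable numbers on a noisy machine:

```python
import timeit

# Sketch of the kind of artificial micro-benchmark the proposal would pin
# down: measure per-operation overhead of a no-op "span", repeated several
# times so the harness reports a stable best case. The workload below is a
# hypothetical stand-in, not a real tracer.

class NoopSpan:
    def __enter__(self):
        return self
    def __exit__(self, *exc):
        return False

def workload():
    with NoopSpan():
        pass

def bench(fn, number=100_000, repeats=5) -> float:
    """Return the best per-call time in seconds across several repeats.
    Taking the minimum filters out scheduler and GC noise, which is part
    of what makes performance testing 'tricky to set up'."""
    timer = timeit.Timer(fn)
    return min(timer.repeat(repeat=repeats, number=number)) / number

per_call = bench(workload)
print(f"no-op span overhead: {per_call * 1e9:.0f} ns/call")
```

A full proposal would also have to specify the environment (dedicated hardware versus cloud VMs, pinned CPU frequency, warm-up runs), which is exactly the expertise being asked for here.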
A: I think we're as interested as everybody else, and everybody understands the importance of it, but I don't think we have anyone free. The interns have already started their projects, so it's too late to change, as far as I know. Maybe I can stand corrected, but I'm pretty sure it is, so yeah, I don't know right now.
C: Yeah, no, I don't really know. I'm sure it is important to us, but with everything that's involved, yeah, all the interns have already started their projects, as far as I know. We do have a new person starting at Google in a few weeks, but I think we're intending to put them on the other thing in that document, the OpenCensus compatibility work, because we need someone on that. Yes.
B: Good point. Oh, Riley, he seems to be really interested in performance, yeah, and since you two are at Microsoft, hopefully you can provide a stable platform for running these tests. It's one of the problems with cloud, right: it's kind of a crap place to do performance testing, as far as I'm aware. I mean, it's better than my laptop, I guess. But anyway, there's doing the actual work, which could be an intern, but what I'm really looking for is an expert just to write the proposal.