From YouTube: 2021-01-07 meeting
A
B
Sometimes before these meetings I try to put together an agenda. Because of the start of the year and the week's events, I'm not actually ready to give an agenda for this meeting, but we have talked about a workshop next week and I want to make sure that we prepare ourselves for that. I do have a Doodle. I haven't looked at the results; we should do that right now.
B
Perhaps, I thought, we could brainstorm a little bit about what we hope to achieve and how to break the day up, assuming most people can come for about six hours or so. I mean, or some of it; we'd like to help people who can't make it to all of it choose. That's my hope for the day, and for the meeting today.
D
Sure, I'll make it quick. Let me see. In regards to metrics-labeled issues on the spec, OTEPs, and proto repos, concentrating on the P1s: following that link, you can see it here, filtered by metrics and priority P1. We have 18 in the to-do column, two with an associated PR, and we've resolved 14 of them. Of course, this is the first update back after the holidays, so there hasn't been too much movement since December.
D
B
Let's look at the Doodle and see what people have said. Okay.
D
Oh, okay, this is big. What happened to my maximizer?
B
D
E
C
A
B
Well, it works for me. It's a little later on the east coast, or if you're farther away, but that's great for me. I think we should choose that, given that, but we can do the best. All we can do is the best we can here.
B
Okay, I created that poll. I will go finalize it right now. And then, since that was all that was on the agenda today, I'd like to propose that someone here, or I can start talking, but would anyone else like to start talking about how to organize the workshop?
C
B
Good, okay. I feel like not doing it right now, in the moment, but maybe we should do it, since this is the end of the agenda. Why don't we do that right here, then?
B
C
Yeah, sure. Again, I think that I can definitely add them to the doc, but what I'd like to see is, again, the instrument types and the support for Prometheus getting forwarded. I have also proposed a weekly working group for that, so I'm getting that set up, and, in general, you know, kind of looking at the collector.
B
Yeah, yeah. I've come to realize that right now, the place the OpenTelemetry metrics project is in is that people are very focused on our data model and our collector support, and the SDKs and the API are actually only as important as much as they have helped us define the data model that we currently have. So what I'm seeing is that if a vendor comes to OpenTelemetry and just doesn't care about our SDKs or our APIs, all they care about is our data model.
B
A
B
Okay, so at least we have a shared document now, and I intend on writing up some of my thinking about the data model question. If you were here at the end of December: it turns out I've been talking with my data platform team at Lightstep quite a bit, and we dug into some of the questions about when and whether it is ever safe to add and remove labels, and I think about the whole process of OpenTelemetry, the design phase we've been through.
B
You could kind of look at it as an iterative step process by which we started with the API, went forward, and ended up with a data model. But if you start just trying to justify the data model without talking about the process we went through, or the compatibilities that we're trying to find...
B
I think you end up finding a different explanation for the data model. This sort of came to a head when we had that meeting with OpenMetrics last month. There, we basically looked at two major areas where our data model is different, or where our data model is allegedly more capable, and one of those has to do with deltas.
B
I think that's actually the less contentious of these issues, and I propose that we don't spend a lot of time talking about deltas. But the one that turns out to be hugely important, and that I think has been unstated throughout the lifetime of our project...
B
...is that we're trying, with this data model, to get to a place where you can add and remove labels safely and meaningfully from your data. I think that this was implied by, but not as explicitly stated as it should have been by, the OpenCensus project. The mission statement of OpenCensus, the ability to dynamically configure views, really suggests that you're going to be able to change your labels, add or remove labels, and that you might have a deployment with heterogeneous label sets, and so on.
B
But
you
get
to
looking
at
open
tournament,
openmetrics
and
open
prometheus
and
and
what
they
said
and
that
what
they
still
say
is
that's
just
not
part
of
our
data
model,
so
you're,
creating
different
ins,
you're,
creating
different
metrics
with
different
dimensions,
and-
and
so
it's
it's,
and
what
I've
discovered
from
my
team
here
at
lightstep
is
that
if
you're,
if
your
data
model
doesn't
admit
what
we're
talking
about
in
openometry,
it
starts
to
look
like
open.
Telemetry
is
wading
really
far
into
your
computational
model.
B
So the way that Prometheus is able to deal with mixed label dimensions is by writing very detailed queries that pull in the distinct label dimensions and do some processing to output the data. So, in order to explain how to handle multiple or different dimensions in Prometheus, you have to talk about processing, and it's not clear whether OpenTelemetry's metrics data model is explicit enough about how to process it that we can.
B
F
Okay, "request bytes received," or something like that, is a counter because, whatever, they want it to be cheap. And then you can change that and make it a histogram, and you can preserve the name, or maybe even change the name, with the view.
F
So I think we need to separate the two problems. One is the ability for the end user to configure different aggregations and different labels than what the instrumenter intended. Re-labeling, or adding and removing labels, is a different topic. Anyway, what I'm trying to say is there may be two different problems we can discuss.
B
So you can take an instrument, and you can create a view that takes two dimensions and outputs something, but you've got to rename that metric to say "I am the two-dimensional version of this metric." And then you could output a three-dimensional one and say "I am the three-dimensional version of this metric." They have different names because they must be different in the data model of the downstream system.
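B's renaming rule can be sketched as a toy projection. The `View` type, `apply` method, and output metric names below are hypothetical illustrations, not the actual OTel Go API: one stream of measurements is aggregated under two different label subsets, each emitted under a distinct metric name.

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// Measurement is one recorded value with its full label set.
type Measurement struct {
	Labels map[string]string
	Value  float64
}

// View projects an instrument's data onto a chosen label subset and,
// crucially, gives the result a distinct output metric name.
type View struct {
	OutputName string   // must differ per dimension set
	Keys       []string // label keys this view keeps
}

// apply aggregates measurements by the view's label subset.
// The returned map is keyed by "name|k=v,k=v".
func (v View) apply(ms []Measurement) map[string]float64 {
	out := map[string]float64{}
	for _, m := range ms {
		var kvs []string
		for _, k := range v.Keys {
			kvs = append(kvs, k+"="+m.Labels[k])
		}
		sort.Strings(kvs)
		out[v.OutputName+"|"+strings.Join(kvs, ",")] += m.Value
	}
	return out
}

func main() {
	ms := []Measurement{
		{Labels: map[string]string{"service": "a", "method": "get", "status": "ok"}, Value: 1},
		{Labels: map[string]string{"service": "a", "method": "get", "status": "err"}, Value: 2},
	}
	two := View{OutputName: "rpc_by_service_method", Keys: []string{"service", "method"}}
	three := View{OutputName: "rpc_by_service_method_status", Keys: []string{"service", "method", "status"}}
	fmt.Println(two.apply(ms))   // one merged series
	fmt.Println(three.apply(ms)) // two distinct series
}
```

Because the two-dimensional and three-dimensional outputs carry different names, a downstream system that forbids mixed dimensions under one metric name sees nothing unusual.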
B
So we've got this situation where we'd like to say, "here's a data model that has more capabilities," but we know there are downstream systems that can't support it. So either we restrict ourselves and don't have these capabilities, or we define how to export data that matches the expectations of those systems. And I think that's our question: do we intend to allow it? I think Prometheus users never had this question as much as the statsd users did, because when you're recording deltas, it's really easy to add and remove labels, especially adding them.
B
It's just that, semantically, you can erase a label on a delta. But what I've discovered, and filed in some of those issues, is that you cannot semantically erase a label from a cumulative value, or else you change its interpretation in some sense. And so the area where I've had the most trouble explaining to a sort of backend person is: why do we have this?
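The distinction B draws can be made concrete with a toy merge; the `Point` type and helper names are hypothetical, not from any SDK. Summing delta points across an erased label is always meaningful (deltas are additive), but summing cumulative series breaks down the moment one of them resets:

```go
package main

import "fmt"

// Point is one (time, value) sample from a single series.
type Point struct {
	T int
	V float64
}

// mergeErased sums several series at each timestamp, as if a
// distinguishing label had been erased.
func mergeErased(series [][]Point) []Point {
	sums := map[int]float64{}
	maxT := 0
	for _, s := range series {
		for _, p := range s {
			sums[p.T] += p.V
			if p.T > maxT {
				maxT = p.T
			}
		}
	}
	var out []Point
	for t := 1; t <= maxT; t++ {
		out = append(out, Point{T: t, V: sums[t]})
	}
	return out
}

// isMonotonic reports whether a merged series still looks like a
// well-formed cumulative sum.
func isMonotonic(ps []Point) bool {
	for i := 1; i < len(ps); i++ {
		if ps[i].V < ps[i-1].V {
			return false
		}
	}
	return true
}

func main() {
	// Two cumulative series; the second process restarts at t=3,
	// so its cumulative value resets to a small number.
	a := []Point{{1, 10}, {2, 20}, {3, 30}}
	b := []Point{{1, 5}, {2, 50}, {3, 1}} // reset at t=3
	merged := mergeErased([][]Point{a, b})
	fmt.Println(merged, isMonotonic(merged)) // merged sum goes 15, 70, 31: not monotonic
}
```

The merged "cumulative" dips when series b resets, so it no longer has a cumulative interpretation — erasing the label changed the meaning of the data, which is exactly the hazard being described.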
B
This
up
down
some
observer
with
a
data
type
called
non-monotonic
cumulative
sum,
that's
distinct
from
gauge,
and
the
answer
is
you:
you
process
it
differently
and
it
you
treat
label
eraser,
erasure
differently
for
those
that
data
type
and,
and
so
the
problem,
I'm
having
at
least
facing
light
step.
Now
is:
is
we're
asking
vendors
and
we're
asking
open
source
systems
to
either
deal
with
our
more
complicated
data
data
type,
in
which
case
we
have
a
lot
of
work
to
do
to
explain
how
they
can
do
that
or
we.
B
We
need
to
erase
it
before
we
send
it
to
a
downstream
system,
and,
as
this
came
up
with
our
talking
about
compatibility
and
versioning
and
stability,
which
is
for
a
piece
of
instrumentation,
are
you
ever
allowed
to
remove
a
label?
Well,
we
pretty
cl
are
pretty
clear
that
you
should
never
remove
a
label
because
someone
might
have
queried
that
label,
but
if
you're
talking
about
adding
labels
it
it
may
or
may
not
break
something.
So
so
we
might
want
to
say
we
open
telemetry.
B
We
may
want
to
say
for
all
of
your
instruments:
it's
safe
to
add
labels
if,
as
long
as
you
declare
them
optional
labels
and
then,
if
we
could
always
know
in
our
export
pipeline
as
in
in
the
data
model,
if
we
knew
that
the
label
was
optional,
then
for
a
prometheus
exporter
you
could
just
say
erase
the
optional
labels.
Do
it
correctly
and
you'll
have
a
data
model
that
works,
I'm
afraid
that,
right
now
we
haven't
fully
explored
what
what
it
means
to
export
data
to
a
system
that
can't
deal
with
mixed
label
dimensions.
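The optional-label proposal could look roughly like this; `Instrument`, `Required`, and `eraseOptional` are invented names for illustration, not part of any spec. An exporter for a fixed-schema backend keeps only the declared-required keys:

```go
package main

import "fmt"

// Instrument declares which label keys are required (part of the
// stable schema); everything else is an optional extra.
type Instrument struct {
	Required map[string]bool
}

// eraseOptional drops every label key not declared required,
// producing a label set a fixed-schema backend like Prometheus can
// accept. Values for the erased keys would then be re-aggregated by
// the exporter before being sent.
func (in Instrument) eraseOptional(labels map[string]string) map[string]string {
	out := map[string]string{}
	for k, v := range labels {
		if in.Required[k] {
			out[k] = v
		}
	}
	return out
}

func main() {
	// gRPC example from the discussion: service and method are the
	// only always-present dimensions; "retry" is an optional extra.
	grpc := Instrument{Required: map[string]bool{"service": true, "method": true}}
	full := map[string]string{"service": "a", "method": "get", "retry": "2"}
	fmt.Println(grpc.eraseOptional(full)) // retry is erased
}
```

The point of the declaration is that the pipeline can do this mechanically, without a human deciding per backend which labels are safe to drop.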
B
F
The thing is, we have this way of configuring what we called views, which we haven't defined. But the other option would be: hey, in OpenTelemetry, the view configuration and the backend are tied together, and the OpenTelemetry ecosystem will carry whatever you want, without knowing what your backend is.
F
Whatever your view configuration is, there is a strong relationship between views and the backend of your choice, and hence we can explain a lot of decisions: for Prometheus, these are the views that you have to install; for Lightstep, these are the views that you have to install. So we can make it a configuration of the library that is tied to the backend, and then we still don't have any vendor lock-in here, because it's just a configuration.
B
F
B
We can erase labels, but I think what you're proposing, Bogdan, is more like what OpenCensus did, and I'm open to it: you have a piece of code that sort of registers "here's my standard configuration for export," in a scenario where there may be no optional labels. So for your gRPC instrumentation, you say service and method name: those are my only dimensions that are always there, and any other optional labels that might come in, for all kinds of reasons, must be erased.
B
I
think
that
that
story
works
pretty
well
until
we
talk
about
resources
and
then
resources
are
another
area
where
we've
added
to
the
data
model
and
prometheus
isn't
really
ready
for
it.
So
if
we
naively
turn
every
resource
attribute
into
a
metric
label,
then
resource,
then
erasing
labels
from
resources
and
automatically
creates
all
these
same
problems
again.
So
so
yeah.
F
But for Prometheus, resource is less important, because they have their own discoverability and their own way to discover a lot of these resource labels. Hence, usually, I would expect people to completely ignore the resource when they are scraping using the Prometheus library.
E
B
Here, thank you, Tyler. Yeah, we've gone about as far as we should on that topic. But for me it's this: we have to commit to a bold statement about labels and the ability to meaningfully add and remove them, which is, I think, a benefit for the users. That's ultimately where this comes in. I believe it's also something that statsd users are much more familiar with, particularly with counters, and I think there's a logical explanation for that.
B
It
requires
a
little
bit
of
theory
and
a
little
bit
of
patience
to
go
through
it,
and
I
don't
want
to
talk
about
it
right
now,
but
there
are
good
explanations
here
so
but-
and
I
filed
this
issue
about
safe
label
removal
and
so
on
that,
for
we
have
to
fix
our
data
model,
we
have
to
really
document
our
data
model
and
I
think
it
means
that
we're
moving
into
computational
model
to
talk
about
how
we
will
always
carefully
and
safely
remove
labels
in
order
to
support
prometheus
yep.
C
B
C
B
I feel like there's probably something about the API and SDK that we should list, although those do not seem like the most pressing matters to me right now; that's only because I've been working on data platform a lot. For the API, there have been a lot of loose ends about batch observers, I know, and there have been some nitty-gritty details about the use of synchronous instruments from inside asynchronous callbacks, and whether the SDK knows reporting frequencies independently for individual instruments.
B
All
that
seems
like
so
advanced
it
doesn't
matter
as
it
came
out
of
my
mouth,
I
was
like
blah
blah
blah
nothing
none
of
this
matters
and
that,
but
I'm
mostly
saying
that,
because
until
the
data
model
is
meaningful
and
people
stop
questioning
our
compatibility
with
prometheus,
none
of
us
none
of
our
api
or
sdk
matters.
C
F
Sure, I think it's... but so far we don't have... but yeah, it is good to. I can drive a discussion around that.
B
So there's some work to do here, either in code or in config. It's a big project, I think, to implement the code that automatically wires together all the...
F
B
Yeah, and I think this... yes, I was saying something about a computational model earlier; I think it's the same type of question. And one question might be: is there a join in your model? I hope not, actually. I think, for the most part, whenever I looked at OpenCensus, there was never a join. It was always a unary transformation of data happening, which simplifies things quite a bit, I think.
F
The
other
topic,
by
the
way,
sorry
for
interruption,
the
other
topic
is
multiple
exporters
and,
more
interestingly,
is
pull
and
push
exporters
in
the
same
same
sdk,
configuration
or
whatever.
I
know,
joshua
had
some
some
pr
and
stuff,
but
it
will
be
good
to
to
discuss
that
as
well.
Maybe.
B
Yeah, I mean, the PR, to me, works, and I was hoping to get a bunch of stuff merged so that we could write a spec about it. But if there are questions about how it works: there are so many potential configurations out there that I think, if we try to support everything in a configuration language, it'll be our death. But, you know, as an example, to configure multiple exporters...
B
You
know
we
have
conceptually.
You
have
an
accumulator,
a
processor
and
an
exporter
at
every
one
of
those
layers.
You
can
insert
a
double,
so
you
could
have
two
accumulators
and
then
completely
independent,
downstream
pipelines,
and
you
might
do
that
if
you
have
different
processors.
Of
course,
you
could
have
one
accumulator
two
processors,
and
you
might
do
that
if
you
have
different
exports,
export,
temporalities
and
so
there's
all
kinds
of
ways
to
configure
this
in
it.
And
I
don't
want
to
support
everything
and
that's
why
this
topic
scares
me.
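The layering B describes — accumulator, processor, exporter, with a "double" insertable at a layer — can be sketched as a fan-out. The types below are hypothetical illustrations, not the OTel Go SDK interfaces: one accumulator feeds two processor/exporter branches, and the branches differ in export temporality.

```go
package main

import "fmt"

// Exporter consumes finished values; Processor transforms them first.
type Exporter func(v float64)
type Processor func(v float64, next Exporter)

// Pipeline holds one accumulator's downstream branches. "Inserting a
// double" at the processor layer is just adding another branch.
type Pipeline struct {
	branches []func(float64)
}

func (p *Pipeline) AddBranch(proc Processor, exp Exporter) {
	p.branches = append(p.branches, func(v float64) { proc(v, exp) })
}

// Accumulate plays the role of the accumulator: it pushes each
// collected value into every downstream branch.
func (p *Pipeline) Accumulate(v float64) {
	for _, b := range p.branches {
		b(v)
	}
}

func main() {
	var deltaOut, cumOut []float64
	var running float64

	// Stateless pass-through: deltas stay deltas.
	deltaProc := Processor(func(v float64, next Exporter) { next(v) })
	// Stateful: converts deltas to a cumulative by keeping a running sum.
	cumProc := Processor(func(v float64, next Exporter) {
		running += v
		next(running)
	})

	var p Pipeline
	p.AddBranch(deltaProc, func(v float64) { deltaOut = append(deltaOut, v) })
	p.AddBranch(cumProc, func(v float64) { cumOut = append(cumOut, v) })

	for _, v := range []float64{1, 2, 3} {
		p.Accumulate(v)
	}
	fmt.Println(deltaOut, cumOut) // [1 2 3] [1 3 6]
}
```

Even this minimal fan-out shows the combinatorics: each branch choice (which processor, which temporality, which exporter) multiplies the configurations a spec would have to describe, which is the concern being raised.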
F
Do we have some standard components that we always install in order to support something like the views that OpenCensus has, which is a bit different? It's a per-instrument specific configuration, because, in order to support that, you always need an accumulator that does the initial accumulation.
F
B
Yeah, I think I know what you mean. Like, today, in the OTel Go prototype, there's a descriptor argument that's present just about everywhere, but we're not really using it. So the idea was: currently you can configure a pipeline, and what we're talking about with views is per-instrument configurability.
F
For a predefined pipeline, kind of like... so your model, you know, in OpenTelemetry Go, is more like you define your own pipeline. The model in OpenCensus was: the pipeline is static, it has certain capabilities, and you can configure different capabilities, or different configuration, for every instrument in that statically defined pipeline. And I think a good topic would be: do we want to offer a dynamic, configurable pipeline, or do we want to offer a predefined, statically defined pipeline with configuration at the instrument level?
F
I don't know if we are able to offer both, but this was a discussion between me and John Watson.
C
But
logan
you,
you
raise
a
good
point,
because
you
know
the
advantage
of
providing
a
you
know.
A
well-defined
pipeline
is
that
with
knobs.
That
can
be,
you
know
tuned.
Is
that
it
is,
you
know
the
basic
or
the
fundamental
definition
of
what
is
available
and
then
perhaps
dynamic
pipelines
is
an
advanced
feature
later,
but
again.
B
What's
the
exporter
and
what
kind
of
temporality
does
it
need
and
a
bunch
of
protocol
details
and
a
bunch
of
you
know,
security
details,
the
front
half
is
what
how
does
your
accumulator
work?
What
labels
are
you
counting
over
and
what
aggregators
do
you
choose?
And
it's
basically
like.
B
I
think
what
we
want
to
do
is
simplify
the
back
end
halves
here,
I'm
installing
prometheus
or
I'm
installing
otlp
or
I'm
installing,
both
of
those
and
then
the
front
half
can
be
giving
all
my
instruments
full
fidelity,
which
is
like
no
view
it's
just
all
the
data
raw
or
you
know
all
the
data
aggregated
like
by
default
and
then
a
view
is
change.
The
front.
Half
for
one
particular
instrument
or
a
group
of
instruments,
yeah.
F
Yeah, but right now, Josh, the front half is customizable in OpenTelemetry Go, in a way that I can add or remove a component in the front half.
B
F
B
F
So even that we can discuss in the meeting: do we want to offer the aggregator selector, versus we offer what OpenCensus did, which was: we don't give you that, but we give you the view configuration, and if you pass a view configuration, it says, for this instrument type and for this instrument name, this is the aggregator.
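The per-instrument view configuration F describes might look roughly like this; the rule matching and the default choices below are assumptions for illustration, not OpenCensus or OTel behavior. A rule keyed on instrument name overrides a default keyed on instrument kind:

```go
package main

import "fmt"

// ViewRule binds one instrument name to an aggregation choice.
type ViewRule struct {
	InstrumentName string
	Aggregator     string // e.g. "sum", "histogram", "lastvalue"
}

// Config is the statically defined pipeline's per-instrument knobs.
type Config struct{ Rules []ViewRule }

// AggregatorFor combines what an aggregator-selector would decide:
// an explicit per-instrument rule wins; otherwise fall back to a
// default chosen by instrument kind.
func (c Config) AggregatorFor(name, kind string) string {
	for _, r := range c.Rules {
		if r.InstrumentName == name {
			return r.Aggregator
		}
	}
	if kind == "ValueRecorder" {
		return "histogram"
	}
	return "sum"
}

func main() {
	// The "request bytes" example from earlier: declared as a cheap
	// counter, upgraded to a histogram by a view rule.
	cfg := Config{Rules: []ViewRule{{InstrumentName: "request.bytes", Aggregator: "histogram"}}}
	fmt.Println(cfg.AggregatorFor("request.bytes", "Counter")) // rule applies
	fmt.Println(cfg.AggregatorFor("request.count", "Counter")) // default applies
}
```

This is the "static pipeline, per-instrument configuration" shape: the user never assembles components, they only supply rules that the fixed pipeline consults.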
F
B
The same thing, I believe. So, yes, my understanding would be: now that we have an aggregator selector and a processor API, we can now go implement a view functionality that actually combines both of those. At the moment, when you need a new aggregator, you've got to know what type of aggregation, what kind of reduction, and what labels you need, all that.
F
G
I'll just throw out, you know, another thing that stuck with me from the conversation I had with Bogdan last week, or earlier this week (it feels like a month ago already): the possibility that maybe our SDK should only natively generate deltas, and require any cumulatives to be an exporter functionality, which would greatly simplify the SDK and allow it to make some guarantees around memory usage.
G
I understand. What I think the proposal was: maybe we could simplify the pipeline, or whatever you want to call that, by making it so the processor can't even do cumulatives, and that would be a purely exporter responsibility. We were brainstorming, right?
C
Right
I
mean
john
you're
you're,
I
mean
we
looked
at
it
because
this
is
a
good
point
you
raise,
but
it
is
today.
The
spec
you
know
is
assuming
that
it's
part
of
the
I
don't.
C
Right
because
again,
the
trade-off
also
is
and
josh.
I
think
you
and
I
discussed
this-
is
that
if
we,
you
know,
move
that
functionality
into
exporters
the
implementation
and
the
becomes
a
lot
more
fragmented.
B
Current
the
current
code
in
the
go,
which
I
think
is
a
fairly
good
model,
has
a
processor
interface
and
and
the
basic
processor
just
offers
that
all
that
all
it
offers
well,
it's
two
function
pieces
of
functionality
that
prometheus
needed
so
that
all
that
functionality
could
go
in
the
prometheus
exporter,
but
then
anything
like
prometheus
is
gonna
have
to
re-implement
it
and
that's
just
their
sort
of
ability
to
keep
memory,
which
is
what
you
need
to
to
do.
Delta
to
cumulative.
B
That's
called
the
export
client
selector
or
the
aggregation,
temporality
selector
and
that
but
that's
delegated
to
the
exporter,
so
the
processor
needs
to
know,
and
but
the
exporter
doesn't
have
to
do
the
work.
F
The
the
thing
that
I
discussed
with
joe
was
different
was
that
so
we
have
a
exporter
interface,
correct
that
some
like
stasd
can
implement,
data
dog
can
implement
and
whoever
wants
to
implement
that-
and
we
were
thinking
if
right
now,
the
the
the
temporality
calculation
or
the
temporality
computation
happens
in
a
processor.
F
But
what
can
we
have
that
logic
as
a
one
of
the
implementation
of
a
span
of
a
exporter
interface
instead
of
a
processor
interface
and
and
then
and
then
and
then
the
question
becomes,
do
we
really
need
the
processor
interface
anymore,
and
so
we
can
discuss
friday,
but
the
whole
idea
was
around
these
things
like
yeah,
so.
B
There
is
that
interface
and
go.
It's
called
export
client
selector
currently,
but
it
should,
I
think,
be
called
aggregation,
temporality
selector
and
that
so
it's
the
process
of
doing
the
work.
But
it's
the
decision
is
made
by
the
exporter,
and
this
is
another
one
of
those
areas
where,
if
you
can
have
two
exporters
to
saying
I
want
different
temporalities
and
one
processor
and
it
actually,
it
works
out
because
the
processor
doesn't
have
to
maintain
any
state
for
the
delta,
but
it
does
for
the
cumulative.
B
C
G
So I would also just caution, Alolita: especially since the SDK is barely written, we should not consider it to be written in stone at this point.
C
B
I was hoping to get that push-pull thing merged, because I think that at this point the OTel Go SDK is in a place where I'm comfortable saying "this is good, let's get to 1.0." I don't need to change it much. Of course, I'm not saying I wouldn't change it, but I don't think it needs it. So it'd be nice if we could finish deciding about this stuff, so that we can actually finish those SDKs, like you say, John.
B
Yeah,
I
feel
like
there's
a
little
chicken
and
egg
problem
here.
Is
that
we're
talking
about
how
to
organize
the
sdk
and
talking
about
simplifying
it?
But
we
don't
seem
seems
we
don't
have
a
like
a
requirement,
our
requirements
written
down,
and
I
one
of
the
library
guidelines
that
was
one
of
the
first
requirements
written
here
is
that
exporters
do
protocol
and
there's
a
processor
that
does
anything
anything
difficult
so
that
you
can
reuse
that
anything,
difficult
functionality.
B
F
The other thing, Alolita, that we want is probably half an hour going through all the PRs that are open right now.
F
Maybe do a live review, live questions and asking, just to get the ball moving on the PRs that are already there, I think.
H
Josh, were we still going to get together with OpenMetrics, or is that... no? And actually, I'd be interested to hear whether this Prometheus compatibility work has any actual interaction with the folks from Prometheus and/or OpenMetrics.
B
Hi
rob
yeah,
so
we
talked
about
having
more
than
one
workshop
in
this
month.
I
so
we've
only
made
it
as
far
as
this
next
week
idea-
and
I
appreciate
your
reminder-
I
think
but
elita
had
mentioned-
trying
to
rope
in
the
prometheus
team
or
having
a
subgroup
already.
B
That
that
we
can
get
our
agenda
straight
and
have
a
piece
of
time,
that's
just
data
model,
which
I
think
is
where
we
need
to
get
everyone
to
talk
at
the
same
time,
probably
as
I
mentioned
earlier
at
the
top
of
the
meeting
like
this
question
about
adding
and
removing
labels
is
kind
of
where
it
boils
down
to.
I
think-
and
we
had
talked
about
how
I
think
we
technically
know
how
to
erase
or
prevent
that
those
labels
from
arriving
at
a
back
end
that
can't
handle
it.
But
we
need
to
decide
that.
B
That's
yes,
that's
what
we
want
to
do,
and
then
we
have
to
talk
about
how
we
automatically
know
what
to
do,
and
that
was
why
we
ended
up
talking
about
views
for
a
long
time.
I
had
mentioned
earlier
than
that,
an
idea.
If
we
just
automatically
knew
which
labels
should
be
erased,
we
can
do
it
somehow.
We
have
to
know
which
labels
can't
can't
make
it
into
a
back
end
or
have
a
schema
somehow,
and
that
seems
like
a
an
area
where
we
definitely
need
prometheus.
C
Yeah
josh
again
just
to
follow
up
on
that.
I
did
follow
up
and
richie
is
you
know,
involved
in
and
plans
to
join
in?
This
is
the
I
just
shared
the
issue
and
I'm
gonna
set
that
meeting
up
for
weekly
so
rob.
Maybe
that
addresses
both
the
questions
you
had
for
having
the
open,
metrics
and
the
prometheus
discussions.
H
Yeah, fantastic. Maybe, like... I see that it actually calls out Prometheus, but maybe also include OpenMetrics folks, yeah.
H
B
Okay, I think my request for everyone here would be to keep this document open, and after your lunch, or after you've thought about this conversation for a while, come back to it later and see if you can express any ideas that aren't covered by the bullets. I will do the same, and then try to come back to it on, I think, Monday morning. And, Alolita, I'll try to get in touch with you about it as well...
B
That
you
are
a
stakeholder
here
and
and
then
immediately
after
this
meeting,
I'm
gonna
go
close
that
doodle
for
friday
time,
the
one
that's
most
popular,
I
believe
it
starts
at
11
30
pacific.
So
it's
going
to
go
late
on
the
east
coast
and
okay.
Sorry
about
that.
I
I
mean,
I
know
that's
pretty
hard
for
people
in
london.
So
now
I'm
having
second
thoughts
already.
I
know
richie
is
often
in
the
london
time
zone.
So.
F
A
F
B
C
J
D
C
I think when I spoke with Richie last, he didn't think 9:30 was okay, so...
B
All
right,
let's
do
9
30
friday
next
week,
since
I'm
in
front
of
it.
I
think
I
can
close.
It
choose
a
final
option.
A
B
Of course, not that I know how to use Doodle. What... okay, I think they're monetizing this page so much that I can't read it.
B
You
should
have
yep.
You
should
have
received
that
notice.
Okay,
all
right,
our
work
is
cut
out
for
us
now.
Let's
all
do
our
homework
brainstorm
on
the
agenda,
and
I
will
we'll
talk
on
tuesday
morning
after
some
of
us
have
revised
this
agenda
or
drafted
it.