From YouTube: 2022-04-14 meeting
A
Yeah, I know it. I'm down in the Bay Area at the Splunk offices and I'm ready to go.
A
Yeah, turns out after not being in the office for a few years, it's hard to go back, yeah.
A
That's how it felt, so maybe I carved out more time than I thought. I felt like I did so little this week and I felt really guilty about it.
B
So not just local to our area, but parts of Texas this past week had firestorms and fire warnings, tornadoes, and I think the very tip had a little bit of a blizzard, all at the same time.
B
Worst of both, like, I can't believe that's happening. So west Texas had fire warnings and firestorms, east Texas had tornadoes, and north Texas in the panhandle was dealing with a cold front and potentially a blizzard.
A
Yeah, cool. Where are we? We're at two minutes past, I'm guessing. Yep, there's Anthony. We could probably start; I'll try to share my screen.
A
I think this is working. Yeah, perfect, cool. I am on a laptop now, so I... oh, I still can't see your beautiful faces. I swear, one day we'll get good at computers, but until then you'll have to put up with me. Cool, so jumping in.
A
I wanted to talk a little bit about releases and the metrics update. But just to start, if you haven't already, which I see everyone's doing, make sure you add yourself to the attendees list, and if you have anything you want to talk about, please add it to the agenda as well. Or if you have some cool user stories, we'd love to hear them, and I will add that in there in case you wanted to put it on there.
A
But everyone seems to have been to these meetings before, so you know the rules. We'll get to the end, and I'll blankly stare at you and ask that same question. But yeah, cool, to start us off the agenda, I just wanted to touch base on the release.
A
1.7: there have been a lot of patches to 1.6, which, you know, was a spicy change, and there were a lot of spicy things that went with it, some of which I don't think we realized. Did we talk about the OTLP issue last time? I think we did, but in case anybody missed it, or for anyone on the call now: there was an issue from 1.6.2 all the way back to 1.6.0.
A
I think... no, I think it might have just been 1.6.2. And go ahead, Aaron. Sorry, 1.6.1! That's what you're saying. Oh.
A
1.6.1, where there were compatibility issues between certain versions of the collector and certain versions of the SDK. That is all resolved if you upgrade to the latest of either or both, both being the preferable option. So yeah, if you have something where you're not seeing data come through, that's a good thing to try, especially for people who are watching this on the video recording.
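For readers following along, the upgrade being suggested here is roughly the following, assuming a Go-modules project on the standard module paths (the exact set of modules in your go.mod may differ):

```
go get go.opentelemetry.io/otel@latest
go get go.opentelemetry.io/otel/sdk@latest
go mod tidy
```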
E
To be a little pedantic, but to be clear: there was no problem with any of our releases; 1.6.1 worked fine. What happened was, if you updated to OTLP 0.15 while using 1.6.1, which was possible, it would compile and work just fine, but the data that goes over the wire would change, because OTLP 0.16, sorry, 0.15, moved some of the fields around. 1.6.2 got it back to the point where the data going over the wire was the same as it was before.
E
The collector was supposed to deal with that and didn't, and that's a miss, and it should be fixed in the next collector release.
A
Cool, yes, that is a good point. I do remember this now, which again kind of goes back to: Go dependency stuff is kind of hard, especially if you're pulling in things that, you know, you aren't directly dependent on here. I guess there's no, like, maximum version...
E
...that's stable, and it isn't, and it was in kind of a weird spot. But I think where we're at now, after this change, and it should be happening in the very near term, is that OTLP will be marked version one and stabilized. The logs data model is stable; I think Tigran has some final changes he may want to make to the logs OTLP, but once that's stabilized, OTLP is going to 1.0 and we will not be making breaking changes.
A
That's exciting! It's really exciting, because I think since, like, 1.10 we've said it's stable, so.
A
Yep, which in theory wasn't supposed to break, but, like you said, the collector support wasn't there, so that's also getting fixed. Cool.
A
I think that kind of summarizes where we have been. Where we're going, I think, is the 1.7 release. To talk a little bit about this: there's not too much feature-wise we were trying to get out here; semantic conventions, I think, are the main thing left. There's an in-memory metrics exporter that has a PR tied to it here. If you haven't already taken a look, please take a look, especially if you're a user and you want to run metrics.
A
This is going to be something that you can use in your testing environments to validate that you're producing the metrics you expect. So we'd love feedback on the usability and the shape of that API. Aaron, I'm guessing from what I saw that you're essentially copying what we kind of had already for the API, with that new method as well, but yeah.
A
If anybody has had requests like this in the past, you know, feel free to chime in there. And since it's a test package, it's not a really sensitive package; it can have a little bloat too.
B
Yeah, what I would call out there is that it's a package meant for testing, and it helps... I tried to go do an audit of how contrib used metrictest, and that's why I landed on those two extra methods for searching records.
B
And the goal is that the upcoming SDK will also have this kind of functionality, where you can easily create an actual SDK. I want to kind of pull back to why we're approaching it this way. There used to be a package called metrictest in the API, and that somewhat made sense, in that parts of the API were actually, like, concrete objects that you had, not just the interfaces, the shapes of things, in the old way. That meant part of it kind of made sense to have a test structure. But the actual problem with that was that those test structures weren't actually using a real SDK, so if you were relying on the test to actually verify that what you put into this is what you're expecting out of it, it didn't work.
B
It led to a lot of false positives, or really more a lot of false negatives, where it said it passed but it didn't; like, something changed under the hood. So that has been replaced with: you use an SDK to test. If you need the metrics collection portions of this, you have a fixed entry point in the meter provider and a fixed exit point in the in-memory exporter.
A
Yeah, yeah, and this is really similar, if you've used it, to the trace test package and the way we approach it there, where you use the default SDK. And it depends on whether you care about the dependencies you bring in with the SDK: if you don't, you can just test right in your code, and if you do, you can do it externally, like a test package, which is what we do. But yeah, I agree: it's really great to have this mock SDK, but then you have two SDKs that you're actually testing, and one is used in production and another isn't. So yeah, really good idea, cool.
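For reference, the fixed entry point / fixed exit point pattern already looks roughly like this with the existing tracetest package; this is a sketch of the trace analog, since the metrics equivalent was still in a PR at this point:

```go
package example

import (
	"context"
	"testing"

	sdktrace "go.opentelemetry.io/otel/sdk/trace"
	"go.opentelemetry.io/otel/sdk/trace/tracetest"
)

func TestProducesSpan(t *testing.T) {
	// Fixed exit point: an in-memory exporter that records finished spans.
	exp := tracetest.NewInMemoryExporter()
	// Fixed entry point: a real SDK TracerProvider wired to that exporter.
	tp := sdktrace.NewTracerProvider(sdktrace.WithSyncer(exp))

	_, span := tp.Tracer("test").Start(context.Background(), "operation")
	span.End()

	if got := len(exp.GetSpans()); got != 1 {
		t.Fatalf("expected 1 span, got %d", got)
	}
}
```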
A
That sounds good. I lost the window I was sharing, so go ahead.
B
So here, let me open that up so I can share it. The next topic is: I just opened a task, or an issue, around the benchmark job attached to main. It has been consistently failing for...
B
...I don't know, I think since we introduced it. It's not very consistent in anything that it tests; the failing tests aren't very consistent. Realistically, this is: do we need this? Do we think we can actually turn this into something useful, some useful signal, or should we just get rid of it? Like, this is maybe premature at this point.
B
We don't need to have the discussion right this second; I just wanted to bring it up and ask people to come leave comments, and/or volunteer to put more effort into making this something that is useful. Because right now it's just consistently failing, and if you go look at, like, the commit history, there's a lot of fails on main, and most of them are the benchmarks.
A
That's a good point. Personally, I think we should probably take it out, and there's a lot of thought, I think, in that. The look of having everything fail is not really the end of the world, but it's not great to show the community, because we are reporting that on the readme, and, like, it's in other places. So yeah, that's probably not a great idea.
A
It doesn't really provide value, because, like you're saying, those benchmarks seem to be noisy, I guess, and not consistent from run to run. Which, again, kind of goes back to... for those of you who weren't here, there's a really great joke where Tigran asked the CNCF board if we could get hardware to run benchmarks on, but it turns out they said no. So I think you always have the noisy neighbor problem, because we're always going to be running this in the cloud, right?
A
So it's something that is always going to, I think, plague benchmarks. The original idea was to make it a little more useful, where it would check your pull requests and make sure there weren't regressions in performance, and I would like that; I think that's a really valuable thing. But I don't think this is it. Like, I think that if we tried to do the thing we originally did, where we would have it running on every PR and had some sort of upload to some sort of, like, graph...
A
...we would just have a bunch of failures on the PRs as well as on main. So I think we need to probably investigate this a little bit more, but I would be comfortable pulling it until that is addressed.
A
I also would like, I think it's Damien, is his name, to comment on this, because I think they were the one that added it, and see if they have any thoughts, because I'm guessing they have more context than I do. So yeah, I think that's the approach. I don't know if anybody else has any ideas.
F
Yeah, this has actually come up in the contrib repository kind of recently. There have been some flaky tests over there around, like, load tests and benchmarks and stuff like that, and there was discussion of, like, hey, should we take these out for a second, because they're constantly failing all the time, just like we're talking about now. Alex, I think, is his name, codeboten.
F
I can post a link to that site here, and for anyone who's watching the recording, we'll definitely put it in the doc, and there's an issue as well that he's got written up with some of the history about it. But I really like that idea, the idea of, like, being able to see, because that's what the benchmarks are useful for, right? Like, for every release, every commit that we make, how does our performance change over time, versus, like, a strict...
F
So I really like the things that he's posted in that issue, this concept of being able to view the performance results over time. And since it's already kind of an established pattern in the community, at least one other SIG is doing it, and now the collector contrib is talking about doing it too, it would be kind of cool to just continue to carry that pattern over through other SIGs.
E
I think these Python benchmark trend graphs are created through the same tool that we're using; they're just not using it as a pass/fail kind of gate, just using it to generate these, and that's probably the way we should go as well.
E
If you look at the Python ones, they're hugely variable; you know, they swing almost 100% at times. So we can't really use them to say, you know, if there's an x percent change in this benchmark, we have to fail the run. But maybe they're useful for long-term trending. I think this is still looking like a whole lot of noise, and it's hard to tell; maybe there are a couple cases where something plateaus, you know, goes really high for a little bit and then comes down. Maybe there was something there that was...
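One way to quantify that run-to-run noise locally is benchstat from golang.org/x/perf, which reports variance across repeated runs; this is a sketch of the general workflow, not necessarily what the CI job itself runs:

```
go test -run='^$' -bench=. -count=10 ./... > old.txt
# make the change under test, then:
go test -run='^$' -bench=. -count=10 ./... > new.txt
benchstat old.txt new.txt
```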
F
Tigran, in the issue in collector contrib, I think, mentioned the ask for hardware again. I don't have all the history; it sounded like the ask for hardware was a long time ago, and it sounded like maybe he would go ask again, or maybe we could ask again, or someone else could. I don't know. We're getting, like, close to, you know, 1.0 releases and stability, right?
F
We've got people asking all the time, like: what's the performance of using OpenTelemetry? What's the performance of the collector? What's the impact if I add your libraries into my code and run them at scale? If we can't, you know, answer that, but we're doing stable releases, that's not great. So, like, maybe we can make a good argument for that now that we're trying to get stable.
A
So yeah, I want to touch on a few points. One of the things is, like, the graph: as Anthony said, I think we initially looked at that, and it does require secret access to be able to post, and so we were a little concerned initially, off the bat, about that. So yeah, I think we'd want to validate that Python has looked through that and has also okayed it, and that it's not just something they were unaware of. So I definitely think that's important, to Anthony's point.
A
I do want to make sure that, like, noise is noise. So if you're failing a test, that's one thing, but if you're looking at graphs and there's noise, I want to make sure that we have value there. And I also know that in the collector they have, I think they call them, like, soak tests or something like that, and that may be the answer as well.
A
I would love it if the CNCF gave us hardware, but I mean, it is the Cloud Native Computing Foundation, so, like, they're really not... The response to Tigran the first time was, like... I don't think they laughed at him, but I think it was close. So I'm not hopeful to get hardware from them, but I think Tigran's response was then: okay, well, I have dedicated hardware in my home, or somewhere offsite, where we could have a team member just run it. That's maybe something that we could try to do.
A
I don't know if that's feasible, but as you're saying, Tyler, I think it's a good point. As we come to, like, this stable... well, one of our signals is stable already. Being able to tell users the overhead that they're going to be incurring on certain systems and certain, you know, CPUs would be really helpful and would help adoption. So I think it's an important point. Is this something, Tyler, that you'd be willing to pick up? I know Damien is also interested in it.
A
Cool, yeah, it's much appreciated. Like you're saying, I agree it's a very important topic. So if you wouldn't mind, just comment on Aaron's issue, and maybe we can assign it to both you and Damien, or one or the other. I think that's the way we can go forward on this.
A
I think this is not it either, man; I have no idea where my screen went. We'll just go with desktop one. Cool, back to the agenda.
A
I think the only other thing is the metrics update, and cool, we have 40 minutes, because I imagine it's going to take that long. All right, or hopefully it doesn't take that long. Yes, I was kind of wondering where we're at on that. I'd like to get a better understanding of progress and see if we can see where we're at. Let's just start there.
B
Where we are at: I wanted to create a document kind of detailing the other half of the story from last time; that got pushed around because of, well, really, outside issues. But the status of it is: we currently have Prometheus exporting, so the SDK through Prometheus is working appropriately, and I've created the standard out exporter, and we've gotten pathways...
B
...for that. It does add a little bit of complexity, in that we have to have something that sits between. So the metric spec is vague: it has a definition of a thing called an exporter and a thing called a reader, and in some cases they're kind of the same thing, right? For example, Prometheus technically isn't an exporter, because it doesn't do any of the serialization from the internal types to what Prometheus wants; it does it on demand, when the reader calls it. It actually acts as a reader.
B
So we've clarified how the Go SDK will define a reader, and the challenge there is that readers are actually a thing that does reading, not something that you call to read something, right? So it's not like an io.Reader, where it has a read method; it's meant to go and read all of the different... it's a metric reader.
B
So that became clarified, that became more solidified, and then the exporter interface was solidified into what we would traditionally call an exporter, which is what we call it in the traces world: when you're dealing with an exporter, it is something that takes a bundle of trace data and serializes it into the wire format for whatever type of exporter it is, right? That is now an actual solidified concept.
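A rough sketch of the reader/exporter split being described; the type names and signatures below are conceptual placeholders for this discussion, not the final SDK API:

```go
package metricsketch

import "context"

// ResourceMetrics stands in for whatever aggregated-data type the SDK
// produces; it is only a placeholder for this sketch.
type ResourceMetrics struct{}

// Exporter serializes a batch of collected metrics into a wire format and
// sends it, in the same sense as a trace exporter.
type Exporter interface {
	Export(ctx context.Context, metrics ResourceMetrics) error
	Shutdown(ctx context.Context) error
}

// Reader is the thing that does the reading: it collects from the SDK and
// decides when data flows onward. A Prometheus "exporter" actually fills
// this role, serializing on demand when it is scraped.
type Reader interface {
	Collect(ctx context.Context) (ResourceMetrics, error)
	Shutdown(ctx context.Context) error
}
```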
A
Evolved, but it's probably similar, like, conceptual things, I guess. My question is: how can we get it in the repo? I think that is the key thing.
B
So the next steps that we have here are going to be putting in more tests, because right now our test coverage is probably, like, 20%, so we probably have a lot of edge cases beyond the ones I've found already.
B
So we're still doing that, and then we want to rebase this into kind of a much more sane PR, similar to what we did with the API, and we're hoping to get that done by the end of next week. So that's, what, Friday the 22nd.
A
So one of the things I was thinking about, Aaron, is: can we start a feature branch in the upstream, and we can essentially build the full metric setup there? Because in a feature branch we could have it in a broken state as we continually build things. So if we want to, we can just put, like, interfaces in, and then, you know, we can review pull requests for each...
B
...interface. I'm up for that, if that makes it a little easier to digest. I think we're probably at a point where we can marshal on with that. I know Josh has a branch. I'm actually not sure how I would add a branch to upstream; I know I can create a branch in my fork. I'll figure it out: I will push that branch kind of as it is right now, and then we can start working from there. Okay, so it won't...
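For reference, pushing a local branch to the upstream repository (given write access) is roughly the following; the remote and branch names here are illustrative:

```
git remote add upstream git@github.com:open-telemetry/opentelemetry-go.git
git push upstream my-local-branch:new-metrics-sdk
```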
A
Yeah, I think that'd be ideal. So if you push a branch... and I mean, I think you could even do two branches: one could be, like, the full working prototype example, and then, I don't know, we could even do another where we are doing reviews on each part of the code and working those into, you know, parcelable sections even. But yeah, because, like, it'd be really nice to see a full complete example, like Josh already has, and then, yeah, to understand each section.
A
Maybe talk about it, because it's one thing to just submit, you know, six thousand lines of code, but it's just...
B
Exactly. One of the reasons why we wanted to do a full rebase is because we're pretty far behind main at this point. But we wanted to tell the story similar to the API, right, where there's a pull request that just removes the old SDK and removes any kind of broken tests, right? And then we kind of build the SDK back up.
A
So we could do that, right? Like, you could build the SDK back up in this new branch, and then when we're done, it's really easy to just say, okay, now here's a pull request to move that branch into main and at the same time delete it, because we know the code in that branch has already been reviewed. So you don't have to re-review...
A
Yeah, and I think the reason I was thinking about it this way is: if we have it so we start to build the SDK in its own branch, we can start breaking it down, because it's going to be broken, you know; there are not going to be implementations behind things, which is fine, and we can get to a working state. But you can also say, okay, so now we can build a milestone or a project on it and help track it. And I think that's also, like, once you build interfaces, you could have...
A
You
know
multiple
people
working
on
it,
especially
if
you
have
a
working
example.
So
multiple
people
try
to
be
like
reviewing
as
well
as
contributing
to
it.
So
I
think
that's
a
good
way
to
do
it.
I
think
once
we
get
that
branch
up
aaron,
we
should
probably
try
to.
I
don't
know
sink
on,
maybe
some
issues
that
we
can
create
to
just
say,
like
here's,
the
next
steps
of
how
we're
gonna
like
parse
this
apart
does
that
make
sense
I
mean
I.
A
...can help as well. I have no problem doing that, but I'm guessing you have a lot more context than I do. Yeah.
A
So the thing is, like, I bet you we are, right? Because I think Josh was sending data; like you're saying, he's sending Prometheus data. So that says to me you have a working example, even if you're not feature complete for the specification, right? Like, it's something we can work on, right? So I...
E
I know, I think that's definitely the way to go. One thing I might suggest is: let's enable branch protection on that branch, but just require PRs, so that way, when it does come time to merge it in, we can have confidence that everything that went onto that branch went through a PR. There's some audit log in GitHub, and we know how it got to the state that it's at, so we don't need to go back and re-review those 6,000 lines.
A
I strongly support that, although... okay, yeah, we'll have to... yeah.
A
Sorry to bring that up, but okay, we'll get it done. Okay, cool. With that, the next thing on the agenda: Anthony brought up intern season. It...
E
...is intern season. We are not going to have as many interns at AWS working directly on ADOT as we have in the past. I think last year we had three cohorts and, you know, 27 interns across them. It's looking right now like I'm going to have five that are going to be assigned to me, and I've got some ideas for things for them to work on. But I'm also curious: does anybody have ideas for interns to work on, either good things to get them started with the project...
E
...you know, introductory issues, or sizable projects, things that might take six or eight weeks to build out and have some meaningful impact, but that we maybe just don't have the ability to do ourselves right now. And I don't need, you know, answers or ideas right now; I just want to plant the seed. If anybody has any ideas, come and talk to me or reach out to me on Slack. I'd love to hear about them.
A
So what's a good timeline? Like, would by next week be a good time to get back to you, or do you need it sooner?
E
I would say by the end of the month, even; they're not scheduled to start until May. Okay.
A
Yeah, I'll think about it. I've got some ideas, but the proper scoping, as you know, is important, so yeah.
B
So, just to throw out an idea that I had for Lightstep, but I think everybody should be doing it: in Kubernetes there's a thing called a... what is it... it's not a metrics exporter, but the idea is that it will take Prometheus metrics that you tell it to and create Kubernetes metrics.
B
What I would suggest is we do something similar: in my case, for my company, Lightstep, but you would probably want to do it for the data warehousing software... I forget the AWS equivalent for tracing, for metrics... CloudWatch, CloudWatch. We've got a few, yeah. For one of them, so that you basically can have Kubernetes metrics to do, like, auto-scaling from an arbitrary query to CloudWatch.
E
We don't... I don't think we've got a guide that's basically a large list of: here's a bunch of things you should go read and explore, and come back with questions. But a code walkthrough is probably a good idea, and, you know, certainly for, like, the otel-go repo, that's the sort of thing we could probably do in the open and share as well.
A
That being said, Brad, that's a good ask. In fact, honestly, Anthony, that's a really good project for an intern right there. How...
A
Yeah, we used to always do that a lot in operations: the newest person would always go read the docs, and at the same time they were the ones updating the docs. So they would give a presentation at the end of it, and then they were the expert, because they literally just wrote down the most recent version. But I don't know if that's something that is suitable for them.
E
Potentially. We kind of run our internal introduction doc that way. A lot of the interns who are coming to us don't have any experience with Go, so a lot of that is also, you know: here's Effective Go, here are, you know, resources to get you started on how to program, here's how you program in Go. So it's not necessarily tied to the otel-go SDK or client libraries, but having something like that for otel-go would be useful, and that would be a good...
E
...you know, introductory kind of project. Usually, the way we structure the internships, we try to give them one or two early things to work on, just to get their feet wet in terms of process and how things work, and then give them a larger, meatier project that will take up the bulk of their internship.
A
Yeah, cool. I think that's a great idea, though, right? Yeah, we should do a better job on that. I think even a few slides would just be good, so yeah.
A
Okay, yeah, Anthony, I'll think a little bit more about some of those projects as well; I think we should sync. And anyone else on the call that has some cool ideas, be sure to pass them off. David, do you know if any Google interns are working on this this summer? I don't think they did anything last year, but I'm just wondering.
A
Cool. Well, we definitely didn't need the full 40 minutes for the metrics stuff, which is great; I'm glad we have a path forward on that. It looks like we've gone through the full agenda that's written down in the doc, so I'll just pause and open it up. Anybody else on the call have some good ideas or topics or questions they want to talk about that you didn't list on the agenda?
C
I'm still always looking for good first issues, if you have any. I looked, and I didn't see many labeled right now; maybe it's just the time in the release. But if there's someone I should ping, let me know and I'll bug them.
A
I know I got... Harrison, yeah, that's a good point. Yeah, I have another PR that is changing the use of label everywhere, which, again, is just housekeeping. Shoot, I've got a few more in my head, yeah. I've got to stop doing things and just write them out as issues. I think that's a good idea.
E
The semconv generation, I think you've updated make to make that a bit simpler; that could be a good place to start.
A
Yeah, that's actually another good one. We have a PR now for, I think, 1.8, but we're going to need a PR for 1.9 and 1.10 after that. So yeah, I guess there are two issues there, already assigned to me, but I could definitely sign Brad up at that point. It's just going to be running make commands, so it may sound daunting, because it's going to be 2,000 lines, but it really is simple. So yeah, that's actually...
A
Okay, I'll ping you in Slack; that sounds like something I'll do.
A
Yeah, I just forgot I'm jumping on a flight after this call, but I will at least ping you tomorrow. Yeah, no worries, no worries.
A
And we can start from there. Okay, and then you'll have 1.8 as a reference as well, because there is a manual update to tell everything to start using 1.8 instead of 1.7, which is what it currently exists using. But that's just a grep, really, not too much, or, if you're really good at sed... I'm not. I guess I'm good, but I'm just not confident.
A
Yeah, there's a little bit of a code walkthrough; we can definitely jump into this. So the semantic conventions in OpenTelemetry are a set of, you know, common attributes that are shared across all the implementations, and they're versioned, and so they version with the specification and we try to match that versioning, so a particular instrumentation type will use, you know, a version of the semantic conventions. It's done by generating code.
A
So this is just essentially taking our key-value type and then inputting all of the string fields or int fields or boolean fields to form, you know, canonicalized variables, or constants in some cases. And there are also some functions in there as well, and I think that's where the complication arises, because we wrote some HTTP-specific functions that will kind of group the semantic conventions that you need for particular operations. And so it amounts to just, you know, a few thousand lines of code that are all generated.
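As a small sketch of what the generated package looks like from the user's side, assuming the v1.7.0 version mentioned below:

```go
package main

import (
	"fmt"
	"net/http"

	semconv "go.opentelemetry.io/otel/semconv/v1.7.0"
)

func main() {
	// Each semantic-convention attribute is generated as a typed key.
	attr := semconv.HTTPMethodKey.String(http.MethodGet)
	fmt.Println(attr.Key, attr.Value.AsString())

	// The matching specification version travels with the package.
	fmt.Println(semconv.SchemaURL)
}
```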
A
The OpenTelemetry project has a standardized generation tool that's written in Python, not Go, for some reason, because, you know, Go is the coolest language out there; everyone knows that. But Anthony did a really good job on trying to wrap that template, and so we were able to use that tool to generate the things that come directly from the semantic conventions in the specification, but not the functional stuff.
A
So this new PR made it so that we have our own little generation tool for all those functions, and so it should be completely generated code, in theory. And so then, yeah, the other thing is, as we just said, the semantic conventions are versioned, so any place where we already depend on, you know, currently 1.7 of the semantic conventions, since the new version came out, we want to try to update to make sure that we're at the current version.
A
There are compatibility layers that that involves, like the schema, which is a little bit... that'd be the next topic that this deep dive would talk about. But yeah, maybe something to go take a look at afterwards: how we do compatibility with the schema. But that's kind of the main overview of the semconv package.
E
There's a PR in the collector for a schema translation processor as well. I haven't had a chance to review it yet, but I know Sean from Atlassian has been working on that, and I don't know if we still want to have something that we build into our SDK to make that an option, I think, for resource detectors.
E
It may still be something we want to provide, because resources will return an error if you have different versions of the schema and you try to merge them together. But at least that is making progress there as well, so if we want to pull that in, we've got something to look at as an example.
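A minimal sketch of the merge error being described, assuming two resources built against different schema versions:

```go
package main

import (
	"fmt"

	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/sdk/resource"
	semconv "go.opentelemetry.io/otel/semconv/v1.7.0"
)

func main() {
	a := resource.NewWithAttributes(semconv.SchemaURL,
		attribute.String("service.name", "demo"))
	b := resource.NewWithAttributes("https://opentelemetry.io/schemas/1.8.0",
		attribute.String("host.name", "example"))

	// Merging resources with conflicting, non-empty schema URLs returns an
	// error; the schema-translation work discussed here would make such
	// conflicts resolvable instead.
	if _, err := resource.Merge(a, b); err != nil {
		fmt.Println("merge failed:", err)
	}
}
```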
A
Cool, that's really exciting. I imagine it's complex.
A
Yeah, I think we should definitely look at that when that's merged. Anthony, if you have a link to the PR, I'd love to take a look; you could post it in Slack or in the docs or something like that, or on the agenda sheet.
A
Cool. I think at this point, anybody else have something you want to talk about? And maybe we'll open it up to user stories: any cool, interesting ways the project's been used in the past? Let's see, etcd is progressing; I don't know if, David, we have any update on that.
D
I don't think so. Things haven't been good, from what I've heard, in etcd land. So, I mean, they are using OpenTelemetry, a very ancient version. Or rather, the current stable version of etcd that's released, and that they tell people to use, has an ancient version of OpenTelemetry, and we've updated it to the current version at head. But because of staffing issues and just general problems that their group has right now, they haven't been able to do another release that they're willing to recommend people actually use.
C
Yeah, yeah. So they are actively getting new contributors from a variety of companies, so there should hopefully be about six or seven people coming on board and having to go through walkthroughs over there. So hopefully, you know, maybe that's a good first thing for those people to go work on, potentially, yeah.
E
And is it because they have a binary distribution, or executable distribution, as well as being in the SDK? Is that the problem, or do we have problems between a post-1.0 API and later versions that people want to use?
D
The versions, yeah; you can thank me for that. The versions that they don't tell people to use, that exist out there, that they're working on stabilizing and releasing, use post-1.0, and they're stabilizing.
A
So OpenTelemetry is not blocking them, to my knowledge. Okay, yeah, I saw there was that PR for backwards compatibility in the semconv package, but it seems that they've resolved that on their end by doing upgrades. So, okay, okay.
A
Cool, yeah. Well, keep us posted if there's anything, Brad or David, or anybody who hears anything, because I think having this in etcd and Kubernetes is really cool, let alone, I think, helpful for the project. So yeah.
A
Okay, yeah. I've been looking at the batch span processor in the extremely limited free time I have, and I think that there are some optimizations that could probably happen there. I'd be interested to see if they're running into a lot of dropped-span issues or anything like that, but it doesn't sound like that's the reason yet.
D
Yeah, at least at Google, I've gotten it demoed in front of a few people, but I'm working on building a version that our scalability team is going to use. So hopefully then we'll find out all the real problems, okay, with v0.20.0, and then in a year I'll tell you what's wrong with 1.0.
A
Okay, yeah, cool, that's exciting! It's unfortunate it's on old stuff, but it's still exciting. Yeah, yeah.
D
I suppose I should also mention that the kubelet tracing work slipped, so it'll be 1.25 of Kubernetes for that. Really? Okay, so it didn't make it? Yep, it was close; it was just a lot of small review stuff that didn't get done in time.
A
Cool. Anybody else have some cool use cases? Or we can probably end it here; I'm guessing we're coming up on 15 minutes before the hour.
A
Awesome. Well then, let's do that. Thanks, everyone, for joining. Obviously, if you have any more questions, please reach out in Slack or in PRs or issues, and if not, we'll see you all next week, same place, same time. Bye.