From YouTube: GMT 2018-09-04 API WG
A
Okay:
okay,
there
we
go
we're
now
recording
so
I
was
going
to
meet
with
a
couple
folks
that
I've
been
chatting
with
about
just
brainstorming.
Some
ideas
for
the
API
working
group,
backlog
I-
think
we
get
that's
also
something
we
can
do
in
this
meeting
is
come
up
with
items
for
the
backlog,
so
I'm
going
to
show
a
few
slides
about
metrics.
A: So I'll go ahead and just go through these slides. Can you all see that okay? Yep? Okay, great. And feel free, of course, to interrupt me as we go, and when we're done with the slides we can discuss as long as we like about the current and future metrics in Mesos. So I just wanted to review the current interface real quickly.
A: I apologize if this is all just known information for all of you, but this will just make sure we're on the same page to start. So the master and the agent expose metrics via the operator API. We have these older unversioned endpoints, like the metrics snapshot endpoint. That's really not part of any API so to speak, but that route is added by default by the metrics actor that we have in libprocess, and then, in the v1 API...
A: Okay, maybe... I'm not sure how to... Oh, looks like I've closed it now. Okay, one second, sorry.
A: Our current convention is to begin the metric key with the actor the metric is coming from, so here we have "allocator/". Each metric key has a double associated with it, and the type of the metric isn't encoded in the output; an operator needs to simply look at our metrics documentation in order to figure out what type of metric they're looking at. So, for example, we have the event queue dispatches; this is a counter. Then we have allocation run latency: we have the current value of the timer, and then some statistics calculated from that time series (the count of values, the max and min value), and then a variety of hard-coded percentiles that we provide. I've shown just the p50 and p90 here; the others would follow. And then we also have gauges; here's, for example, the current offer filters for a particular role.
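To make the flat model concrete, a /metrics/snapshot response looks roughly like the following sketch (key names abbreviated from the slides; exact names may differ):

    {
      "allocator/mesos/event_queue_dispatches": 12.0,
      "allocator/mesos/allocation_run_ms": 3.3,
      "allocator/mesos/allocation_run_ms/count": 411.0,
      "allocator/mesos/allocation_run_ms/max": 19.6,
      "allocator/mesos/allocation_run_ms/min": 0.01,
      "allocator/mesos/allocation_run_ms/p50": 2.2,
      "allocator/mesos/allocation_run_ms/p90": 11.5,
      "allocator/mesos/offer_filters/roles/dev/active": 4.0
    }

Every value is a double, and the statistic (count, max, p90) is distinguishable only by the key suffix.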
A: So when a framework registers, when it subscribes, we will add particular metrics for that framework, some based on the framework ID and some based on the principal that the framework has subscribed with. We also have per-role metrics, so when frameworks subscribe in a role we'll expose things like we saw on the previous slide, like the number of active offer filters for a particular role. So this is not ideal: it means that the set of metric keys, or metric names, offered by the master will change over time.
A: It's not fixed, which can make writing tooling to consume those metrics a little more difficult. We also have the issue that our data model isn't extensible. Currently we just expose a primitive type, just a double, so we don't have any way to add additional information we may want to add going forward. So, for example, for the per-framework metrics, rather than adding new keys we could do something like tag a fixed metric key with the framework ID and the framework name. This is more along the lines of what I've seen in other projects.
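As a hypothetical sketch of that tagging idea (the metric name here is illustrative, not an existing Mesos key), a single fixed key would carry the framework identity as labels rather than being minted per framework:

    master/frameworks/tasks_running{framework_id="a7f1...", framework_name="marathon"} 42.0

The set of key names then stays fixed even as frameworks come and go.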
C: I would say that these are more like modern metric system formats, just to be a little pedantic about it. Yeah, we didn't choose some arbitrary thing just for Mesos; originally we chose it because it happened to be, at the time, what everything else used at Twitter, and I believe that was inspired by what everything was using at Google, which was just a flat key-value structure. I think there might be systems that still use that, but yes, some of these modern systems don't.
E: I think, although there is no standard, given the place Mesos is embedded in the extended cloud-related ecosystem, Prometheus is definitely quite popular in the projects closely related to Mesos. So although this is not a standard, I feel like it's one of the things that people who interact with Mesos will often have been exposed to previously, yeah.
D: Well, right now, I guess the mesos_exporter, for example, takes the flat key space and turns it into tags. So one way to support... I mean, any system that needs to export the flat namespace has to do that inversion.
C: The bucket one, you can imagine, is pretty trivial, right, but for the quantile one I've seen some pretty interesting libraries out there; some of them look like we could probably use them. I just wonder, like, if I look at these metrics, I can't tell if there's a window being used, or if it's some kind of decaying thing, like exponential decay of old data or something. I can't quite tell.
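For context on that bucket/quantile distinction, an illustrative Prometheus exposition (not Mesos output) looks like:

    # Histogram: cumulative buckets; cheap to compute and
    # aggregatable across instances.
    request_latency_ms_bucket{le="1"} 320
    request_latency_ms_bucket{le="10"} 400
    request_latency_ms_bucket{le="+Inf"} 411

    # Summary: precomputed quantiles, typically over a sliding
    # time window or a decaying sample reservoir.
    request_latency_ms{quantile="0.5"} 2.2
    request_latency_ms{quantile="0.9"} 11.5

Whether the quantiles reflect a fixed window or exponential decay depends on the client library's reservoir implementation.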
A: There's a full page describing best practices for histograms and summaries, and then some links to libraries and everything: Go, Java, Python, Ruby. So maybe we should take a look at that afterwards; like, I think it makes sense during the meeting, and I can add the link to your document, if you want. Yeah.
A: If we do decide that we like the Prometheus format, or the Prometheus data model, for metrics: when it comes to actually generating a response, let's say we have a new GET_METRICS call and an operator hits it, do you think operators want to get this precise textual format in return from that endpoint or from that call, or a JSON or protobuf representation that maps easily onto this data model?
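To make that choice concrete, here is the same counter in the two candidate shapes (both sketches, not existing Mesos output):

    # Prometheus text exposition
    allocator_event_queue_dispatches 12

    // JSON that maps onto the same data model
    {"name": "allocator/event_queue_dispatches", "type": "COUNTER", "value": 12.0}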
B: Agreed there, and I've been updating, with Philip, the mesos_exporter to keep it in sync with the per-framework metrics, and it's a pretty painful process, so if we could avoid that, it would be great. Right now there are a ton of regular expressions that are used to try to deduce, from the key name (that flat string), what metric it refers to, and then to extract all the various bits, and it's not easy to keep that in sync with what is supported by Mesos. When new metrics are added, that has to be updated, and then releases have to be coordinated, and I'm guessing deploying both things at the same time is not easy either. So if you deploy a new Mesos version, you might be missing some metrics until you deploy a new version of the exporter.
C: There might be one for that which knows how to write it in this format, and the assumption is that our model is so close to this that we don't have to go and, like, look at metrics and figure out tags and rewrite the keys and so on, which is what we're currently doing in these third-party metrics conversion things.
C: As I said, I think it would be good to look at some of the libraries. If you look at how the Codahale metrics stuff is done (I think it's called Dropwizard Metrics now), it's a similar story there: there's a variety of exporters, and they're all decoupled from the data structures and so on that you can use.
F: But if ultimately Prometheus is the thing a lot of people want, you're going to have to make sure that whatever data model we write is compatible with it. So in terms of how you write tests, and how you implement the native model while wanting to make sure things work, you'd still, I guess, logically decouple the two, even though you should make an effort to keep them close in terms of interface and implementation, yeah.
F
If
it
turns
out
the
converter,
is
so
simple:
that's
it
just
requires
no
enhancement
per
each
metric,
then
I
guess
arguably,
is
also
one
you
write
at
once
and
you
don't
have
to
change
it
in
missiles
as
well.
So
the
effort
to
to
maintain
AI
is
also
not
a
lot,
and
you
seen
I'm
saying
right.
So
if,
if
it
does
require
like
changing
a
client-side
for
each
new
metric,
then
it
will
be
a
pain
for
either
the
external
client
or
the
native
representation.
C: Yeah, I think the question here is... it's probably the case that we'll come up with a model that has more concepts, like tags and labels and so on, but we're then going to have to do things to make sure that the flat key structure that some existing users expect continues to work. Whether we continue to make new metrics work in that model or not is another open question. Would we tell those users: okay, here's the new model; we make all the old metrics work, but not any new metrics going forward?
A: Yeah, I can imagine having, like, a metrics exporter or metrics converter module interface in Mesos. So we could have our internal representation, that we think is generally useful and extensible, and then convert that to different formats. So we could, yeah, we could have an exporter that would generate Prometheus, and maybe (I think this is what you're alluding to) then have something like an exporter that converts to the existing flat structure, yeah.
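A minimal C++ sketch of the kind of converter module interface being discussed; this is hypothetical (no such interface exists in Mesos), and all names here are illustrative:

    // Hypothetical converter interface; not an existing Mesos API.
    #include <map>
    #include <string>
    #include <vector>

    // A richer internal metric: a value plus string-keyed labels,
    // e.g. {{"framework_id", "..."}, {"role", "dev"}}.
    struct Metric
    {
      std::string name;
      double value;
      std::map<std::string, std::string> labels;
    };

    // One module per output format: Prometheus text exposition,
    // the legacy flat /metrics/snapshot JSON, and so on.
    class MetricsExporter
    {
    public:
      virtual ~MetricsExporter() {}

      // MIME type of the rendered output, e.g. "text/plain".
      virtual std::string contentType() const = 0;

      // Render the full snapshot into this exporter's wire format.
      virtual std::string render(const std::vector<Metric>& metrics) const = 0;
    };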
D: I think that's straightforward, right? You just have different endpoints which serve your different formats, and we've had proof-of-concept code for that before. The benefit here, when we're talking about the internal data model, is that if you have a richer internal data model, then you can present a richer Prometheus exposition. Right now there's a bunch of things you can't do with the Prometheus one.
D: I'm not sure you need to hand-write it, but maybe... I think you can imagine other, different kinds of exposition for these metrics. For example, for the per-framework metrics you could have RESTful paths, where you query /metrics/framework/role/whatever, so you can sample... you can imagine a set of endpoints where you can sample subsets of the metrics, and that can all be backed by the same internal metrics store, right? Yeah, interesting.
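Sketching that idea, hypothetical subset-sampling routes might look like the following (illustrative only; no such routes exist in Mesos):

    /metrics                              all metrics
    /metrics/frameworks/<framework_id>    one framework's metrics
    /metrics/roles/<role>                 one role's metrics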
D: I think that having labels as dimensions, basically textual names for dimensions, I think plumbing that through the API is relatively straightforward. Adding support for the histogram one is probably a little bit trickier, getting the concepts right.
C: Yes, yeah, in order to convert back to what it used to be. Yeah, I mean, that's how I was imagining we would do this change: if we change the model, we wouldn't go and change all the existing metrics. We'd audit them all, figure out what we think makes sense, and then have a converter that knows how to get exactly back to what it used to be, for the flat structure, yeah.
C: I guess there's the question: will we continue to move forward with the flat namespace, putting new things in it, or not? Or will we tell users that we're not going to add all the new metrics to the flat structure? I don't know; I'm just curious what users would say. I know that Twitter still uses the flat stuff, so maybe...
B
Missus
should
allow
like
there
should
be
a
module
interface
or
something
like
that,
and
the
matrix
should
be
exported
by
missiles
in
the
format
that
the
operator
wants
to
consume.
Then
export
export
them
in
a
generic
rich
format,
and
then
it
would
be
easy
to
write
converters
and
misses
when
she
wrote
any
of
them.
I
see
those
two
possibilities:
I
don't
know
what
the
consensus
is.
Yeah.
D: I think, at least for me, one of the things is being able to explicitly say the dimensions using tags. The big benefit of that is that it makes it way easier to write a Prometheus exporter. The mesos_exporter currently does a bunch of shenanigans around, you know, regexing on the line, and with the current internal data model you basically have to do something very similar, or give up on exporting those dimensions to Prometheus. So I like tags, for me, for having good fidelity with Prometheus.
A: So that's something that I could work on in preparation for another API working group meeting in the future. I could try to put together a more formalized list of requirements based on our discussions today, and maybe some more detailed thoughts about how we might satisfy those, and what the real benefits actually are of making any changes.
D: I have one thought: with the per-framework metrics, at least on our side, we're going to have to do something about them, where "something" is pretty undefined, just because, for us, the number of metrics you'd accumulate per framework is going to be untenable; we're going to have like 200 or 250 thousand metrics. So we're either going to carry some kind of local patch or, hopefully, figure out a more coherent way to get the information from them without, you know, having to collect everything all the time.
D: It's the quantity of the metrics: you know, pulling 60 megabytes of JSON or protobuf every 10 seconds, even if you're doing it really, really efficiently, is still more work than you want to do for a monitoring system. So we'd probably need to really think about whether metrics is the right way to expose this very granular information. I think I'd probably also think about it differently if the only thing I ran was...
D: That's a little bit of a wonky question. We do have a thing that collects per-container metrics that I'm not that familiar with, but I was actually asked to turn off... we actually filter out the stuff on the agent which collects the per-container metrics from the resource exporter.
D
C
D
Source
is
the
master
I
mean
if
metrics
in.
If
this
metric
sampling
is
proportionately
more
costly
on
the
agent,
then
I
have
less
concern,
because
you
know
the
agent
is
like
the
agent
matters
for
task
launch,
but
most
of
the
time
is
just
sitting
there
right,
whereas
the
mouse
was
always
doing
something.
D: Yeah, I mean, the node exporter has a concept of collectors, where you can configure the node exporter to say: I want to collect this class of data and not some other class of data, because it's not interesting to me and I don't want to pay the collection overhead. The thing with the per-framework metrics, though, is that I think they're going to be really useful, so if you just say "don't collect them", then at the point when you need them, you don't have them.
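For reference, the Prometheus node_exporter toggles collectors with per-collector flags, roughly like the following (flag names vary across node_exporter versions, so treat this as a sketch):

    # Enable the systemd collector, disable the wifi collector.
    node_exporter --collector.systemd --no-collector.wifi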
A: Okay, well, we're out of time, and I don't want to keep people too long. I really appreciate everybody showing up today. Thanks for the discussion; it was really helpful for me in thinking about metrics and how they'll evolve. I'll continue some work based on our discussion today and let you guys know; you know, sometime in a month or more I can come back to the group and present some further findings.