From YouTube: Kubernetes SIG Instrumentation 20190502
Meeting notes: https://docs.google.com/document/d/17emKiwJeqfrCsv0NZ2FtyDbenXGtTNCsDEiLbPa7x7Y/edit#heading=h.w0zeakm4brmh
A: Okay, I'll kick off the recording and let's get started. Hello everybody, welcome to the SIG Instrumentation meeting. Today is May the 2nd, 2019, and we have two agenda topics today. Our first topic is going to be our continued discussion around OpenCensus and OpenTracing, continued from last time, and then we're going to be talking about structured logging. I believe we had someone from the OpenCensus team and the OpenTracing team on last time; is that person with us today again?
B: ...into the CNCF at KubeCon Europe later this month, and there's a public roadmap that's up now that talks about when we're planning to, or at least when we're going to try to, have versions of the API ready for each language. I don't have a ton of insight into the Go one, but there is a Java version of the API that is available that people can look at.
B: However, a lot of this is still, you know, it's not done until it's done, I would say. Maybe if there's a strong interest in doing something in the short term, you know, in the next couple of months, it might be good to start with OpenCensus, because there will be a migration path from OpenCensus to this new project, and we're planning to support the old one for, I believe, two years, fully supported, and to ensure backwards compatibility from the new stuff to the old stuff.
A: I'm hopeful. Sorry, I think last time we talked about this, we had a general consensus that we want tracing and that the new project is probably what we want to go with. I think last time we were still unsure about the setup burden that the agents bring with themselves. Has anybody...
A: ...thought about this a bit more? Like, I think we threw into the room that we could potentially put this into the kubelet binary itself or something, but that would again make the binary size explode. So I haven't really been able to think of something that I'm super happy with. As I said, has anybody else maybe put some thought into this?
B: They've got that out there now. I think it's alpha or beta, it's not production-ready or whatever, but it works. The basic idea is you have an agent process running in a sidecar, and then you have a service that you deploy. You point all of your agents at that service, running wherever else, and then that is actually what handles batching and exporting spans to Jaeger or Zipkin or Stackdriver or what have you.
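The agent-plus-collector layout described here can be sketched in Go. This is an illustrative toy, not the actual OpenCensus agent or collector API; `Collector`, `Exporter`, `printExporter`, and the batch size are all invented for the sketch:

```go
package main

import "fmt"

// Exporter is a pluggable backend; the collector service is the one
// place where Jaeger, Zipkin, Stackdriver, etc. would get configured.
type Exporter interface {
	Export(batch []string)
}

type printExporter struct{ backend string }

func (p printExporter) Export(batch []string) {
	fmt.Printf("%s <- %d spans\n", p.backend, len(batch))
}

// Collector accumulates spans sent by many agents and hands each
// full batch to every configured exporter.
type Collector struct {
	exporters []Exporter
	batchSize int
	pending   []string
}

func (c *Collector) Receive(span string) {
	c.pending = append(c.pending, span)
	if len(c.pending) >= c.batchSize {
		for _, e := range c.exporters {
			e.Export(c.pending)
		}
		c.pending = nil
	}
}

func main() {
	c := &Collector{exporters: []Exporter{printExporter{backend: "jaeger"}}, batchSize: 2}
	c.Receive("span-1")
	c.Receive("span-2") // batch is full here, so it is flushed to the exporters
}
```

The point of the shape is that agents stay dumb forwarders while exporter configuration lives in exactly one deployment.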
B: That would probably be, I think, the way that you would want to do it, because then you would certainly avoid configuration problems. People wouldn't have to add anything; people would be able to have a single place where they can configure their exporters to actually, you know, send the telemetry off to wherever.
A: This is kind of the thing that we were talking about, that it has additional setup burden, right? So maybe someone who is more familiar with the topic can do like a reference architecture or something, to see what additional setup this would bring, because if it's truly a sidecar next to every component, I would anticipate that only very few users would actually set that up.
A: That doesn't seem like a huge setup burden, but if it had to be the sidecar type of approach: there are many Kubernetes setup mechanisms that don't run things as pods, and in that kind of situation I would find it a pretty huge pain to have to set up a sidecar next to every API server instance, next to every controller manager, and so on. So what kind of setups are you discussing here? Like, people setting up their own Kubernetes clusters?
A: Yeah, I mean, there are cluster install methods that don't use pods or static pods to run control plane components, right? They're pretty popular; Kubespray does that. So I think we should be careful in introducing something like that. But if the DaemonSet approach works, then I think I'd be pretty happy with that. I mean, this doesn't necessarily work for...
C: If you want this nice thing like logging, and you want to be able to enable this feature... for some people it's not worth the overhead to have a tracing backend, like single clusters or whatever. In most of the cases, unless you are doing some serious cluster administration, you're probably not going to care about it. So I mean, we need to cover the 80% case, as opposed to worrying about the prettiest cases. Yeah.
A: I think eventually, maybe, but primarily our interest right now is adding tracing to Kubernetes, right? And so that's probably what we should focus on. I think for that it's probably a reasonable trade-off to say it's a little bit more work to set up tracing, and here are some examples of what you could do. I think that'll be okay. I would be interested to hear whether the DaemonSet approach, or what kind of architectures, would be reasonable for this.
A: Yeah, I do recall we were reviewing some OpenCensus document, it must have been half a year ago or something, where some of their feedback was that a lot of people (but I think this was people who used Kubernetes, not necessarily people who administer Kubernetes) were in a lot of multi-tenant environments and stuff like that, where people are not even allowed to create DaemonSets.
A: So in those kinds of cases... I mean, first of all, let's find that document and validate whether this actually has anything to do with the control plane components, or whether it was just, as you called them, end users, yeah. So if you two maybe could take that up and figure out what some reasonable architectures could be? Yeah.
B: Also, just to point out, there's no reason you couldn't just have a single agent running across all of your nodes. You simply have to deal with the overhead of serializing and sending the spans to that single node, right, to that single service. The reason that you would want one per node is simply to avoid the additional overhead, and potentially backing up and causing back pressure if, you know, there's any kind of interrupted connectivity or whatnot.
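The trade-off being described, that a local agent exists mainly to absorb interrupted connectivity without blocking the instrumented component, can be illustrated with a bounded buffer. This is a minimal sketch with invented names (`Agent`, `Record`, `Span`), not any real client library:

```go
package main

import "fmt"

// Span is a minimal stand-in for a trace span; real clients have
// much richer types.
type Span struct{ Name string }

// Agent buffers spans bound for a remote collector. The buffer is
// bounded, so a slow or unreachable collector causes spans to be
// dropped instead of putting back pressure on the component.
type Agent struct {
	buf chan Span
}

func NewAgent(capacity int) *Agent {
	return &Agent{buf: make(chan Span, capacity)}
}

// Record enqueues a span without ever blocking, dropping it if the
// buffer is full. It reports whether the span was accepted.
func (a *Agent) Record(s Span) bool {
	select {
	case a.buf <- s:
		return true
	default: // collector backed up: drop rather than block the caller
		return false
	}
}

func main() {
	a := NewAgent(2)
	fmt.Println(a.Record(Span{"list-pods"}))   // true
	fmt.Println(a.Record(Span{"watch-nodes"})) // true
	fmt.Println(a.Record(Span{"get-secret"}))  // false: buffer full, span dropped
}
```

With one agent per node the buffer only has to absorb local bursts; a single cluster-wide service would make every component share one such bottleneck.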
A: I don't know, does anybody else have any other comments? I think we have some points to investigate on that, and then maybe in a couple of weeks, maybe after KubeCon, we can look back at this. If I remember correctly from the timeline, I think the Python client is the one that is being released first, right, and then the Java one? Java, was it, Joe? Okay, Java, yeah.
A: Yeah, I feel like after KubeCon we'll probably have a better picture, and maybe by that point we'll have some experience reports as to what kind of architectures could make sense. I mean, it seems like it should be completely up to users to choose the architecture, but it feels like we should at least have some idea of what kind of complexity we're introducing.
D: The motivation behind opening this issue originally was that, other than the Kubernetes audit logs, there are basically no components at all on the Kubernetes control plane that offer structured logs as an option, which means it's really, really challenging to aggregate, index, and basically do anything with logs at any large scale. I imagine that this hasn't come up more because I think folks are using smaller clusters, and a lot of the cloud providers have some level of cloud aggregation of logs and that kind of thing.
D: But when I opened this issue, I was working with, like, 400 on-premise nodes, needing to manage these logs in some manner that is not magically taken care of by a cloud. A lot of stuff was sort of overwhelming, like, our ELK stack, and the fact that we couldn't have these things in a structured form, to make indexing and processing much easier, made it very difficult to operate and to debug certain cluster issues. Yeah.
C: It actually wouldn't support all of the functionality of klog; that's one of the problems right now, because it would not support numeric verbosity levels. It's supposed to be more generic, and so there is some complexity in possible migration paths, and I think there might even be an issue or PR; I don't have the link for it, but I was looking at it a week or two ago.
C: Someone was bringing up a way... someone has a PR, I think, to migrate from klog to this other thing. It was split up a while ago, but I can't recall what it was. But yeah, I think we should probably do something about it, and probably come up with some sort of plan to migrate to structured logging. I would love structured logs.
A: Deprecation... actually, when I wrote my comment I hadn't thought about the logger approach, so I think that's worth investigating more, especially because Tim has already put quite a lot of thought into that. Maybe it's worth having him on one of our meetings when we want to discuss it in more detail.
C: It would be super invasive across the entire Kubernetes code base. It would be much simpler if we could do it on a per-binary basis, but that currently isn't really possible without some sort of abstraction layer present, and I think that's the reason why the basic idea that's been floating around is to put a logger in front of the klog interface, and then, on a per-binary basis, switch out the implementation.
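A minimal sketch of that idea, assuming nothing about the real klog or logr APIs (the `Logger` interface and both implementations here are invented for illustration): each binary holds a `Logger` value, and swapping the implementation changes the output format without touching call sites.

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Logger is a hypothetical abstraction placed in front of klog-style
// calls; each binary could pick its own implementation.
type Logger interface {
	Info(msg string, keysAndValues ...interface{})
}

// textLogger mimics today's unstructured, space-separated output.
type textLogger struct{}

func (textLogger) Info(msg string, kv ...interface{}) {
	fmt.Fprintln(os.Stdout, append([]interface{}{msg}, kv...)...)
}

// jsonEntry renders a message plus key/value pairs as one JSON object,
// which is what makes aggregation and indexing (e.g. in an ELK stack)
// straightforward.
func jsonEntry(msg string, kv ...interface{}) string {
	entry := map[string]interface{}{"msg": msg}
	for i := 0; i+1 < len(kv); i += 2 {
		entry[fmt.Sprint(kv[i])] = kv[i+1]
	}
	b, _ := json.Marshal(entry)
	return string(b)
}

type jsonLogger struct{}

func (jsonLogger) Info(msg string, kv ...interface{}) {
	fmt.Println(jsonEntry(msg, kv...))
}

func main() {
	var log Logger = jsonLogger{} // swapped per binary, e.g. behind a flag
	log.Info("pod deleted", "pod", "kube-dns-abc", "node", "node-1")
	// prints {"msg":"pod deleted","node":"node-1","pod":"kube-dns-abc"}
}
```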
A: That's actually the kind of structured information that we would like to extract, and I think Holly actually did some experiments last year; for like 85% of all log calls, we could probably automatically rewrite them to have structured information instead. So I think this still needs some more thought, but I think everybody agrees we want structured logging, yeah.
E: Just to add to the discussion: one and a half years ago, after the conference in Paris, there was the Go Commons project created by Brian Catalan, and they have an issue on a standard logger interface, with 50 comments from people all across the Go community, without having seen Tim's proposal for a logger interface. Wherever we track this, I will add those issues.
C: Last time... can you just back up? Yeah, sure, sorry. So there is a KEP for control plane metrics stability. We're implementing a framework which will allow us to define stability levels for metrics, so that we will not break people moving forward. This is the first part of a three-phase plan: the first phase is implementing the framework, the second phase is implementing the static analysis and validation piece, and the third part is the actual migration of the code in the binaries.
C: I'm almost complete on the first phase; there are just a couple of minor things that Stephen brought up, and I will adjust those today, so hopefully we can get it merged soon. I've also been talking to Peter, he's on the call, who I think is interested in collaborating on the static analysis piece. I've also talked to Daniel from eBay, who is also interested in collaborating on the static analysis piece. So we can probably get that knocked out hopefully pretty quickly.
C: I have a local prototype, so I'll share it with you guys whenever you're interested, and hopefully we can just get it knocked out, I'm hoping ambitiously by Barcelona. That way we can get it merged and then move forward with it, and then we can do the migration before 1.16, because that would be great, and then we would have, like, metric guarantees. Yeah.
A: That sounds fantastic. If we could make that happen... I think it's long overdue that we have something like that. Yeah, and then, I mean, the hard part is actually going to be making sure that we get some metrics that are stable and that we're confident about, right? Like, even the super common metrics we've broken several times in the past. So it's great to have this in place, but then actually graduating the metrics is going to be another tough one.
C: There are probably one or two on a per-binary basis that will be easy to graduate, but the mentality will have to shift, right? People can't just start adding labels to stable metrics anymore; if they have a feature, they will have to create a separate feature-level metric, which actually makes more sense, because then it's isolated and they can remove it, whatever, when they're done with their feature thing. And also, now we will have a gate for metrics changes.
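The guarantee being described, that stable metrics reject new labels so feature work has to go into separate feature-level metrics, could look roughly like this. It is a sketch of the idea only; the names and types are invented and do not reflect the actual stability framework:

```go
package main

import "fmt"

// StabilityLevel marks how strong the compatibility guarantee on a
// metric is; only Alpha metrics may be changed freely.
type StabilityLevel int

const (
	Alpha StabilityLevel = iota
	Stable
)

type Metric struct {
	Name      string
	Stability StabilityLevel
	labels    []string
}

// AddLabel refuses to mutate the label set of a stable metric, which
// is exactly the kind of gate on metrics changes discussed above.
func (m *Metric) AddLabel(l string) error {
	if m.Stability == Stable {
		return fmt.Errorf("metric %q is stable; add a separate feature-level metric instead", m.Name)
	}
	m.labels = append(m.labels, l)
	return nil
}

func main() {
	stable := Metric{Name: "apiserver_request_total", Stability: Stable}
	if err := stable.AddLabel("my_feature"); err != nil {
		fmt.Println(err) // the framework rejects the change at registration time
	}

	// Feature work goes into its own alpha metric, isolated and removable.
	feature := Metric{Name: "my_feature_events_total", Stability: Alpha}
	_ = feature.AddLabel("reason")
}
```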