From YouTube: Kubernetes SIG Instrumentation 20200109
Meeting notes: https://docs.google.com/document/d/17emKiwJeqfrCsv0NZ2FtyDbenXGtTNCsDEiLbPa7x7Y/edit#heading=h.qpfxt91hdl2x
B
Cool, so first of all, we just want to thank everyone who took a look at the tracing proposal, and I'm glad we were able to merge it as provisional; I really appreciate that. But the main thing I wanted to talk about is a follow-up from KubeCon: we have a couple of different proposals, the tracing, the structured logging, and the request ID proposal, and we wanted to explore the idea of having them share mechanisms for how they attach labels, or metadata in general, to the telemetry.
B
So we want to see if we can make sure that traces and logs have consistent ways of matching them to containers or pods or whatnot. I investigated this about a month ago, and basically there's a thing in OpenTelemetry called distributed context, which you can think of as being similar to span context, except for a different purpose. It's just a map that contains string key-values, and just like span context, it's passed around in a context.Context and can be propagated across component boundaries.
B
So, just like we're doing for tracing, we could send these key-value pairs around, stored in HTTP headers when we're passing them that way, or in the context.Context in process. This is how OpenTelemetry does trace-to-metric correlation: you can set a tag in one process, propagate everything to the second process via a call, and then that second process can produce metrics or traces with the same set of metadata. So one thing we could potentially do is use this both for structured logging and for tracing.
B
The
API
servers
telemetry
would
all
have
that,
but
then
also
controllers
that
receive
this
down.
The
line
are
able
to
without
embedding
any
knowledge
of
how
to
say,
attach
metadata
for
pod.
They
can
just
take
the
they're
able,
just
by
using
the
context,
context
that
we
have
from
our
trace
and
passing
that
to
whatever
logging
function
we
use
they
can.
Then
we
can
then
retrieve
that
metadata
in
the
implementation
of
logging
and
attach
that
as
a
structured
log.
B
So
this
is
a
way
sort
of
to
get
everything
in
one
and
to
have
them
all
share
the
same
sort
of
place
in
the
context
context
for
keeping
our
key
value
map
for
labels
right,
and
it's
also
somewhat
nice
that
it
works
across
other
boundaries
as
well.
So
even
things
like
the
container
runtime
or
at
CD
could
their
logs
and
traces
can
have.
B
I actually did take a look at the request ID proposal this morning, because it looks like they pushed some updates, and it actually proposes using the same, I want to say, mechanism as the tracing proposal. In other words, when a request comes in, they want to add an annotation, and then grab the annotation out whenever you act on an object, to attach the request ID.
B
It's a weird thing to think about, because context.Context in Go is itself a key-value map, right? So distributed context is one of the values in it, and is itself a string-to-string map. The keys in a context.Context are not generally usable directly, because you're supposed to make a private struct type and use that as the key, so no one else can steal it. So we don't really want to just use the map that's in context.Context right away.
A
That makes sense, I just wanted to make sure. That's how I understood it, but I just wanted to double check. So if that's the case, I don't really see the dependency between the structured logging and tracing KEPs. I feel like it's an implementation detail where the things are stored, but the way I see it, the KEPs could be implemented independently and then unified at a later point.
D
Yes, this is exactly what I wanted to say. Within the structured logging KEP, we only defined that we want to have some kind of Kubernetes-specific package which provides an API that can be used to attach some information to that context. We don't define how it's implemented, and we can use the OpenTelemetry distributed context as an implementation detail.
A
Perfect, yeah, that's exactly the impression that I had as well, because with this package we're essentially encapsulating this, and whether we use distributed context inside is really an implementation detail. So yeah, I think we're all saying the same thing and are agreeing. One thing for both KEPs to go into implementable, I believe, is just a sign-off from SIG Architecture, essentially. I think the requirements are different for structured logging and tracing, because tracing is essentially about the introduction of potentially new components or a large dependency, whereas structured logging is more like a general overview. Does that make sense? Yeah.
G
I updated the request ID KEP. The basic policies for the request ID are to minimize the impact on existing features, keep the implementation simple, and collaborate and interact with the related KEPs. This means, first, I would like to ensure consistent metadata, which may need to be considered and managed across the KEPs, and second, an implementation which does not interfere with the related KEPs' implementations. In the future, I would like consistent logging across features, and first to export logs with the request ID from each Kubernetes component, so that we can relate logs to their related objects.
G
This is the high-level design overview of exporting logs with a request ID, and I consider collaboration with the related KEPs. You know, there is an existing KEP, the structured logging feature, that is related to log exporting, and the main concept of that KEP is a structured log format. The structured format is attractive to me, but the replacement is very tough work and may require many migration steps. In this situation, it is expected to take a long time to migrate completely, so in the meantime we can study it.
A
My understanding is that the intent is essentially for this to be done on a more or less line-by-line basis, in which case I'm not too concerned about the breaking of log lines. But what I am a bit concerned about is that it does somewhat interfere with structured logging, because it makes the migration harder.
A
Yeah, I tend to agree. Context plumbing isn't there everywhere anyway; it needs to be done, and then we need to figure out the way to add to and read from the context. So I feel like we would actually end up with a relatively similar timeline to structured logging, except for the migration part. But I'm not sure I'm seeing enough of a benefit in the comparison to think doing this right now is worth it.
E
The problem is that, for monitoring, we maintain metrics-server, which exposes a metrics API that provides one memory value. Historically, this memory value is one of four types of memory that get exposed. At some point there was a decision in metrics-server to expose, I think it was, working set, but the decision itself was not recorded anywhere aside from the API, which, with metrics-server's history, led to changes.
B
But the decision to use working set in the kubelet, I can talk about a little bit too. This dates back to very, very early Kubernetes, and it's actually stolen from Borg: we use a similar metric to what Borg has used for a long time. So some of this is just practical experience.
B
Some of this also makes sense in theory, at least. The working set is the total memory usage, and from that we subtract the pages on the inactive file list. To summarize, you can basically think of the kernel as keeping sets of pages: pages it thinks are being used that are backed by files, and pages that it thinks are not being used, and it sort of keeps balancing these lists to try to keep the hotter ones on one side and the colder ones on the other.
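The calculation just described can be written down directly. This is a sketch of the formula only (the function name is ours, not the kubelet's): cgroup memory usage minus the inactive file pages, clamped at zero, which is what the kubelet and cAdvisor expose as `container_memory_working_set_bytes`.

```go
package main

import "fmt"

// workingSet computes working-set bytes from two cgroup counters:
// total memory usage, and the bytes of file-backed pages sitting on
// the kernel's inactive list. Clamped at zero in case the counters
// are sampled at slightly different moments.
func workingSet(usageBytes, inactiveFileBytes uint64) uint64 {
	if inactiveFileBytes > usageBytes {
		return 0
	}
	return usageBytes - inactiveFileBytes
}

func main() {
	// 512 MiB of total usage with 128 MiB of cold page cache: the
	// kernel could reclaim those pages, so only 384 MiB count as
	// working set.
	fmt.Println(workingSet(512<<20, 128<<20) >> 20) // 384
}
```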
E
This is based on an incorrect assumption; that's what David discussed. There were some incorrect assumptions when the value was switched. The VPA people's goal is to estimate memory, to give a pod the minimal memory with which it doesn't get out-of-memory killed or evicted, which is what David explained.
E
This is our best estimation, so I think what we are mainly missing is correct documentation of our design, and this should also simplify things, because there are lots of people asking why metrics-server gives us this memory value and Prometheus gives another value. This would give us a full explanation of why we use this type in this place.
A
I was going to ask the exact same thing. Yes, okay. We're actually already one minute over; I apologize to anyone who has to go somewhere else, but we are also out of agenda items, so it was a very, very productive meeting. Thanks everyone, I hope everyone had a wonderful start into the new year, and have a wonderful local time.