From YouTube: gRPC Community Meeting Jan 18, 2018
A: Okay, somebody give us a thumbs up that you can hear it. Yeah? Okay, awesome! Thank you so much, thanks for joining us today, and apologies for starting a little late; the phone apparently didn't like the number six, so that was fun. I am excited today, because we have a really great presentation about OpenCensus. OpenCensus has definitely been the thing people asked about the most; whenever we did our community meeting, people would always ask questions about it, so I'm excited that we've got somebody here to give a demo and an update on using it with gRPC. Before we jump into that, does anybody have any announcements or anything they would like to share with the community, say hi, or anything like that?
C: Sure, I can take over, all right, thanks. So I'm JBD, Jaana Dogan; I work on the Census team. I actually want to give a little bit of a demo, but I will first begin with some of the concepts we have, so it's going to be more of a presentation. Let me share my screen; I hope that you can see it. Can you see the screen right now? Okay, yeah. Let me introduce myself first: I work at Google.
C: I'm on a languages team; we have multiple sub-teams. Generally, I just want to give you a little bit about observability, especially observability in distributed systems, and how our gRPC integrations work. I think many people have heard about observability, but there are a few different definitions, so I want to clarify my own definition, or our definition, first. We call observability this holistic approach to being able to observe a system for reliability, performance, availability, and so on.
C: We look at different signals in order to achieve that: metric collection, distributed tracing, profiling, and logging are a few of those. I'm going to give you a brief overview of the motivation, some of the best practices, and some of the concepts we came up with in the past years to make our production systems reliable, and explain how OpenCensus came around. I said signals: you can get metrics easily from gRPC services and clients, and you'll be able to trace them.
C: Before we get there: what OpenCensus actually provides is a foundation layer that gives you all these signals from services and allows you to upload the instrumentation data to your choice of backend. But before getting there, I just want to explain some of the core concepts behind OpenCensus and why we came up with those concepts. To give you a little bit of background: we work for a predominantly distributed-systems company, and one of the common architectural patterns we use is the microservices architecture.
C: At any time there are lots of teams working on different microservices, and each team is sometimes responsible for a bunch of different services. Being able to observe our systems is one of the fundamental reasons we became reliable, fast, and user friendly, and in order to be able to observe our systems we had to instrument them, and we invented some collection methodologies, export formats, and new philosophies in this area.
C: Our instrumentation stack cares about efficiency and the overhead of the collection. Observability is a part of our engineering culture, but we actually enabled it by making it easy and low overhead. Before digging more into distributed-systems observability, there's one thing I want to explain, because it's a little bit different: observability is different in distributed systems than in monolithic systems for one particular reason. This architectural diagram is pretty much what you see everywhere.
C: For every product at a microservices company, we usually have a user-facing, very business-logic-driven front-end server. It depends on various other services; authentication, billing, and reporting are the examples there. At some point they will all depend on some database, some low-level search layer, and so on. In this case you can see the data store is a NoSQL database.
C: It's just so hard for those teams to understand the root cause of a problem, especially if it's triggered by their users. In this case, the data store and blob search teams will see fluctuations in their metrics, but will have a harder time understanding what is going on or where the problems originated. This says a lot about the case when things are going wrong, but it's not only when things are going wrong.
C: Teams also want to ask: hey, are we meeting the SLO with this team? Are the higher-level teams that depend on us getting the service that we promised? Or they want to understand the impact of a high-level service on our service, and what happens if that high-level service suddenly grows overnight: is our deployment going to be able to handle that scale? If not, what are our next steps, and so on?
C: This is the reason we want to be able to break these signals down in various ways, especially at the lower levels, because it's becoming more complicated to ask these questions, and we use dimensions in order to achieve that. With dimensions, you can query the collected data in ways that help you get the answers you need.
C: You can query things like: give me the blob storage request latency distribution for RPCs originating at this very high-level product; or give me the traces and reports that contain this specific RPC method; or give me the CPU profile for the RPCs originating at analytics. Take, for example, your compression library: you can gather CPU profiles for your compression library and actually query them in a way that only shows the CPU samples collected if the original RPC was coming from this particular product. So it's great that we can query this data, but the question is how we actually make the signals queryable this way. We record all the instrumentation data with various key-value pairs; we call these key-value pairs tags, and then the backends, for example a metric-collection backend like Prometheus, can filter the collected data by tags.
C: There's one problem here, though, with these key-value pairs. The problem is that in the microservices architecture you actually have no tight coupling between different services, right? So how can the database team know about all these high-level dimensions that other teams might be interested in?
C: This is where we get some help from the world of context propagation. The tags, all these key-value pairs and dimensions, are actually not produced in the low-level services; they are produced at the high-level services, and they're being passed all across the stack as a part of the RPC. So in this case you see that, all the way from the top to the bottom, the RPCs are being tagged, and blob search, at the low level, doesn't have to know anything about them.
C: So, to summarize: we have this culture of producing tags at the high-level services, depending on the specific requirements of the teams, and we propagate those tags all across our RPCs; then each component in the system can record metrics, traces, and so on with these tags. As I mentioned in the beginning, we see observability as a holistic approach; each signal type is useful for answering different questions. For example, distributed traces cannot tell you anything about your CPU hotspots, and CPU samples cannot tell you about overall end-to-end latency and latency problems, so we collect various signals and examine the problem from different perspectives. It became very clear, I think very early, that it's very hard for developers to think about all these dimensions and signal types, build highly efficient instrumentation libraries, and instrument each layer they depend on. That's why we decided to build a common framework, open-source it, and make it vendor agnostic, so anybody can use it against any provider. I personally am working on the Go part, so I will keep the rest of the presentation more Go focused. We've been working on OpenCensus for a long time, but we finally announced it yesterday.
C: OpenCensus is this holistic instrumentation framework, the open-source version of Google's Census project, but we are actually rethinking pretty much everything from scratch and rewriting the entire library; we are not just open-sourcing the existing one as-is. The main reason we are open-sourcing this is that we want to fill that missing building block in the open-source world: we want libraries, frameworks, and all sorts of infrastructure projects to be able to instrument themselves without having to reinvent these concepts, and if you are working at an organization, you can adopt the solutions we have already built or use them as a reference. So OpenCensus provides a single set of libraries.
C: We currently have tags (the dimension-propagation layer), metrics, and traces, and we will support more signals in the future. We have language support for Go, Java, C++, Python, and PHP; JavaScript, C#, and Erlang are coming next. Our libraries are vendor agnostic and can upload data to any backend: we support Zipkin, Prometheus, and Stackdriver today, and we're working on support for more. Some vendors are also working on supporting OpenCensus, so we are expecting them to start publishing their own exporters for OpenCensus. We provide out-of-the-box integrations such as gRPC and, in Go for example, the net/http package. We also provide introspection in our libraries, which is very interesting: we can give you a tiny dashboard reporting the usage from a single process, so without relying on an external service you can see what is going on in a single process.
C: This is how you import the plugin, the integration for gRPC for Go: you just pass a stats handler to the gRPC clients and servers when you're initializing them. In this case we are looking at a server, so in the handler what you can do is extend the tags coming from the incoming context.
C: This is how we record values with the metrics library we currently have. We have a measure, total hellos, that represents the number of times we say hello. The stats record call is going to save an increment (save one) with the tags in the current incoming context, so you will be able to tell the number of hellos broken down by service name or by a specific user.
C: This is how it looks in the dashboards: you can break the data down by the dimensions we have collected. In this case each color represents a different originator service. Baby blue is the number of hellos from the authentication service here, purple is representing another one, and the other two dimensions are represented by green and dark blue. The gRPC plugin automatically creates spans for the incoming and outgoing RPCs, but you can also use our trace package to add custom spans. Here I'm creating a custom child span and finishing it; you can create as many as you want, and you can annotate them and set attributes on them. You can also propagate them on the context: by using the context object, you can just create more children in the same trace. This is an example of the traces collected in the lifetime of this RPC.
C: To summarize: we have this holistic approach and we use multiple signals. Tags allow us to break the data down by dimensions, so each team can produce the tags and dimensions they are interested in and pass them to the low-level services they depend on. We want tag propagation from the core developer frameworks, and from service meshes and load balancers as well, so users can automatically get some sort of instrumentation out of the box, but they can use the same libraries for fine-grained details, as I did in the gRPC server handler example with creating custom spans and so on. Our instrumentation layer is optimized to be very low overhead and low cost, so it makes it easier for libraries and frameworks to instrument without thinking about the cost of instrumentation that much, and once you adopt these concepts and put them in place, it gives you quite a solid foundation layer. OpenCensus is already available and it's vendor agnostic; we already support Prometheus, Zipkin, and Stackdriver, and more exporters are coming very soon. We're eager to focus more on framework integrations and writing more exporters, so I highly encourage you to take a look, give us feedback, and contribute. Thank you so much. More details are at opencensus.io; you can check it out and take a look at the language-specific examples and docs.
C: If you're looking for portability, maybe you would like to switch, and we are now providing some out-of-the-box instrumentation. If you use this handler, you don't have to maintain that layer yourself; you can, for example, just reuse the existing measures already provided by the OpenCensus integration.
C: It's really up to you and your portability needs; for gRPC I'd suggest OpenCensus. Also, for people who are publishing code outside of their organizations: if you would like to provide some utilities and want a portable, non-vendor-specific way to instrument, OpenCensus is the better approach. Otherwise, I think Prometheus is fine if you're using it internally.
C: OpenCensus is involved in two standardization efforts. The first one is initiated by the Prometheus team: they would like to standardize the exposition format, which is also, in a way, standardizing the data format, and we are involved in that. Currently our data model matches it entirely, and we're trying to make sure we keep meeting it; if that standard happens, we would like to implement it.
C: There is another standardization effort around traces, especially around trace context, which is the initial step, because all these backends propagate trace IDs differently, and libraries and the entire ecosystem cannot support vendor-specific propagation formats. Our libraries are not aware of your trace IDs, so we end up dropping things, because we don't want to put vendor-specific code in our libraries and so on. So we are standardizing how we propagate trace IDs in HTTP and beyond.
E: I'm wondering: we have a situation where we have a server that uses gRPC bidirectional, persistent HTTP/2 connections, and there's the option of either starting them all up initially and having them kind of always running, or having them fire up dynamically. We're thinking of siding with just opening up, say, 20 connections, the maximum amount we'd probably need, and just leaving them open on production servers.
D: I guess I should just real quick ask which languages you're all working in. Java? Okay, this doesn't change too much; even better. So whenever you create a channel, it does not connect immediately, so you're free to go ahead and create those channels initially and then just let them sit, and they will connect whenever you need them. There is the channel state API that you can use to ask it to connect for you.
E: Typically, multiple servers are going to be involved here. Do you see potential network-topology-type issues that we might encounter if I just leave them on, meaning that I'm actually connecting them, having a bidirectional stream that's open and ready to go right on startup? Some of these machines sometimes have network issues, so I'm worried a little bit.
D
Right,
so
sorry,
because
you
were
doing
the
but
I
think
you
didn't
instead
earlier
I
forgot
about
it
because
you're
doing
the
by
texting,
you
don't
actually
need
the
connectivity
state,
API
you'll
just
have
the
channel
you'll
create
the
bite
ice-cream
just
initially
and
any
time
it
goes
down.
You'll
get
it
again
enough
cells
that
should
get
a
little
bit
easier.
D: Do be careful not to bump that to too high a value, because it's very hard to diagnose load caused by pings; that can show up on the network or CPU or other things like that. But whenever keepalive is on, gRPC will be sending pings occasionally to check the connection and notice when it's bad, and you can enable that on both the server side and the client side. That applies any time you're doing long-lived RPCs; it doesn't even have to be bidi, although that's the most common case of having long-lived RPCs.
A: Right, if not, then we can wrap it up for today. Thanks everybody for joining. We have the recording, so we'll share that on the YouTube channel, and do check out the meeting notes; we'll put the OpenCensus mailing list and a link to the gRPC demo in there as well, so you can jump in and play around with that. And then we will meet again in a couple weeks.