From YouTube: gRPC Community Meetup February 2023
Description
Protoconf: Configuration Consumption with gRPC
Configuration changes are at the heart of any software operation. Protoconf, which is heavily inspired by Meta's `Configerator` tool, is designed to rethink how configuration is composed, delivered, and consumed by software in the fastest and safest way.
In this session, Shahar Mintz gives an overview of Protoconf's architecture and focuses on how gRPC and protobuf are used to consume and programmatically alter configurations.
So, as was said, my name is Shahar. I work as a DevOps consultant and architect. I previously worked at some big and small companies, startups, big tech, and corporates, and I've seen a lot of configuration challenges that I wanted to solve with Protoconf.
Why would the gRPC community be interested in this talk? This is what ChatGPT had to say: you are going to care about centralized config management and how to unify all your configs under a single system, and about dynamic configuration updates.
You get type safety for your configurations: you never get a string where there needs to be a Boolean, so there's less risk when loading new configuration. It's compatible with gRPC because it was built for gRPC applications, and built with gRPC. And since it uses Git, you get configuration observability and a history of what configuration changed and when. I want to talk about all of this, but first I want to give a brief history of configuration management and how we got here.
The first generation of configuration management started in the 90s with CFEngine, which introduced the declarative versus imperative approach. Before CFEngine, all Unix machines were configured by shell scripts, so if a shell script failed for some reason, it would have to continue from where it stopped, because you might not want to delete again something that you had just created. CFEngine introduced the declarative, eventually consistent way of doing configuration.
The second generation, with Puppet and Chef, came in the early 2000s, when more hyperscalers started to appear. We started to see Chef and Puppet managing hundreds to tens of thousands of servers, with a better syntax than CFEngine, a better codebase architecture, and a better client-server architecture that enabled various things, more features, and reports on what's going on in the system.
But then we entered the cloud with EC2, and we started to see ephemeral instances, either spot instances or dynamic auto scaling, and we needed a fix for that. So Ansible and SaltStack were introduced, with two different approaches: Ansible took care of provisioning and therefore knew all the systems in the environment, while Salt had a subscription approach and a discovery service for machines.
A
So
salt
could
know
them
shares
that
are
coming
going
and
then
we
entered
into
the
era
of
containers
and
we
started
to
see
tools
like
kubernetes
and
Helm
and
terraform
for
SAS
services.
If we go from the software back to the system, the SaaS, the cloud: all of this is eventually software. So let's talk about the difference between what software wants and what humans want. Software wants static data for configuration, JSON in this case, but maybe even just a binary blob if it can get one, since that will be easier and safer to parse. Humans want to write code: logic and functions that help them write better configuration with less hassle.
I took this quote from the SRE workbook: "The quality of the human-computer interface of a system's configuration impacts an organization's ability to run that system reliably." This is from the Google SRE Workbook, and I think it's a big deal; I think it's one of the fundamentals of running a reliable system.
So, with everything I learned about configuration, I decided to write Protoconf. It's heavily inspired by a tool called Configerator that runs within Meta, or Facebook, whichever you want to call it. Configerator is basically in charge of all software configuration changes at Facebook: it configures the load balancer, it configures the monitoring system, it configures all the software that runs inside Meta. Protoconf has a few stages in the config life cycle.
So when we compose a config, we first want to define the structure of the configuration, and we do it with protobuf. Why? So we can then compile it to whatever language we want, and the configuration that was compiled and composed by the Protoconf compiler will be easy to read from our app.
A
Then
we
write
the
code
and
we
compile
the
the
config.
Let's
see
how
it's
actually
happened
in
real
life,
so
first
we
Define
the
structure.
As
I
said,
this
is
a
pretty
simple
config
with
a
integer
of
connection
timeout
and
we
have
a
nested
struct
here.
So
we
can
do
a
really
complex
configuration
just
by
using
protobuf
system
that
we
use
the
stalactic
language,
which
is
pretty
much
like
python.
If
you
know
python,
it
will
be
easy
for
you
to
use.
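A schema along the lines described above might look like this sketch (the message and field names here are assumptions, not the actual code from the slide):

```protobuf
// server.proto -- hypothetical sketch of the config structure
syntax = "proto3";

package demo;

message ServerConfig {
  // The integer connection timeout mentioned in the talk.
  int32 connection_timeout_seconds = 1;

  // A nested struct, showing that configs can get arbitrarily complex.
  message Retry {
    int32 max_attempts = 1;
    int32 backoff_ms = 2;
  }
  Retry retry = 2;
}
```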
The reason we use Starlark is that it's a limited version of Python: Starlark doesn't have access to sockets or to files, so it's guaranteed that whenever you run the code, it will always produce the same output, no matter whether you run it on your own computer or somewhere else.
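A Starlark config along these lines might look as follows. This is a sketch, not the slide's actual code: the `load()` path and the generated `ServerConfig` helper are assumptions about Protoconf's conventions.

```python
# server.star -- Starlark (Python-like syntax), not Python
load("//protos/server.proto", "ServerConfig")

def main():
    # Plain logic is allowed, but no sockets or file I/O,
    # so the same code always produces the same output.
    timeout = 5
    return ServerConfig(
        connection_timeout_seconds = timeout,
    )
```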
After you compile the config, you'll see you get a JSON representation of the configuration, and now you can continue to validate your code. So, we talked about composing; now let's talk about validating. The first thing Protoconf introduced for validation is the ability to write validation code, just like unit tests.
You write code that says: this produced config is okay, or it's not okay, and if it's not okay, it will never be written out, so you don't risk creating a bad, invalid configuration. We'll see in a minute what that looks like. In the future, I want to support the validate.proto standard, so it will be easier to just write inline rules inside the proto files.
A
Currently,
this
is
the
way
to
do
it
and
after
compiling
and
before
we
deliver
it
to
production,
we
can
run
the
software
locally
against
the
configuration
and
after
we
feel
we're
ready.
We
push
it
to
GitHub
or
gitlab,
and
we
can
just
run
a
standard
review
both
for
the
code
and
for
the
actual
config
before
we
we
ship
it
to
everywhere.
This is how it looks when we add validation. In this example, we just make sure that the connection timeout is three or higher, and if not, we'll get a compilation error and we'll never write out the config. After we have the config, we can run the Protoconf agent locally and connect to it with our app; we will see that in the consuming part.
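The validation rule from this example could be sketched in Starlark like so; the exact way Protoconf registers validators is an assumption here, but the unit-test-like shape is what the talk describes:

```python
# validate_server.star -- Starlark sketch (validator registration
# API is hypothetical)
def validate_server_config(config):
    # Fail compilation if the timeout is below three, so an
    # invalid config is never written out.
    if config.connection_timeout_seconds < 3:
        fail("connection_timeout_seconds must be >= 3, got %d" %
             config.connection_timeout_seconds)
```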
A
So
this
is
how
we
validate
our
config.
So
the
next
thing
to
do
is
to
deliver
it
and
we
have
some
some
keys
to
delivering.
It
needs
to
be
fast.
We
want
to
deploy
in
seconds
and
not
in
minutes,
because
if,
if
we
have
fast
delivery,
we
also
have
fast
rollback
so
the
the
fastest
we
deliver
the
change
the
fastest.
We
can
do
rollback
to
a
change
and-
and
if
something
goes
wrong,
we
can
always
roll
back
and
change.
It
needs
to
be
safe.
We
want
to
minimize
the
blood
values.
If we have something like a single point of failure in the configuration delivery pipeline, then we might be doomed. It needs to be simple, so every organization can use its own strategy, and it is as simple as just running `protoconf insert` with the path to the root of our configs and the paths of the configs that we want to load.
A
This
is
the
suggested
deployment
model,
so,
after
everything
was
merged
to
main,
we
will
run
a
protocol
insert
hook
that
will
insert
our
configuration
changes
into
console
or
ATD
or
zookeeper
zookeeper.
These
are
the
supported
key
value.
Storage
that
we
currently
support
are
all
battle
tested
software.
So
you
can,
you
can
trust
it
and
then
the
agent
detects
the
changes
on
the
key
value
store
and
can
stream
the
the
config
change
to
the
application
through
grpc.
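The delivery flow above can be sketched as a tiny simulation: a key-value store notifies an agent of changes, and the agent streams each change to subscribed applications. All names here are illustrative stand-ins; the real Protoconf agent watches Consul/etcd/ZooKeeper and streams over gRPC.

```python
from typing import Callable, Dict, List


class KeyValueStore:
    """Toy stand-in for Consul/etcd/ZooKeeper with change notifications."""

    def __init__(self) -> None:
        self._data: Dict[str, str] = {}
        self._watchers: List[Callable[[str, str], None]] = []

    def watch(self, callback: Callable[[str, str], None]) -> None:
        self._watchers.append(callback)

    def put(self, key: str, value: str) -> None:
        # Store the value, then notify every watcher of the change.
        self._data[key] = value
        for callback in self._watchers:
            callback(key, value)


class Agent:
    """Streams config changes from the store to per-key subscribers."""

    def __init__(self, store: KeyValueStore) -> None:
        self._subscribers: Dict[str, List[Callable[[str], None]]] = {}
        store.watch(self._on_change)

    def subscribe(self, key: str, callback: Callable[[str], None]) -> None:
        self._subscribers.setdefault(key, []).append(callback)

    def _on_change(self, key: str, value: str) -> None:
        # Fan the change out only to apps subscribed to this key.
        for callback in self._subscribers.get(key, []):
            callback(value)


store = KeyValueStore()
agent = Agent(store)
received: List[str] = []
agent.subscribe("services/backend", received.append)

# Simulates a post-merge `protoconf insert` writing a compiled config.
store.put("services/backend", '{"connection_timeout": 5}')
print(received)  # the subscriber saw the new config
```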
So that's the delivery; now for the consuming. This is all you need in order to consume a Protoconf config: you just compile this service definition to your language and you can use it from your code. Just connect to the Protoconf agent, which runs either on your local machine or as a sidecar to your container.
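The agent's service definition is roughly a subscribe-then-stream shape. This is a hypothetical sketch; the real Protoconf definition may use different service, RPC, and message names:

```protobuf
// agent.proto -- hypothetical sketch of the agent's gRPC service
syntax = "proto3";

import "google/protobuf/any.proto";

service ProtoconfService {
  // Subscribe once, then receive a stream of config updates.
  rpc SubscribeForConfig(ConfigSubscriptionRequest)
      returns (stream ConfigUpdate);
}

message ConfigSubscriptionRequest {
  string path = 1;  // e.g. "services/backend"
}

message ConfigUpdate {
  google.protobuf.Any value = 1;  // unpacked into the app's config type
}
```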
Whatever you do, you initialize the configuration that you have (you can have default values when you initialize the configuration), and then you just subscribe to the configuration. Every time there's a new config change, you just unpack the config into the previous config instance, and the app continues to read from it.
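The client-side pattern just described (defaults, subscribe, swap in each update) can be sketched like this. The names are illustrative; a real Protoconf client unpacks a protobuf message streamed over gRPC instead of a plain dict.

```python
import threading
from typing import Any, Dict


class ConfigHolder:
    """Holds the current config; reads and updates are thread-safe."""

    def __init__(self, defaults: Dict[str, Any]) -> None:
        self._lock = threading.Lock()
        self._config = dict(defaults)

    def on_update(self, new_config: Dict[str, Any]) -> None:
        # "Unpack" the streamed config over the previous instance.
        with self._lock:
            self._config = dict(new_config)

    def get(self, key: str) -> Any:
        with self._lock:
            return self._config[key]


# Initialize with default values, as the talk suggests.
holder = ConfigHolder({"connection_timeout": 10})
assert holder.get("connection_timeout") == 10

# A streamed update arrives; the app keeps reading from the holder.
holder.on_update({"connection_timeout": 5})
print(holder.get("connection_timeout"))  # → 5
```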
You don't have to use the Protoconf compiler and Starlark in order to compose the configs: as long as you comply with the protobuf schema, you can generate the configuration from JavaScript or Ruby or whatever language you'd like, and then you can also introduce your own validation process before you deliver everything. And since the agent has a gRPC definition, you can replace the delivery model as well.
In fact, when I talked with a friend about it, saying that maybe people would be afraid to use the agent in production, he said: well, the agent is so simple that you can basically rewrite it in Rust. And this is what he did: he wrote a Rust version of the Protoconf agent and gave it to me as a birthday gift.
I never released it because it used nightly features, but now that those are probably being merged into stable, maybe I should revisit that. And then, as long as your app uses Protoconf to consume the configuration, you can use whatever agent you want.
A
But
there
is
one
more
thing
that
I
wanted
to
talk
about.
Up
until
now,
we
discussed
how
the
human
interacts
with
the
machine
when
it
comes
to
configuration
management.
But
maybe
you
want
a
machine
to
change
configurations
as
well.
Maybe you want to change the log level from a portal, or maybe you want to make configuration changes based on telemetry information, so I also want to show you the mutation RPC.
This is the mutation RPC: you can just connect to the mutation server and alter the configs.
When we look at the code, the code in the upper part is how you connect and make an RPC change through Protoconf, and below is how you load the config into the Starlark code; from there you can do whatever you want to do. And this is how it looks: from your application server, you can make the RPC to the mutation service. The mutation service will recompile everything and push it back to Git, and all the configuration will be streamed into your applications.
So, as I said, it can be part of the CI, it can be part of the monitoring system, or you can make user interfaces for less technical staff who need to control the configuration of the applications. We need help! First, we need to get from zero production users to one, and then more. So if you're interested in using it, please approach me, and I'll see how I can support you with that.
I need help writing helpers in various languages, so it will be easier for people to use Protoconf; maintaining the codebase, of course; and writing documentation and examples. And if you want to join the steering committee for the project, you're more than welcome. If you want to learn more, this is our docs site. Please star us on GitHub and show us some love, you're welcome to join our Discord (I'm hanging out there all the time), and follow us on Twitter.