From YouTube: Config Architecture Review 3.22.18
A
All right, so I think everyone is joining the meeting. Here is a reflection of the effort it took to get to this design. This has been going on for several months, where we've been trying to converge on things; we had a summit a few months ago and a never-ending series of meetings. I think we ended up at a good new place, so let's get going. All right.
B
So what I'm going to talk about is basically a quick recap of where we are today, the issues we've had, and what we expect to do about it. We'll talk briefly about what the Galley component actually is, how we want to do staged rollout and multiple-cluster support, and we'll go through implementation. It's quite possible we won't get through all of it.
B
Where we are today. Okay, so, as everybody knows, customers program Istio using Kubernetes resources, which provides a pretty flexible and consistent model. It's a nice continuum from just programming Kubernetes. The way we have it now, our resources are targeted at individual components, so users are actually responsible for understanding how a particular feature they're trying to achieve maps to those resources, and further, users are responsible for ordering and orchestrating how to make updates.
B
So resource updates are in fact dangerous as it is today. A bad configuration can just lead to a total service outage; there's kind of no warning, you just check it in and it fails, and potentially fails your customers. And, excuse me, the emergent state relates to the previous bullet, about having the user orchestrate many, many updates.
B
If a user doesn't do this and just says, here are the three or four resources I want as the final state of my system, which includes changing these three resources, and they're all pushed simultaneously, the system will experience emergent state and kind of unknown operational modes that the operator didn't intend. That will lead to random breakage, including customer-visible outages. It's all temporary, because the system will stabilize eventually, but it's transient in nature.
B
Additionally,
we
as
it
stands
today
we
have
pretty
poor,
diagnose
ability,
we
don't
report,
we
don't
use
the
canonical
lady's
battery
for
recording
resource
status
and
I
said
individual
features
sometimes
required
from
having
multiple
components,
and
we
don't
really
tell
you
if
the
components
are
or
misconfigured
relative
to
one
another.
You
just
get
stuff
doesn't
work
and
you
gotta
figure
out.
One
letter
issue
is
reuse,
so
in
a
large
dimension
that
contains
a
thousand
services,
there's
a
lot
of
config
state.
B
There's config state that ends up being the same across all these services, and as it is today, we have kind of a mishmash in terms of how we allow reuse of config state across different services within a mesh. Mixer has this two-level composition model, which allows a reasonable amount of sharing across services.
B
Pilot is on a different plan, and we just don't have a holistic story there, which relates to the Istio model: by not having a very clear definition of ownership, of who is logically responsible for different kinds of features, it becomes unclear who has the ability to control the behavior of a service. Is this an attribute that belongs to the service consumer or the service producer? It's unclear.
B
So the distinction between component-centric versus feature-centric configuration, what is it? This is a fine example to show. Say the user is trying to add a quota using an API management system; the user is trying to add a quota to be enforced on a given method call. The user is responsible for changing two different components. First, the quota needs to be declared to Mixer and configured.
B
Once that Mixer-related piece is done, the user needs to wait until that update has actually taken effect, and only then can they proceed to update the Mixer client, which is in the proxy, so that the quota actually starts getting charged. So that's one case; there are several other interactions that are possible. So in our current model, the user is forced to understand the relationship between the Mixer client and Mixer, and the ordering when doing an update. The order is reversed if the user is removing things, but it's all just kind of an exercise left up to the reader. If you do it in the wrong order, you get outages. There is, of course, the pattern of just pushing both at the same time, which, as I said, leads to emergent config state. It probably mostly works.
B
So when you're testing it, it probably just works fine, but in the field, when the system is busy, when it's a large mesh, it might not be fine, and you might have outages that the customer is experiencing as a result of this. All right, so what do we want to do about this?
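The two-component quota case described above could be sketched roughly as follows. These resources are purely illustrative: the kinds, fields, and names (QuotaDeclaration, QuotaEnforcement, request-count) are hypothetical stand-ins for the ordering problem, not the actual Mixer schema.

```yaml
# Step 1: declare the quota to Mixer, then wait for the update to
# take effect. (Hypothetical kinds and fields, for illustration only.)
apiVersion: config.example.io/v1
kind: QuotaDeclaration
metadata:
  name: request-count
  namespace: default
spec:
  maxAmount: 1000
  validDuration: 1s
---
# Step 2: only after the declaration has propagated, enable
# enforcement in the Mixer client (the proxy). Pushing both
# resources simultaneously risks the emergent state described above.
apiVersion: config.example.io/v1
kind: QuotaEnforcement
metadata:
  name: request-count-binding
  namespace: default
spec:
  quotaRef: request-count
```

Removal requires the reverse order: disable enforcement in the client first, then delete the declaration from Mixer.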
B
Basically, we want to introduce a pipeline, a transformation pipeline, that sits between what the user authors and what the system executes at runtime. The pipeline has three primary phases. One is an authoring step, where the user is responsible for composing configuration together to describe the behavior of a service.
B
So
the
let's
talk
with
the
authoring
model
on
the
left
side
of
things.
So
what
we're
proposing
is
a
scalable
model.
There's
effectively.
Two
approaches
to
composable
approaches
for
the
user
to
alpha
can
see.
So
first
is
the
direct
offering
model.
So
that's
that's
what
we're
going
to
do
is
and
the
notion
of
a
serve
is
configurable
talked
about
later.
B
It's
it's
a
way
for
the
user
to
describe
all
the
behavior
of
a
service
in
one
place.
It's
unambiguous!
It's
complete!
If
it's
not
there,
it
doesn't
exist
kind
of
thing.
So
it's
one
place
to
go
to
author
all
in
the
state
associated
with
a
different
service.
So
that's
it's
great
for
bootstrap
and
great
for
small
measures.
It's
easy
to
understand!
There's
my
file.
B
These
different
parties
want
to
have
different
different
roles:
different
permissions
to
edit
different
parts
of
the
config
state.
You
want
to
be
able
to
reuse
config
State
across
services,
so
all
the
services
in
my
mesh
I
want
them
to
use
the
same
adapter.
To
talk
to
talk
to
it
back
here
or
I
want
them
to
use
a
consistent
set
of
labels
for
how
they
how
we
manage
the
traffic
route
and
so
forth.
B
The other side is the composable authoring model. That's a set of small documents that are composed together using tooling in order to produce the service config. So, architecturally, the service config is what enters Istio; the composable config is outside the realm of Istio and is handled by external tooling and external systems, as we said. The final piece of the puzzle is the mesh config, which is also kind of blended together with the service config to produce the final result. Mesh config I won't talk about very much in this presentation.
B
It's really intended to capture low-level component control, so things like controlling the cache sizes that an individual component is using, or turning on some debug flags for a particular component, that kind of stuff. So it's kind of global and operational, where the service config is strictly in the realm of abstract service behavior.
F
So I had a comment on the document about service config, and I was hoping to discuss it. We're still talking past each other in terms of what's going on there. I think there's a hole in terms of talking about the totality of the mesh state being expressible in terms of service config and mesh config. That...
B
The idea here is that the service config defines the behavior of the service, the behavior of the mesh relative to that service, for anything coming into or leaving that service. So, in fact, if the service is a client: in this interaction there are client-side parts of that interaction, and there are server-side parts of that interaction.
F
As
its
calling,
but
that's
the
bit,
that's
that's
where
we're
I
think
we're
talking
past
each
other
is
like
you're,
saying
service
a
is
calling
service
B,
but
in
general,
like
there's,
no
single
service,
a
right
like
I
can
be
in
a
situation
where
there
is
no
service.
That's
calling
right!
The
client
is
not
a
service
or
the
client.
It
belongs
to
five
services.
F
No, I'm saying that I, as a person who is running the client, want to control some behavior about how I interact with a different service. Just for the sake of argument, let's say that we decide that circuit breaking is a client concern, right? It may or may not be, but just for the sake of argument, let's say it is. So I have a client, it's not in a service, and I still want to be able to do circuit breaking when that client calls a remote service.
D
So I mean, Spike is right in context, right? Clients will need to override networking behaviors in the context of a specific client. Yes, it's hard to mount clients into service definitions. Now, there are scoping mechanisms that, I think, address the concern here, but they need to be enumerated in the docs, right? Can any client override aspects of the service config in their own context?
D
In any world where the service config is the canonical authoring format, the clients can request that the owner of the service config put in the necessary knobs to let them control behavior, right? That's the model today, for instance, in virtual services, right? When we revised all the networking APIs, they shared one model.
D
One
in
the
default
scenario
right,
the
service
producer
will
generally
own
the
networking
attributes
right
weeks.
Sp
the
most
common
power
to
orthogonals
without
there
are
ways
for
people
to
achieve
the
access
control
properties,
but
you
want
outside
of
the
system
right.
They
can
use
code
review,
for
instance,
and
they
don't
have
these
two
ready
tackle
to
achieve
it.
And
thirdly,
if
they
want
a
more
sophisticated
thing,
then
they
can
move
to
composable
and
fix
and
have
fun
green
apples
within
the
deployment
system.
It's
up
all
those
Pere
vallès.
We
agree
on
that.
B
Okay, so service config. As we've been discussing, the idea there is to have a single document that describes the behavior of a service within the mesh. It is not designed to describe clients of these services, but it does define how a service interacts when the service itself is a client; it provides the configuration for the service as a client as well.
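As a sketch of what such a single feature-centric document might look like, here is a hypothetical service config; the kind, field names, and values below are purely illustrative, since the actual schema was still under design at the time.

```yaml
# Hypothetical service config: one document, in the service's own
# namespace, describing all mesh behavior for that service.
kind: ServiceConfig            # illustrative kind, not a real CRD
metadata:
  name: reviews
  namespace: default
spec:
  server:                      # behavior when acting as a server
    tls: mutual
    quotas:
      - name: request-count
        maxAmount: 1000
  client:                      # behavior when acting as a client
    circuitBreaker:
      maxConnections: 100
    retries: 3
```

Because there is only one artifact, there is nothing for the user to order or orchestrate; the pipeline derives the per-component configs from it.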
B
This document is centered around features, not individual components. The idea here is to systematically decide what's a client-side concern and what's a server-side concern and organize the configuration accordingly. It also completely eliminates the ordering and eventual emergent-state issues that we explained, since there's only one artifact that describes the behavior of a service; there are not multiple artifacts that need to be coordinated by the user. The access model here is simple.
B
The
service
config
is
in
the
same
namespace
at
the
service
and
access
to
the
service
resource.
You
have
access
to
the
service
config.
It's
that
that's
simple
and
the
service
object
that
the
service,
config
and
resource
provides
a
clear
path
toward
a
immediately
stay
for
a
lot
model
which
might
not
get
all
right.
The
composable
can
save
it
really
well
about
composition.
So
it's
a
number
of
small
documents.
It's
they're
designed
to
be
reusable,
so
you
can.
You
can
share
documents
across
across
across
services.
B
You
can
apply
access
control
to
individual
documents,
but
that's
largely
up
to
the
in
our
first
iterations,
at
least
up
to
the
operator
to
design
this
design
this
edit
and
approval
workflows
in
their
system.
So
it's
important
to
note
this
general
approach
favours
per
service
control
when
the
when
resources
and
configuration
is
introduced
into
the
system.
So
if
you
have
this
global,
you
want
to
do
a
global
switch.
B
Everybody
is
using
TLS
in
this
approach
that
that
global
change
is
done
in
the
in
a
composable
config
somewhere,
and
somebody
needs
to
write,
run
the
packager
for
each
service.
In
order
to
produce
me
the
service
config
file
that
work
that
has
this
change
reflected
in
so
from
that's
effectively
to
fund
the
pipeline.
That's
so
there's
no
instantaneous
change
to
the
system.
Everything
goes
through
the
same
rigor
of
it's
a
service
level
change,
that's
rolled
out
gradually
in
the
cluster.
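A rough sketch of how that global TLS switch might flow through the composable layer; the fragment name and the include mechanism are hypothetical, since the composition tooling lives outside Istio.

```yaml
# mesh-defaults.yaml: a shared, reusable fragment edited once
# (hypothetical format).
defaults:
  tls: mutual
---
# reviews-composable.yaml: a per-service composable config that
# references the shared fragment. The packager merges the two and
# emits an updated service config for the reviews service; re-running
# the packager per service is what makes the "global" change roll
# out service by service instead of instantaneously.
include:
  - mesh-defaults.yaml
service: reviews
```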
B
Galley is a Kubernetes controller that ingests the service config and the mesh config. It performs validation on a number of dimensions of the service config: there's semantic correctness, there are referential-integrity checks within the config itself, and there's what I call temporal validation, which is to give you warnings about how your config changes over time. So you can be warned that, hey, you intend to set this quota to zero, and as of yesterday you had it set to a thousand, and this might cause outages. All right, so that's validation; next is consolidation, optimization and transformation.
B
So
that's
about
taking
all
these
these
self
service,
config
documents,
kind
of
slicing
them
horizontally,
discovering
what's
common
and
among
them
consolidating
it
that
way.
Looking
at
the
and
individual
features
and
there's
there's
cases
where
you
could
move
functionality
and
have
it
done
by
the
proxy
and
not
by
mixer,
or
vice
versa,
dependent
on
the
the
configuration
of
the
system,
the
the
galleys
also
ingests,
cluster
topology
information.
So
it
understands
an
individual
cluster.
What
it
looks
like
and
it's
its
scale
and
dimensions.
B
So
that's
been
drive,
sharding
decisions
as
well
and
finally,
once
the
component
level
increases
there's
a
scheduling,
scheduling
regime
that
that's
how
ye
station
that
galle
does
so
for
a
single
change.
It
might
be
car
pushes
on
its
in
order
to
achieve
correct
ordering.
So
that's
that's
takes
care
of
this,
so
the
galley
run
in
every
cluster
and
as
it
says,
it
consumes
cluster
topology
to
do
its
drive
its
decision.
So
this
is
a
graphical
form
of
what
I
was
describing
service.
B
Service configs come in, mesh config comes in, Galley validates and allows these configs to enter the system if they're valid; there are a number of transformation steps, and finally out come component configs that are delivered to the individual components of the mesh. I show a cache here.
B
It's
we'll
have
to
see
how
things
scale
as
we
create
this.
This
might
not
be
necessary,
but
I
expect
as
a
as
a
metric.
Girls
very
large
at
some
point,
you'll
want
to
cashier
that
may
be
shared
between
multiple
instances
of
gali
I
sold
this
the
right
side
of
the
diagram
here
in
a
chills
distribution
we've
been
talking
about
different
ways.
To
achieve
this.
B
There
is
what
I've
called
here
direct
the
direct
approach,
which
is
effectively
having
gali
app
socket
open
to
the
individual
individual
components
in
the
mesh
and
just
doing
an
RPC
call
to
deliver
could
see
it's
very
direct,
very
simple.
It
doesn't
require
persistent
persisting.
The
component
confused
since
they're
all
unimportant
creations,
direct
direct
functions
already
the
service
configs,
it's
not
necessary
to
store
them.
B
The operator can't program those directly; you have to go through the service config. The second approach is a resource-based approach, where each of the components on the right side is actually listening to Kubernetes resources and reads its config that way. The effect is that the component configs become architectural: they become part of our API, part of our supported surface.
B
And
it
makes
it
so
Galle
is,
is
no
longer
the
choke
point
or
stuffed
Internet
system,
so
the
user
can
go
and
offer
these
things
directly
and
then
there's
a
middle
ground
where
maybe
the
user
can't
offer
these
things.
But
at
least
you
can
look
at
these
things
when
you
do
evaluate
which
models
for
tests
over
time.
The
second
approach
is
definitely
more
expense
in
commutation
and
in
memory
load
and
network
load
within
the
cluster,
but
it
provides
a
good
access
for
so
that's
a
thing
to
the
main
of
the
next
Tamar
town.
Yes,.
D
We don't have to run Galley to be able to write a Pilot, for instance. If Pilot had an overt API contract that it could enforce, then any testing driver could just drive that API, and Pilot could be tested in isolation, without depending on everything else behind it in the stack being present, which is a premium engineering property. But given the dependencies...
B
But
so
okay,
so
are
you
saying
directly,
is
good
yeah?
Okay?
Well
because
it
applies
to
both
in
reality
what
you
described.
So,
if
you
imagine
a
pilot
consuming
CR
DS
as
it
does
today,
different
CR
DS,
but
still
on
the
CR
D
plan,
we
can
easily
run
it
in
I
select.
Well,
we
can
run
it
in
relative
isolation.
It
needs
a
kubernetes
api
server.
It
means
right.
D
Now,
that's
the
point
that
I'm
making
that's
already
a
ting
complex
test
furnace
right.
We
already
have
difficulty
with
that's
a
good
thing.
If
pilot
has
an
open
facing
API
and
didn't
read
series
and
has
there's
something
else
that
was
responsible
for
that
yeah,
most
of
functional
complexity,
while
it
is
supposed
to
be
you
give
me
the
information,
how
am
I
supposed
to
serve
it
right,
that's
the
thing
that
is
supposed
to
do
and
the
other
stuff
is
really
a
separate
integration
concern.
B
All right, so I think the last thing I have time to cover is the staged rollout model. So what are we trying to do? We're going to do a lot of validation up front and have a strong user model to try and capture errors as soon as possible, but hey, stuff keeps getting through anyway, so we're also trying to minimize the blast radius of a bad config in the system. So you deploy your config, and in a large mesh with many, many services...
B
Many
many
instances
of
a
particular
service
you
can
do
staged
rollout
kind
of
at
the
Apollo
shut
one
pound
down
started
up
again
start
that
and
it
starts
up
in
a
new
mode.
That's
fine!
For
when
you
have
a
lot
when
a
a
few
say
like
two
pods,
then
you
can
have
a
50%
outage
like
pushing
back
config,
so
we
want
to
deliver
smaller,
green
rollout,
smaller
than
a
pod
and
really
being
traffic
driven.
So
not
only
can
we
do
selection
based
on
we
can
we
can
select.
B
Let
me
stick
to
my
script,
all
right,
so
the
idea
here
is
that
we
provide
the
core
mechanisms
to
do
this
staged
rollout,
but
the
operators
still
kind
of
running
things
from
the
outside.
So
the
operator
tells
us
what
algorithm
to
use
to
split
traffic
between
old
and
new.
The
operator
needs
to
tell
us.
Okay,
I
want
more
users
to
use
it
now.
I
want
more
users
and
they're
the
ones
running
that
external
for
loop
that
drives
you
to
the
new
they're,
the
ones
to
detect.
B
When
there's
a
problem,
I
mean
they
need
to
rollback,
happy
I
think
it's
quite
likely
that
we'll
be
delivering
a
kind
of
full
answer
to
all
these
questions.
To
all
these
problems.
At
some
point,
we'll
have
an
opinion
saying:
here's
a
premade
module
module
to
do
this.
Well,
it's
always
going
to
be
optional
and
it's.
The
intent
is
to
let
the
users
existent
systems
deal
with
this.
So.
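The operator-driven loop described above might look roughly like this; the rollout stanza is entirely hypothetical, standing in for whatever mechanism ends up splitting traffic between the old and new config.

```yaml
# Hypothetical staged-rollout stanza attached to a service config
# change. The operator's external loop edits `percentage` upward
# (e.g. 10 -> 50 -> 100) while watching error rates, and rolls back
# to the previous config revision if a problem is detected. Splitting
# by traffic share, not by pod, avoids the 50%-outage problem a
# two-pod service would hit with pod-level rollout.
rollout:
  strategy: traffic-split
  oldConfigRevision: v1
  newConfigRevision: v2
  percentage: 10              # share of traffic using v2
```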
B
So my preference would be to do another session like this to finish going through this, encouraging people in the meantime to update the doc, and I'd like to schedule a separate session with Spike to just discuss this thing, and then we can report back to the group on the outcome of that. I think that'd be the best, most efficient use of everybody's time. All right.
D
So,
maybe
into
one
more
session
to
go
through
the
proposal
and
then
schedule
a
separate
Q&A
session
so
to
make
sure
that
you
know
once
people
have
gotten
all
their
comments
in
the
dog,
they
can
me
come
through
because
it's
going
to
take
a
bunch
of
time
like
I,
have
the
same
like
I'm
well
aware
of
spikes
concerns
trigger
and
I
discussed
them
a
lot
in
the
context.
The
networking
ideas
so
I'll
make
sure
I
get
my
comments
and
announcement
on
that
vein
into
the
dog.