From YouTube: Config Working Group 2.22.18
I've spent a lot of time trying to figure out the testing status for ourselves, transitively trying to use the community's testing framework. I spent more than two days on that one and I just gave up; it's just dependency hell. And I was dealing with release-related stuff, so I didn't spend much time beyond that.
This refactor is fairly deep, so I kind of split it into three steps. I just merged the first step, and I'm going to change the client, and the next step is changing the storage model, so that's also in progress; I didn't really get into the design yet. The public and internal formats at this moment are in pretty different API groups, and that's it for prototyping. [names unclear] and I just talked about how we're going to roll out the config, and we had some discussion and shared some ideas.
So I would say the rollout is in progress, as is the design, and we have also made some decisions on how the format represents cluster versus namespace scope. We decided to align with Kubernetes RBAC, like having ClusterRoles and Roles: Roles for namespace-scoped resources and ClusterRoles for cluster-scoped ones. I also sent the PR to the Istio API, but some people don't agree with that, and comments are still coming in.
With that system we are able to have the namespace, the package, and the path mapped to a Kubernetes API group for all the resources for Pilot, and after that we're going to see how to do it for Mixer. Not too much progress on the validators and the lint tests; no work has actually started on the validator parts, but at least the mechanism we talked about is sorted out. We were planning on using some validators for the doc generation, but probably not too much beyond that. So, Jason, you wanna give updates on P0?
I think ultimately, once the architecture that we're going to talk about in a minute kind of rolls out, the validation moves upstream, so either Galley is going to do some of it, or the offline tool is going to do something as well; ideally the two components at runtime just get fresh, clean config.
It's wiring everything through and getting the setup scripts done. It's tedious work that's very error-prone. Okay, so: make sure it's coming through for the setup and tests first, and then if we add a new component you end up having to review it all. I thought that the validation for Mixer was already wired in.
All right, so we had a meeting sometime in January, I guess, to kind of hash out a number of issues about what we want to do with the config architecture. Since then I've started this architecture doc and kind of drew this diagram, and the doc is about explaining this diagram and how it all works. So actually, let me go to the previous introductory diagram. All I want to do is kind of go over this diagram in 20 minutes and give you a sense of where our collective mind is at at this point.
So right now in Istio, people write configs by submitting CRDs into a cluster, and the CRDs are largely component-oriented: you program Mixer to do something, you program Envoy to do something, and that's the way it works; it's cluster-local. Ideally, we want to move to a model where the operator is thinking in terms of features, not in terms of components. So the user wants...
The operator would like to impose a quota between two components, or would like to collect a particular metric, such things, as opposed to having to understand that Mixer is responsible for doing part of that and Envoy is responsible for doing another part of the same thing. So this kind of thinking. In addition, part of our initial requirements for what the rework should do is eliminating emergent states in the system, which is an inherent problem in Kubernetes.
The fact is that pushing resources, pushing new configs, is inherently non-transactional in nature. Pushes are eventually consistent, so you end up pushing configs, they show up at different rates in different systems, and the system experiences an emergent total state that's unexpected. That can lead to failures, or transient failures at least. So we ended up with this kind of model.
So, I'll ignore the first box, the light gray box that's there, and we'll talk about that afterwards, but effectively we recast the configuration for Istio in terms of a service config resource. All the state that's necessary to describe the behavior of a single service is captured in that one resource, and that includes...
Okay, so all the state necessary to describe a single service is captured in a service config. That includes how to configure Mixer, the routing rules that are applied, the different policies, the adapter parameters; all that stuff is in one file that can be introduced into the system in an atomic, transactional way.
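The atomic, whole-document model can be sketched roughly like this. This is a minimal Python sketch; the field names `routing_rules`, `policies`, and `adapter_params` are invented for illustration and are not the actual Istio schema.

```python
# Illustrative sketch only: field names are made up, not the real Istio schema.
import threading

class ServiceConfigStore:
    """Holds one immutable config document per service; updates swap the
    whole document at once, so readers never see a half-applied change."""
    def __init__(self):
        self._lock = threading.Lock()
        self._configs = {}  # service name -> config dict

    def apply(self, service, config):
        # Basic shape check before accepting (stand-in for real validation).
        for key in ("routing_rules", "policies", "adapter_params"):
            if key not in config:
                raise ValueError(f"service config missing {key!r}")
        with self._lock:
            self._configs[service] = config  # atomic whole-document swap

    def get(self, service):
        with self._lock:
            return self._configs.get(service)

store = ServiceConfigStore()
store.apply("reviews", {
    "routing_rules": [{"match": {"version": "v1"}, "weight": 100}],
    "policies": {"quota": {"max_rps": 100}},
    "adapter_params": {"prometheus": {"metrics": ["request_count"]}},
})
```

The point of the single-document design is that readers either see the old document or the new one, never a mix.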
From that single file, we will convert into a number of component-specific things, so you go from the user's intent to exactly what Mixer needs, exactly what the proxy needs, and the broker and the other components we're creating. This transformation is first done in a cluster-agnostic way, so kind of generically.
This is how the system should work, and then finally it's transformed into a cluster-specific set of configs that are delivered to the individual components. That's taking the high-level intent of how the system should behave and adapting it to the reality of the cluster's physical topology. So that's kind of the general transformation pipeline. Moving back to the left of the diagram.
There I showed this intent config, and this is effectively a utility layer and a usability layer on top of the service config. It provides composition on top of the core, the basic abstraction. The idea is that the operator can create a bunch of small configs and compose them together to create the larger service config, and the reason this is useful is that it allows sharing. You could have multiple service configs that are built out of the same core parts, so effectively you can...
You can compose a bunch of configs and produce a variety of service configs out of them. This is how, at the mesh level, somebody can say: hey, I'd like all of my services to collect the same set of metrics. You put that in a little shared thing that's inherited, that's used by all the different service configs, and so you can apply policies horizontally that way. In today's Istio world we do some of that at runtime.
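A minimal sketch of that composition idea, with invented field names (this is not the real format, just the shape of it): a shared mesh-level fragment is merged into each per-service config.

```python
# Illustrative sketch: how small shared fragments could compose into
# per-service configs. Field names are invented for the example.
import copy

def compose(*fragments):
    """Deep-merge config fragments left to right; later fragments win on
    scalar conflicts, lists are concatenated."""
    result = {}
    for frag in fragments:
        for key, value in frag.items():
            if isinstance(value, dict) and isinstance(result.get(key), dict):
                result[key] = compose(result[key], value)
            elif isinstance(value, list) and isinstance(result.get(key), list):
                result[key] = result[key] + value
            else:
                result[key] = copy.deepcopy(value)
    return result

# Mesh-wide fragment shared by every service:
mesh_metrics = {"metrics": ["request_count", "request_duration"]}
# Per-service configs built from the shared piece plus their own routes:
reviews = compose(mesh_metrics, {"routes": [{"dest": "reviews-v1"}]})
ratings = compose(mesh_metrics, {"routes": [{"dest": "ratings-v1"}]})
```

Changing the shared fragment in one place changes every service config built from it, which is the horizontal-policy effect described above.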
What we're looking at is this. Starting on the top left, the operator is producing these intent configs; again, those are the small little pieces. They're kept by users, probably in the user's source control system, and the user then runs a tool to consolidate all of those and output service config files. It's also possible, there's a dotted line going from the operator straight to the service config file...
The operator can write their own service config file by hand if they want, and they can run orthogonal tools outside of this to produce these files. I think you had an example yesterday of a tool that could be used to compose things together [tool name unclear]. Okay, so the operator composes these service config files somehow, and they get delivered as CRDs, as resources, into the API server. On the receiving end, Galley is a webhook for the API server that gets these configs.
So, oh, by the way, the compilation step on the left side is expected to perform all sorts of validation: semantic correctness, referential integrity checks; I've got a list of the top five different things. So the service config file is assumed to have undergone a number of these checks already. It's got a certain level of correctness kind of built into it. It's self-contained, so it doesn't contain references to other resources.
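One of those checks, the self-contained property, could look roughly like this. The schema here (`subsets`, `routes`) is invented purely to illustrate referential integrity within a single document.

```python
# Illustrative sketch of the "self-contained" check described above:
# every name a rule refers to must be defined inside the same document.
# Schema and field names are invented for the example.

def find_dangling_refs(service_config):
    """Return names referenced by routing rules that are not defined
    as subsets within this same config document."""
    defined = {s["name"] for s in service_config.get("subsets", [])}
    referenced = {r["subset"] for r in service_config.get("routes", [])
                  if "subset" in r}
    return sorted(referenced - defined)

config = {
    "subsets": [{"name": "v1"}, {"name": "v2"}],
    "routes": [{"subset": "v1"}, {"subset": "v3"}],  # v3 is dangling
}
```

A document that passes this kind of check can be reasoned about in isolation, which is what lets the later stages trust it.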
So you can reason about it in isolation. That gets pushed to the API server, the API server calls Galley, Galley performs second-level validation on this stuff, and finally the resource is accepted and injected into the system. When that happens, Galley then takes the incoming resource and starts basically slicing it in terms of components: understanding what the config is trying to do and then figuring out what part goes to the proxy and what part goes to Mixer. I show a box going from Galley to storage.
It's not clear whether that's actually needed, but from a practical standpoint, that would just be a cache to avoid doing the same transformations again. Galley is logically stateless: it takes the inputs and, as a direct function, produces the component-level configs. But these component-level configs are cluster-agnostic, so they're not aware of the topology of an actual cluster.
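Since Galley is described as a direct function of its inputs, the cache box is essentially memoization. A rough sketch, with an invented transformation (the proxy/Mixer split here is made up for illustration):

```python
# Illustrative sketch: Galley as a pure function from service config to
# per-component configs, with a cache in front. The split logic is
# invented; it just shows the shape of the idea.
import functools
import json

@functools.lru_cache(maxsize=128)
def transform(service_config_json):
    """Pure function: the same input always yields the same component
    configs, so results can be cached freely. The input is JSON text so
    that it is hashable."""
    cfg = json.loads(service_config_json)
    return json.dumps({
        "proxy": {"routes": cfg.get("routes", [])},
        "mixer": {"policies": cfg.get("policies", {})},
    })

doc = json.dumps({"routes": [{"dest": "v1"}], "policies": {"quota": 10}})
out1 = transform(doc)
out2 = transform(doc)  # served from the cache; identical result
```

Because the function is pure, dropping the cache only costs recomputation, never correctness, which matches the "not clear whether that's actually needed" remark.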
Pilots are running in each cluster, and in this model the mission of Pilot, the purpose of Pilot within the global Istio context, is that its scope of influence is increased: Pilot is now responsible for delivering all the config within a cluster. It already is building a model of the cluster topology to program the proxy; in addition, it would also use that same knowledge to figure out how to program Mixer.
The protocol between Galley and Pilot, and between Pilot and the components, is specifically not CRDs; it's something to be designed. The idea here is that there's a limit to how much the API server can scale, and this is really not user state; this is system runtime state, so the model of the API server is not quite right for what we need here.
So once Pilot gets in the picture, it kind of lowers the config to what the cluster needs and sends it out to the individual components running in the system. That's kind of the general flow and the general responsibility of all the pieces that we're looking at. And in addition to just the raw distribution, there's the rollout aspect.
This diagram is maybe not that useful, but what we're looking at is how to do a staged rollout of configuration, so that you can introduce configuration changes gradually within the system instead of as a big bang. The goal here is to do what we've started to call resource blending or config blending, which says the compiler can be given two different versions of a config; this is the old one...
This is the new one, and the compiler will produce a blended config, a blended service config that contains the state of both. That blended service config gets distributed, effectively instantaneously, through all the clusters, but by default, by the nature of this process, you're going from an old to a new config.
Both are blended together and sent down to the system, but at runtime, as the system is looking at this incoming config, by definition, by the nature of this protocol, all the components are going to use the old config, even though there's a new one sitting there. The idea is this is kind of to eliminate the eventual consistency problem.
When a request enters the system, there's going to be a small algorithm running in the proxy to decide: hey, am I going to use config A or config B for this request? Typically, the algorithm is just going to be a random number generator applied over a percentage of requests: ten percent of requests use A, ninety percent use B.
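A sketch of what such a selection algorithm could look like. This version hashes a request ID instead of drawing a raw random number, so that the same request gets the same answer everywhere in the system; that detail is my assumption, since the actual mechanism is still to be designed.

```python
# Illustrative sketch of the per-request A/B config selection described
# above. Hashing the request ID (rather than a raw RNG) keeps the choice
# stable for a given request, so every component can agree on it.
import hashlib

def pick_config(request_id, percent_new):
    """Route `percent_new` percent of requests to the new config ("B"),
    the rest to the old one ("A")."""
    digest = hashlib.sha256(request_id.encode()).digest()
    bucket = (digest[0] * 256 + digest[1]) % 100  # deterministic 0..99
    return "B" if bucket < percent_new else "A"

# The same request always gets the same answer:
assert pick_config("req-42", 10) == pick_config("req-42", 10)
# At 0 percent everything is old, at 100 percent everything is new:
assert pick_config("req-42", 0) == "A"
assert pick_config("req-42", 100) == "B"
```

Dialing `percent_new` from 10 up to 100 gives the gradual transition described next, with no redistribution of config needed along the way.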
The specifics of the algorithm don't really matter, but there's a selection that's made of which config to use, and that selection is sent down, used in the proxy, used in Mixer, and by other interested parties in the system. So that's how we achieve a smooth transition from A to B: the algorithm is gradually tweaked so that you start at 10%, 20%, 30%, 40%, and at some point all your traffic...
...is using the new config. Once you're in that phase, then we can deliver another config throughout the system that only contains B; it's no longer a blended config. That last step again has no effect on the behavior of the system, except for reducing a little memory consumption by removing the old config from caches.
So right now, as I mentioned in the beginning, the only thing Istio actually has is this: the cluster-specific component config. We're figuring out the strategy to get this rolled out. One of the efforts we're doing now is starting to express the config in a higher-level form, so it'll be more suitable to be applied inside a service config directly.
So it's no longer component-specific; it's more abstract. The next step after that is figuring out how to compose these things into these little itsy-bitsy pieces, so we can have the intent-level config as a source of input to the system. In practical terms of moving forward, a first step is the work that's being done today of getting a Galley into the system that performs no transformation and doesn't involve Pilot in the scheme. So it basically is ingesting...
It's going to ingest some configs, spit them back out, and get them into the system. It's not very useful, except that it gets Galley into the pipeline. Once that's there, the next step is probably to start changing the input format to be more like the service config and the intent config stuff. So that's on my to-do list: to sit down with folks and come up with the actual list of the different steps we want to do over the next six months. And that's about it.
My view is, Galley is also like a per-cluster configuration API, and I don't really see... Galley sits above the cluster, so I don't know how that's going to work. Yeah.
So there's a couple of things. First, we do want to support a multi-tenant design where Galley actually controls many meshes, because there are partners of ours, and us, that want to do these hosted scenarios. So architecturally, the model has to be that Galley is not directly in the mesh, but it can affect it.
It'd be nice if we could kind of split off the concern of the multi-tenant Galley, with maybe the first step being a single-tenant Galley. So we kind of sort out the single-tenant Galley, and then we talk about how multi-tenancy is going to work here, because the Kubernetes API server is not multi-tenant, and it doesn't really sit above the cluster; it relies on add-on components like the Cluster Registry to be able to find and address different clusters.
We've generally agreed, ignoring the issues of cluster affiliation, on the role, and on these different pipeline steps and the transformations that are happening, so we're trying to move in that general direction immediately. While the details are still being finalized, I think these are basically, from this perspective, no-regrets moves: we know enough that we can do these things, and the design refines itself as we go.
Exactly, exactly. So that's all. I think we know the general direction; we've already discussed this when we had the discussion, so I think the core is not going to be contentious. The details of exactly the protocol and that kind of stuff, that's what we need to argue about. I do expect a lot of discussions to come.