From YouTube: SIG Architecture First Meeting 07-10-2017
A
It was more widely publicized this time, so probably this meeting will be more focused on just getting the lay of the land for the SIG, discussing whether we're at the right meeting cadence, and discussing what we want to accomplish long term, and less on policy decisions. So without much further ado: the agenda is available at bit.ly/sig-architecture. I will also paste a link in the chat, which will be available if you don't have it already. So, yeah, without further ado, let's get started. Brian?
B
There are a bunch of areas that I have been driving but that haven't been adequately socialized across the entire project. Some things have been; some of the materials have existed for a long time, like the API conventions document, which was written principally by Clayton but is a collaboration. More recently, I've been trying to evolve it to more clearly describe the architecture, broken down into kind of coarse-grained chunks, to help people understand how the system is structured, or how we would like the system to be structured.
B
The system was initially intended to be modular, which is why we have a lot of independent control loops, for example. But in practice it has evolved as a monolithic system, with principally two monolithic binaries, and we don't really have any mechanism by which we can break them up, at least optionally, for deployments that want that.
B
Additionally, with the upcoming conformance efforts to try to ensure application portability among lots of different Kubernetes distributions and services, a number of things are kind of coming to a head all at the same time, like the pain points with the mono-repo and so on. So there seems to be critical mass to kind of broaden the effort beyond just me.
B
So that's what the architecture SIG is intended to do, as well as to be a home for efforts that didn't really have a clear home, like breaking up the repository into multiple repositories, or at least, you know, better supporting multiple repositories, if you want to look at it that way, since we now have at least three GitHub orgs and many, many repositories at this point.
B
That would be one of the responsibilities: to provide some clarity on that. And I know that already some questions have come up about some parts of the proposal and what it means, so clearly there's work to do to clarify that, and we would like to get the proposal documents converted to markdown, checked in, and approved. There's actually no real process for making any kind of feature proposal; we have been kind of cargo-culting an informal process, but I would advocate formalizing that.
A
Great. So in terms of the mission moving forward, I think having a consistent architectural vision over time is going to be held in this group. I think that's super important: to have that guiding light in terms of what is consistent with the design principles and what isn't, especially when it comes time to vet different features, or things that might get moved into core or out of core.
B
Absolutely. One of the first things we need to do is to write up the charter for this thing. A lot of that is contained in email, actually, so we should take that, use it as a starting point, flesh it out a little bit, and try to get it checked in. I assume that would just go in our subdirectory in the community repo.
A
So in terms of our work moving forward, Brian, you've got a number of documents, and one of the things was a presentation that you went over during the leadership meeting. I don't know if you want to kind of hit some of the high points of that, but I think for people who are maybe new to this effort, understanding the stratification that you're sort of proposing, and the way that you're developing the taxonomy around that, would be really interesting, just to get an idea of what that is.

B
Sure.
D
Good point. So one thing that we have come across numerous times in Service Catalog is that the knowledge of the best patterns to follow in developing Kubernetes-like extensions, which probably 85% of them overlap with the existing API surface, isn't written down anywhere. For Kubernetes itself, there's not a Kubernetes development guide that you can point people to that explains the rationale and the patterns. And in addition to that, I think probably most people on this call are familiar with the fact that there are artifacts of several different generations of approaches present in the code.
D
For example, there are lots of different mechanisms in use in different controllers that represent snapshots of what the current thinking about the best way to do things was at the time. And I think it would be really helpful, as we now have API aggregation to encourage extensions to Kubernetes to be built in the shape of Kubernetes itself, if we could have some kind of assembled guidance beyond what we currently provide for, say, API changes.
B
And this is an area that's evolving as well. So we definitely need to create such a guide, but, you know, the people who are building the extension mechanisms that we're going to be relying on, like API aggregation and initializers, and a lot of the people using them, like Service Catalog, are going to be evolving the mechanisms, as well as the best practices around them, as they do it. Yeah.
D
I agree, and I expect also that we will continually evolve those things, so I don't think that we should wait for the evolution to stop. It is a tricky problem to find the right balance between what's in the system now and what we think the eternal truths are, but I think that we can make some progress without being overly skewed towards things that will change immediately as we develop the mechanisms. Yeah.
B
Okay, so can you see my slides? Yes? Okay. So this was the presentation that I did during the summit. It is public, so you should be able to see this document. There's a shortcut link there on the first slide, and I can post that to the chat, I think; actually, the way I'm presenting, I can't see the chat anymore, so I'll do it later. So, yeah, the stratification that was mentioned looks like this.
B
I actually walked through every API and major function in the system and tried to categorize them, and there's a longer document, which some people say is too long, that tries to capture everything and the rationale. We can definitely discuss that in more detail as we flesh this out and try to write up a concrete proposal and get it approved. But the idea was to try to capture groups of functionality in ways that we could build a layered system, so you could know what you could depend on when you're adding something to the system, and also, as we try to tease pieces apart and pull things into extensions or into other repositories, what would make sense to pull where. So, the nucleus: the idea there was to capture the essential bits of Kubernetes, such that it would not be considered Kubernetes without those bits, as well as to provide the essential mechanisms needed to build the higher-level layers of the system.
B
So that includes the API server, which is the central piece of the control plane, and the kubelet. Pretty much to do anything in Kubernetes, you need to execute a pod. You could just say, well, the API machinery itself is the basis for everything, and while that's true, you know, we're not just building API machinery here; we're actually building a system that can execute and manage containers, and workloads running in those containers. So I wouldn't consider it to be Kubernetes without the kubelet.
B
These pieces I wouldn't consider pluggable in the system. That was also something I wanted to capture in this bit that I called the nucleus, which is the core API machinery control plane and the execution engine, which is the kubelet. Those parts are not considered pluggable, although the pieces the kubelet relies on, like the pod, what I'll call the container runtime, the networking, and the volume plugins, are. And I'm being a little bit loose with the terminology there, because we haven't really settled on the final interfaces.
B
The application layer would be the APIs that you would put into a Helm chart or other resource configurations describing your application. So things like Deployment or StatefulSet, or even Service and Ingress, I put at the application layer. And then at the governance layer I put other automation and policy enforcement mechanisms; these kinds of things can be done in other ways. Sorry, one thing about the application layer: pieces of it are intended to be pluggable. For example, the scheduler is pluggable; the ingress controller is pluggable.
B
Even kube-proxy, people have stubbed out and replaced with other mechanisms. So core pieces of the application layer are considered to be pluggable. The governance layer is considered to be more optional, or at least that's my current thinking, because that could be accomplished in different ways, and you don't necessarily have to have these kinds of policy objects or automation objects in the description of your application's topology.
B
So this is a range of things, from ResourceQuota and LimitRange on the enforcement side, to horizontal pod autoscaling on the automation side. And then beyond that is the interface layer that clients would be built on. These are the client libraries that we're now producing, the client SDKs, as opposed to the server-side aggregated APIs we proposed earlier, for people building on top of Kubernetes or building other kinds of clients for Kubernetes. I would actually even put kubectl into that layer.
B
Our intent is that you should be able to use Kubernetes without necessarily having to use kubectl, and we have been trying to pull complex logic out of kubectl and actually pull it into the lower layers of the system. So, for example, garbage collection, which covers core resource and application lifecycle, we pulled into the server side of the system. And then beyond that is the ecosystem, and these are things that we define as being outside of Kubernetes.
B
There may be a huge number of these projects. Just so people understand why that is: the CNCF is intended to be a foundation that hosts many different projects, which can interoperate and work well together but aren't necessarily tied to each other. So, for example, you can use linkerd with Kubernetes, or Prometheus with Kubernetes, but you can also use them with containers running on other platforms. So I really wanted there to be a home for Kubernetes-specific projects that aren't part of Kubernetes proper but are part of the ecosystem.
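Taken together, the layering described above can be sketched as a simple table. The placement of components below just echoes the examples given in the discussion; it is illustrative, not an official or complete taxonomy:

```python
# Illustrative sketch of the layering described in the discussion above.
# The component placements mirror the examples mentioned; this is not an
# official or complete taxonomy.
LAYERS = {
    "nucleus": ["kube-apiserver", "kubelet"],             # not pluggable
    "application": ["scheduler", "Deployment", "StatefulSet",
                    "Service", "Ingress", "kube-proxy"],  # pluggable pieces
    "governance": ["ResourceQuota", "LimitRange",
                   "HorizontalPodAutoscaler"],            # optional policy/automation
    "interface": ["client libraries", "kubectl"],         # what clients build on
    "ecosystem": ["Helm", "linkerd", "Prometheus"],       # outside Kubernetes proper
}

def layer_of(component: str) -> str:
    """Return the layer a component is sketched into, or raise KeyError."""
    for layer, members in LAYERS.items():
        if component in members:
            return layer
    raise KeyError(component)

print(layer_of("kubelet"))   # nucleus
print(layer_of("kubectl"))   # interface
```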
B
So that's the rough idea of the layering. My general sense is, you know, it is complex, so I am guessing it will evolve over time. And as far as escalations go, I think escalations will help us solidify the boundaries between the layers. As we figure out, oh yeah, that's a tricky case, we have an inversion of what we thought the layers should be there, then we can decide: do we need a way to fix that, or should we change the definition of the layers?
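The "inversion" mentioned here can be stated mechanically: in a layered system, a component should only depend on components in its own layer or below. A toy check along those lines, where the component names and dependency data are hypothetical and purely for illustration:

```python
# Layer order from lowest to highest, following the discussion above.
ORDER = ["nucleus", "application", "governance", "interface", "ecosystem"]
RANK = {name: i for i, name in enumerate(ORDER)}

def inversions(deps, layer_of):
    """Return (component, dependency) pairs where a component depends on
    something in a HIGHER layer -- the inversions that would force either
    a fix or a change to the layer definitions."""
    bad = []
    for comp, targets in deps.items():
        for target in targets:
            if RANK[layer_of[target]] > RANK[layer_of[comp]]:
                bad.append((comp, target))
    return bad

# Hypothetical example: a scheduler (application layer) depending on an
# autoscaler (governance layer) would be flagged as an inversion, while
# its dependency on the apiserver (nucleus) is fine.
layer_of = {"scheduler": "application", "apiserver": "nucleus",
            "autoscaler": "governance"}
deps = {"scheduler": ["apiserver", "autoscaler"]}
print(inversions(deps, layer_of))  # [('scheduler', 'autoscaler')]
```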
B
So, you know, I did look through quite a bit of the system, and examples of things built on top of the system, to come up with the layering, but I'm sure it's not 100% complete, and there have been things added since. For example, custom resource definitions didn't exist when I wrote the original proposal.
B
I like what I'm seeing. I was putting in chat what Cluster Ops had done. It's not the same layering, but it's relevant; it's sort of one zoom-out click from there, about the whole system. So I put some links to that in chat, because we'd spent some time actually building up where Kubernetes fits in an environment, and there's some overlap with this in a positive way. So it's nice to see the additional clarification there.
B
Yeah. So, also, I want to say about the layering: I would imagine that there may be some deployments that want to deploy these layers broken down into separate components or containers, and some deployments that won't want that. So one thing I think we will need to move towards is more flexibility in how we build releases. We already have hyperkube and separately built components.
B
I think we'll need to expand on that and provide more options for how the components are factored. For people who don't remember: originally, the scheduler was part of the controller manager, way, way, way back, and when we split the scheduler out from the rest of the controller manager, we did that actually kind of as an experiment, to prove that Kubernetes was indeed modular, and to prevent the scheduler from becoming entangled with the rest of the controller manager. Just adding that component broke every cluster setup in the universe. So that was one issue.
B
We'll need to figure out how to address that going forward, but I imagine there won't be a one-size-fits-all factoring of the binaries and the other release artifacts, like the containers packaging the binaries, as well as what processes you actually need to run. Certainly for some scenarios, packaging all of the binaries together in order to reduce the resource footprint, like in a Minikube sort of scenario, for example, might make a lot of sense; in others...
B
...they may want more flexibility to place certain components, like the scheduler, which is one of the principal reasons we broke it out. And there can be a number of reasons why you might want to break these things apart, whether it's for fault isolation or scalability reasons or whatever. That was another thing.
B
With the scheduler, we anticipated it would eventually become resource intensive, and we wanted to be able to scale it out, or scale it up rather, separately from the other components. But for some small clusters, it's more overhead to run more components, and it's more complexity to run more components. So I guess what I'm saying is this layering doesn't necessarily imply a particular factoring of the release artifacts.
B
I think it depends on how we use the meeting, you know. If we have more agenda items than we can get to, then we can potentially expand the time. I wouldn't want to expand the time just to expand the time. And we need to figure out how to make forward progress on the action items we need to do.
H
I'd definitely like to clear the deck over the next few weeks. I think this is something that we've long delayed. So maybe, like the points we were bringing up earlier: how can we, as quickly as possible, get some of the things that we know settled into a written form? I think that's the reason to have meetings: to get something established here.
B
I would suggest we actually maybe start by email. Also, just to let people know: I am NOT that responsive to any form of communication, because I'm pretty busy. In particular, Slack is good at notifying me, as opposed to GitHub notifications, which go into the void, but I can't necessarily provide a synchronous response, even on Slack. So just be aware of that; I apologize, but there's not a lot I can do about it. Yeah. So maybe starting an email thread, or starting in the agenda doc, fleshing out the issues that we need to get movement on, and then assigning owners for those things. If we have owners for those things, we can check on their status in the next meeting. If we don't have owners for them, that can be one of the things that we do in the next meeting: assigning owners to them.