From YouTube: SIG Architecture Community Meeting 07 24 2017
Description
Kubernetes architecture Special Interest Group community meeting, July 24th, 2017. Discussing the SIG charter and PodPreset.
A
Okay, so we don't have a lot of time, so let's get started. What I wanted to do today, what I wanted to make sure we got done today, is to kind of finalize the SIG charter. A kind of rough outline had been sent out in email numerous times, actually, and there wasn't really any debate about the scope. So, just before this meeting, I wrote it up and sent a PR, which is kind of short and simple, and just posted it to the chat.
A
But, you know, basically it's things that we've discussed before: project-spanning issues, not things that are the domain of individual SIGs; defining the scope of the project; documenting and evolving the system architecture; defining and driving the necessary big architectural points (the design details for those big points might live in other SIGs, like API Machinery today); establishing and documenting the design principles (there is an existing doc for that); establishing and documenting the conventions for the APIs (and there is a doc for that); developing the necessary technical review processes, like the proposal and architecture review processes (there is actually an open API review process proposal from Phil); driving improvement of overall code organization, including GitHub orgs and repositories; and educating approvers and owners in the other SIGs by holding office hours. So those are the things we discussed previously in email. As I was writing this up, it occurred to me that there are some...
A
I can't think of the right words: logical artifacts, or, you know, ways of factoring the system, that are not covered here. I talked about GitHub orgs and repositories, and I talked about, well, actually, did I mention the word components? I think, yeah, the word components is in there. But I didn't talk about binaries, or releases, or artifacts, or things like that; I intentionally left those out for now. We can think a little bit more about what should be SIG Release, or, you know, wherever the build stuff lands, versus SIG Architecture.
C
I think one of the things we talked about at one point was to have a set of, sort of, you know, starting docs for SIGs, so that they could actually have something sane out of the gate, and then it was up to those SIGs to evolve that over time. I don't know what your thoughts are on when we do that; how we do that, I think, is another question. Architecture cuts across so many things, so it's not like we're starting out new. So, you know, the question comes up again: yeah.
A
I think we need to do that soon. I would say, for the governance bootstrap, getting the election going is kind of more urgent. I think this does not contain everything a charter needs to contain, but I think it's useful to have a concrete example to iterate on, so I just want to capture the things that have been going around in email. And, you know, if anything is super objectionable, I will just rip it out, but I would like to have a stake in the ground, a starting point, so we can iterate on it.
A
Exactly. I mean, in terms of approval process, there are some de facto processes for some of these things already that we will need to formalize. Most technical areas of the project have an official list of approvers; for API approvers, there's an informal list. It has been informal because it doesn't correspond one-to-one with code, necessarily. It's a subset of, I think it's a subset of, the people in the API directory, although that may not be true, because I had myself removed so I don't get spammed by GitHub automation. But yeah.
D
There is a doc right here; cool. It was unclear to me whether or not this group should own things like versioning and deprecation policies, because I have a specific ask around etcd2 versus etcd3. It isn't that important to take this up here; we can do it on the mailing list, but I want to get a quick sanity check on whether this is the right forum, or some other place, really.
D
The quick TL;DR (Justin and Joe are here as well) is that it was unclear to me whether or not this is the sort of thing that would ultimately get punted to, say, Cluster Lifecycle, because this came up in a community meeting, right. The versioning doc is kind of buried someplace, and it's kind of unclear what it is. There's another versioning doc that sort of relates to API versions versus the support policy, and then I can't really find much of a policy that declares versioning or support of third-party components.
D
I think, as we try to break out into those layers, the gray boxes on the bottom, we certainly need to define what the guarantees are that we expect between us and those gray boxes on the bottom. Just in terms of surface area, you probably want to reduce that, and define, like: how are we validating that all these things are correct, and saying these are the things that a given release of Kubernetes works with concretely today?
D
I just want to throw out the quick idea that it sounds like our current versioning policy implies that once 1.8 is cut, we drop support for 1.5. 1.5 was the first release to introduce support for etcd3; 1.6 was the first release to default to etcd3 clusters. So I think that roughly means that we could conceivably drop the automated upgrade tests that upgrade from etcd2 to etcd3 as part of generating CI signal for the 1.8 release.
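The support-window arithmetic described above can be sketched as a toy check. The "current plus two previous minor releases" window is taken from the reasoning in the discussion (1.8 is cut, 1.5 drops out); `oldest_supported` and `supported_minors` are hypothetical helpers for illustration, not real Kubernetes tooling.

```python
# Toy sketch of the support-window arithmetic discussed above, assuming a
# "current plus two previous minor releases" support policy.
# These helpers are hypothetical, for illustration only.

def oldest_supported(current_minor: int, window: int = 3) -> int:
    """Return the oldest supported 1.x minor for a given current minor."""
    return current_minor - (window - 1)

def supported_minors(current_minor: int, window: int = 3) -> list[str]:
    """List the supported 1.x minor releases, oldest first."""
    start = oldest_supported(current_minor, window)
    return [f"1.{m}" for m in range(start, current_minor + 1)]

# Once 1.8 is cut, 1.5 falls out of the window:
print(supported_minors(8))            # ['1.6', '1.7', '1.8']
print("1.5" in supported_minors(8))   # False
```

Under that reading, no supported release path still starts from an etcd2-defaulted cluster, which is what motivates dropping the etcd2-to-etcd3 upgrade tests from CI.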
C
So, so, I see Justin put his hand up. I think there are two issues here. Number one: what is the process? How does this thing actually fit into that process? I think that's worth discussing. And then there's the, sort of, block-and-tackle of what this means exactly for etcd2; like, we didn't deprecate it, so how do we think we run that process? Okay, so, Tim St. Clair.
C
SIG Release? I, you know, my gut here is that, for things that are going to be controversial and impact the rest of the project, SIG Architecture is a good place to essentially communicate and suss out, sort of, the 10,000-foot view of what the impact is. The actual implementation of how this stuff gets rolled out, and the nitty-gritty details of coordinating that: delegating that to a different SIG, whether it be SIG Release or Cluster Lifecycle or whatever, I think probably makes sense.
A
There was some stuff I ripped out. I mean, the most recent update was me just ripping out stuff that was totally wrong, since it was written so early in the life of the project; there was stuff that we went a totally different direction on. But, yeah, I think changing any of the current numbers, or any of the current policies, is obviously going to require the involvement of the relevant SIG.
D
I just didn't want to go into the weeds on this; I just wanted to raise it as an example of something that's cross-cutting. Were we to have a steering committee, I totally would say this is their job, to figure out who actually owns what. Just in the pragmatic, moving-forward sense: do you guys think that maybe it's worth raising this on kubernetes-dev and asking the community? How should we drive this forward? Yeah, actually.
B
Another hallmark of a really good charter is that it almost acts like a razor blade: you can put something on one side of it or the other. So if it's a question about the long-term viability of some part of the interface, whether it's going to be deprecated or not, SIG Architecture is probably more forward-looking, versus Cluster Lifecycle and others that are more backward-looking. So I feel like, if our charter is really good, we're going to be able to look at it and throw things against it, and it's going to split them either left or right; it's going to be in our wheelhouse or not in it. And I think that'll be a good way of determining whether the charter is well written, in the long term.
F
So here's another example of an issue which cuts across multiple SIGs, arguably. I think that probably many people on this call are familiar with the existence of this PodPreset resource and its API group, which is called settings. This is a canonical example, for me, of the chicken-and-egg problem that we can have when developing things that need new machinery. So just the TL;DR is that this resource allows the user to express, I guess...
A
Does anybody else have any more action items on the charter before we move on to other topics? As for where we ended up, and the to-dos: yes, I'm going to put in one line about the deprecation policy. We have an existing document, and there's no other place right now, other than SIG Architecture, to own that, so I'll put it there.
F
Okay, so TL;DR: this resource basically allows you to specify a loose coupling between a label selector, which describes which pods should be mutated, and the thing those pods consume. The driving use case for this was Service Catalog bindings. So there's a loose coupling between who gets it and what shape it takes. We originally developed this as a core API group driven off an admission controller, because we did not have initializers yet.
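As a concrete illustration of the loose coupling described above, a PodPreset pairs a label selector (who gets mutated) with the env and volumes to inject (what they consume). This is a sketch against the alpha settings.k8s.io/v1alpha1 group from the Kubernetes 1.7 era; the metadata name, label, and Secret names are hypothetical.

```yaml
# Sketch of a PodPreset (settings.k8s.io/v1alpha1, alpha as of Kubernetes 1.7).
# The selector names which pods get mutated at admission time; env describes
# what they consume. The label and Secret names here are hypothetical.
apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
  name: inject-db-binding
spec:
  selector:
    matchLabels:
      role: frontend          # pods carrying this label are mutated
  env:
    - name: DB_HOST
      valueFrom:
        secretKeyRef:         # e.g. a Secret produced by a Service Catalog binding
          name: db-binding
          key: host
```

The pod author only opts in via the label; the shape of what is injected lives entirely in the preset, which is the two-sided loose coupling being discussed.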
A
Yeah, I dropped a message about that in, I forget, some issue or PR or something; since I black-hole all GitHub notifications, I lost track of it. But this was an issue that Joe raised, and I had actually raised it previously, before initializers were a viable option, which is the main reason why it went into the kubernetes repo originally.
A
We are going to need the ability to move APIs around as we try to move towards the more stratified architecture design. So I don't view moving an API around as necessarily being a user-visible change, at least in the future; my goal is to make it not necessarily be a user-visible change. We are also going to need flexibility in terms of how we package components. We've had hyperkube for a long time and never really pushed it over the finish line, but in some scenarios people may want an all-in-one binary, and in others not.
A
So, as a pragmatic matter, since this feature is kind of being driven by the needs of Service Catalog, my suggestion is: put it in Service Catalog's API server for now. I mean, Service Catalog is taking the bullet for the project as far as driving API aggregation, and potentially the initializer mechanism, over the finish line, so I do appreciate that, but...
C
In addition, there's this pattern that when things move to GA, they get turned on by default, and when something is turned on by default in, you know, the current controller manager, it's there forever. Whereas if it's in something like another aggregated API server that's optional, then it's very clear that it, you know, does not have that stamp of approval. And I recognize that, you know, we don't have a...
C
We should have the flexibility to have, sort of, conformance levels and feature levels and feature modules that are separate from the binaries that we ship. But the reality is that, when something ships in kubernetes/kubernetes, it really is a stamp of approval, and official, and it's seen as being the way to do things.
F
I agree, so I just wanted to state my position: I'm definitely in favor of moving it out of the core, and I'm in favor of moving it to the Service Catalog API server. And I know that Doug Davis and Aaron Schlessinger from Service Catalog are here; I wonder if either of them would like to present their opinions.
G
Yeah, my only concern with moving to Service Catalog is that, if we do envision something like PodPreset being generic and not Service Catalog-specific, that will require someone to install Service Catalog and get it up and running just to get PodPreset. It's not a huge concern, but it is a concern, and the reason it's not a huge concern is that I don't think anything uses it yet except Service Catalog. But the minute someone else does come along, then I think it becomes a very real concern to me.
G
The other thing I want to mention before I sort of lower my hand is: Joe, you made a comment there about how things that aren't in core don't have to follow (and forgive me for butchering your words) the exact same process as core features, and I'm wondering whether that's really true as we look to split the core out into individual API servers. I would kind of assume the process would remain the same; it was more just a structural change, I think.
C
There's a difference in my mind between the binaries and how we run stuff, and the expectations out of the APIs in those API groups. With an aggregated API server, some company off the street can actually go through and say: here's a new API server that you can install; it has a bunch of stuff, you know, new API groups. And the deprecation policy and the support policy for that can be completely different from the rest of Kubernetes. Now, I think there's a question of, like...
C
If something's, you know, part of Kubernetes and not an incubator project, maybe one of the requirements that we state, as part of being a Kubernetes project, is that you have to adhere to the rest of the Kubernetes policies around this stuff, and I think that's great. But we haven't actually sort of lined up, or made that decision, in terms of what it means to be a Kubernetes project.
A
For example, I am going to want, at some point, to move APIs out of the v1 core API group into other API groups, and potentially, at least in some configurations, into other binaries, right? But that doesn't mean that suddenly the deprecation policy would not apply to the Pod API anymore. Maybe that's a bad example, because that's nucleus; but take ReplicaSet, or something like that, that is still widely used.
A
So it's true that we need to figure that out, but I think things at different layers will have different policies. Things in the client library layer, or things in the ecosystem, may have more relaxed policies; but as you get closer in, that tiny nucleus may even have a more stringent policy than our current deprecation policy.
C
I think, as we start (and this is a larger discussion) the discussions around conformance, being part of a conformance profile might mean that you have to support a certain deprecation policy, right? So we can use that tool to essentially curate what people expect, not in terms of behavior, but in terms of long-term support and deprecation policy. My point here is that these things are just disconnected, right.
F
Certainly no more of a burden than it would be to install it if we were to move PodPreset into its own API server; it's not significantly different from the burden attendant to doing that. And I don't think anybody is saying that this should stay in core, so I don't view that factor as necessarily being a big issue.
C
And it's up to the Service Catalog folks, but it may be worthwhile to name it appropriately, so that it's more Service Catalog-ish, and leave room for a more generic thing as it gets proven out, as more people come along saying, hey, I want something like this. You know, not everything has to be a super-duper generic building block for the ages, I feel.
F
I would like to avoid renaming the API group to have more linkage to Service Catalog. I think the right way to go with this is to just move the API group wholesale: keep the names the way they are. That keeps the settings API group free of entanglement with the Service Catalog API group, and that way it's easy to relocate the API group into its own API server at a future point in time without doing additional renames or re-specs. Thanks, yeah.
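One way to see why moving the group wholesale keeps relocation cheap: with the aggregation layer, which server actually serves a group/version is just an APIService registration pointing at a backing service, so moving the group later means repointing that registration rather than renaming anything. A minimal sketch, assuming the group stays settings.k8s.io; the backing service name and namespace are hypothetical.

```yaml
# Sketch of serving the settings group from an aggregated API server via a
# kube-aggregator APIService registration (apiregistration.k8s.io).
# The backing service name and namespace below are hypothetical.
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1alpha1.settings.k8s.io   # must be <version>.<group>
spec:
  group: settings.k8s.io
  version: v1alpha1
  service:                         # relocating the group later = repointing this
    name: catalog-apiserver
    namespace: service-catalog
  insecureSkipTLSVerify: true      # illustration only; use caBundle in practice
```

Because clients address the API by group and version, not by which server hosts it, the registration can later point at a dedicated settings API server without any user-visible rename.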
H
Yeah, Paul covered most of it. The only thing I'd say is: Brian, you mentioned, sort of, this idea of return on investment. I think Aaron (the other Aaron) and you folks in SIG Testing can kind of be the gauge on whether it is worth breaking it out, at a later date, into a new aggregated API server. But I am certainly now in support of leaving it in its own generic API group and just putting it into the Service Catalog aggregated API server right now, yeah.