From YouTube: Kubernetes SIG Architecture 20190131
A: Okay, welcome everyone to the Kubernetes SIG Architecture meeting. It is Thursday, January 31st, 2019. I am your host, Jaice Singer DuMars; I work at Google and I'm one of three chairs. Shall we go ahead and get started? The agenda is available at bit.ly/sig-architecture.

One of the things I put in here, just as a quick note: as we get higher attendance and as our topics get more complicated and contentious, keeping to the time windows for discussions is really difficult, and as a moderator, trying to manage discussions that are going in a hundred different directions is incredibly difficult. This also dovetails with a lot of feedback I've been getting that SIG Architecture is starting to feel a little bit more like we're, you know, bikeshedding and rambling on about things than actually making progress, and there are a lot of things we, as a group and team, need to actually work on and get done. So I'm hoping maybe we can focus on making these meetings a little more action-oriented and less discussion-heavy, and if there are things we need to discuss, maybe we can move those to the mailing list, because that's a little better for people who want to get their thoughts together in an ordered way. So I just put a few working agreements in the agenda, mainly just:
Let's make feedback constructive and actionable. Let's favor work over discussion when we can. If it's a comment, let's try to put it in the chat; that helps get it written down rather than having to speak it out loud. And lastly, if you're talking more than listening, just be mindful of the other folks who are in the conversation, because only one person can talk at a time. So hopefully none of that is too radical or contentious.
I just want to make sure that we're doing the best we can to be productive. We have a lot to do and not many people to do it, so let's really make 2019 a year where we start making progress on a lot of these things. And with that, let's go ahead and kick off talking about the Windows progress. Brian Grant, you have the lead on that, as you've been working in that area.
B: Yeah, great. So there has been a lot of work done on the Windows KEP; the link is in the agenda notes doc, so please take a look. Thanks for all the work on that, and to the other folks who helped review it. There is a pretty comprehensive list of functionality that has been confirmed to work or not to work, and of the reasons for the things that don't work, as well as a more complete description of the effort, how the functionality will appear to users, and how they will leverage it.

So there are a bunch of action items in terms of GA criteria, and things as granular as specific tests that need to be written, so please do take a look. There's also work done on the Windows Group Managed Service Accounts issue, which has also been marked implementable; there's a link to that in the agenda notes doc as well, so please take a look.
C: Yeah, so we sent an email over to the SIG Windows list, via Michael, yesterday; we're actually increasing the frequency of syncs. We think we've got a good list of issues to continue working through and making progress on, and I think we've got a clearer path forward there, and it's helped get some more people involved across more companies, including some customers.
D: First of all, thank you, and Brian, thank you so much for spending a ton of hours with us reviewing the KEP, going back and forth, and giving your detailed feedback. We really appreciate that; we wouldn't have gotten to implementable without you. There is just one thing that (and we do raise it in the KEP as such) I wanted to bring up here.
We've had a discussion in this forum, maybe two months ago, where we talked about how SIG Cluster Lifecycle is now supporting hybrid compute clusters, where you have Windows and Linux nodes. As mentioned in the KEP, our goal is that we do want to provide full support for that scenario, because that's how most of our customers are going to be deploying, and we've outlined how we want to give them the best practice of using taints and tolerations and how to make that an acceptable solution.
That goes both for the existing ecosystem of Linux containers and for the newly introduced Windows containers. I wanted to bring it up to this forum and make sure that, if anybody has any questions or concerns about why hybrid or heterogeneous clusters shouldn't be supported, you speak up now, so that we'll have at least the next month to address it.
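For context, the taint-and-toleration pattern referred to above looks roughly like this. This is a hypothetical sketch, not taken from the KEP itself; the taint key `os` and the image name are illustrative assumptions:

```yaml
# Taint Windows nodes so existing Linux workloads are not scheduled there
# (illustrative taint; the KEP defines the actual recommendation):
#
#   kubectl taint nodes <windows-node> os=windows:NoSchedule
#
# A Windows pod then opts in with a matching toleration plus a node selector:
apiVersion: v1
kind: Pod
metadata:
  name: windows-example
spec:
  nodeSelector:
    beta.kubernetes.io/os: windows
  tolerations:
  - key: "os"
    operator: "Equal"
    value: "windows"
    effect: "NoSchedule"
  containers:
  - name: app
    image: mcr.microsoft.com/windows/servercore:ltsc2019
```

With this arrangement, unmodified Linux manifests keep working on a mixed cluster because they never tolerate the Windows taint.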
D: Yeah, and I think Patrick is working on that with some of the test automation we'll have, so that we are going to be able to assess that; it's an area we're working on. If you have any specific tests that you can recommend, and maybe have someone from your SIG come in and oversee the process without spending a ton of time, we definitely welcome that addition, to help us navigate this and make sure we have a good support story there. Sure, I think you can work with me and Patrick offline.
A: That is the CRD migration one, correct?

E: CRDs at large, yeah. So, has Tim circulated the document here? No? Okay. There is a document that we are currently circulating which proposes how we want to manage the installation of core APIs. I think for today, because we don't have the document circulated and people haven't had a chance to look at it, you should circulate that doc with the proposed options to the SIG Architecture community.
B: Yeah, so we put it on the agenda for next week. I thought we might discuss short- or medium-term options this week and longer-term things next week, but we can just discuss it all next week. This was something that was anticipated in the architecture roadmap written back in early 2017, but we're way beyond the point where we actually need to resolve it now. So yeah, we'll try to make sure the doc gets sent out to the list.
F: I'm here, okay. So hi everyone, I'm David. This relates pretty strongly to what the previous topic was going to be about, the CRD installation stuff, but this is more specific to storage. I circulated the document earlier; not sure everyone got to read it, but the general idea is that we have an API object right now, which is a CRD, for storage.
So, although we do have the planned long-term goal from Tim's proposals about installing CRDs, we do require a more short-term solution, so I outlined two of them. It seems like we're circling around two possible solutions: one being to move the CRD to a core API, and the second being to kind of hack specific cloud providers' cluster-up scripts or deployment mechanisms to install the CRDs by default. But Michelle, I think you have a reason that it should be more of a core API.
B: CSI in particular: CSI itself is an extension mechanism that we're using to get the volume source implementations out of tree, so there needs to be a stable, common interface for the numerous volume sources that have been implemented. So, in my opinion, in terms of enabling another extension mechanism: if the CRD-based management is not ready, the needs for this particular mechanism are so critical that that outweighs, you know, trying to push hard on CRDs for this particular thing.
B: And we have a few ad hoc cases in the system where things are hard-coded; for example, the kube-system namespace and the default namespace. Most of these are fairly non-configuration-intensive, they're kind of simple, so having hard-coded configuration in some core system component hasn't so much been a problem. For other things we've used the add-on manager, but the number of different cluster lifecycle tools keeps proliferating and evolving and changing.
J: So it's embedded in the API server; it's exported into YAML files as well, for consumption by external tools. Yeah, those in particular look similar to the CRD thing at a 10,000-foot view, but when you actually dive down into it, the RBAC stuff is purely declarative, and we can do things like make additive-only changes and take the superset, whereas the CRD stuff, with all the active call-out mechanisms, is where I think you run into problems.
H: That seemed to be one of the big concerns for me, because for the CRD you have to set up RoleBindings and Roles, or ClusterRoleBindings and ClusterRoles, in order to be able to access it. Assuming a simple case where it's a kube-system component and you already have a role: can you instantiate a ClusterRoleBinding or a RoleBinding for a CRD that hasn't been created yet? And if not, how do we then control access, for instance, if we're doing ad hoc installation?
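For reference, the RBAC objects in question look roughly like this. This is a minimal sketch assuming a hypothetical CRD with group `example.com` and resource `widgets`, and a hypothetical `some-controller` service account; note that RBAC rules match on group/resource strings, so they can be created even before the CRD itself exists:

```yaml
# ClusterRole granting read access to a hypothetical CRD's resources.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: widget-reader
rules:
- apiGroups: ["example.com"]
  resources: ["widgets"]
  verbs: ["get", "list", "watch"]
---
# Bind it to the (hypothetical) component's service account.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: widget-reader-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: widget-reader
subjects:
- kind: ServiceAccount
  name: some-controller
  namespace: kube-system
```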
K: Let me just try to say what I think I said in the comments in this document, which is: the stuff in the add-ons directory is actually not that optional. It's "optional" in air quotes, but are you really going to run a cluster without DNS? Different people could run different DNS implementations, but if you're running a cluster and you're not running the stuff in the default add-ons directory, that means you've made some conscious choice about it.
B: When we broke the scheduler component out from the controller manager, we broke every cluster lifecycle tool and every blog post, and this was pre-KEP. It was the hard way, but it was easier; we fixed all cluster installations everywhere, and this is sort of a similar thing, I think.
F: Right now, what we're trying to come to a decision on is the short term, first for the storage CRDs in particular. Those are a case of: we need these to be installed on every single cluster as-is; there are probably no modifications that you would want to make, and if you don't have that CRD installed, many of the storage features won't work.
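For context, one of the CRD-backed storage objects under discussion at the time was the alpha `CSIDriver` object in the `csi.storage.k8s.io` group. A rough sketch of such an object follows; the field names are a best-effort recollection of the alpha API and the driver name is illustrative, so treat this purely as an example of the kind of object the CRD serves:

```yaml
apiVersion: csi.storage.k8s.io/v1alpha1
kind: CSIDriver
metadata:
  name: example.csi.driver.io   # illustrative driver name
spec:
  attachRequired: true          # whether the driver needs an attach/detach step
  podInfoOnMountVersion: "v1"   # pass pod metadata to the driver on mount
```

Without the CRD registered, objects like this cannot be created at all, which is why the installation question blocks the storage features that read them.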
I: There's two issues; I was going to... go ahead. I think there's an option 1.5 here, and I think it's worth exploring. It's again based on some of the patterns with the RBAC objects, which would be that there are still CRDs, but initialization of those CRDs is actually in the API server, similar to RBAC. That gives us the option, as we develop generic cluster initialization and object configuration, to move that stuff out and modernize it.
K: The API server has, I'll call it, a defect, where it can't take a cluster lock and do something atomically. So over rolling upgrades and so on, with this set of changes, I expect bad things to happen.
I: The API server is not currently the right place to do this, right? You want something that updates this once, after the rolling upgrade has completed.
H: I'll say that CRDs on their own aren't 100% useful, right? If there are webhook validators, we don't want people writing CRs without the validators in place, because they're going to produce invalid CRs at some point, and then we have to deal with that. So I think just lumping the CRDs into the load path is not an answer. I think the first step is everybody...
H: So if the goal isn't to solve generic mechanisms for managing CRDs someday, and the goal is instead to get a mandatory type that your cluster will not work without, why are we mandating that it can't be a core type? We know how to do a core type; we know how to make that work today. And if we want to advance CSI to GA, that's the goal we're going after.
K: In favor of that... I mean, I'm actually fine with that. But I would like to advocate for a slightly different position, which is like a microkernel type of thing, where there are some things that you can't run without, and there are some things that are part of Kubernetes, but if you don't install them, things still work; you just don't have as many features. And I think the reason this is good is because those things can be developed exactly like just random extensions, I think.
B: Yeah, I mean, there are increasing security concerns, for example, and a lot of work has been going into cloud provider extraction as well. Between cloud provider extraction and externalizing the volume sources through CSI, we have an opportunity to extract a lot of dependencies from the in-tree code base. I think it is a step in the microkernel direction, despite the type. And there are other things we can do, like moving a bunch of compiled-in types into an aggregated API server, or things like that; I think we should explore those avenues.
H: My point is, I doubt very much that we have the ongoing bandwidth between Tim, Tim, Jordan, and Justin to push forward on that. I think it needs an owner, and we need to figure out which SIG it falls under. Maybe that should be the ultimate result of that document: which SIG we think it falls under, and where we sketch out the lines, and in the meantime move ahead and let CSI move its objects in-tree. Yeah, and I...
B: I mean, there was also an announcement at the community meeting: basically, all significant work is being asked to have a KEP in place, in the implementable state, for 1.14. So thank you very much to Aaron and others who've been pushing on that from the release team. It caught a number of people by surprise, so the deadline was pushed to Monday.
So, you know, please get on top of that. Speaking of people being caught by surprise: our KEP dashboard for SIG Architecture is completely broken and out of date, the last time I checked, due to the KEP move. In general, I know that Tim Hockin was looking at the KEP tool that Caleb started, because KEPs are merged before they're marked implementable, some have open PRs, and it is pretty hard to just discover all the KEPs that are in flight. So hopefully the tool can help with that.
H: The tool that I took a look at, and I understand Caleb is reworking it this year, was mostly about lifecycle management. If you're creating a KEP, it's going to help you create all the correct artifacts from the most recent template and walk through the steps of the things you need to fill out; and if you were reviewing a KEP, it had some lifecycle stuff too, but it didn't seem to match the Git-centric workflow. I don't know if anybody else uses a pure Git-centric workflow. There's a separate tool...
There were actually two tools there; the other one was more of a lint, which I think is a good chunk of the value to me: getting something that says "hey, don't merge this KEP, because these rules haven't been satisfied." So I'm hopefully going to be working with Caleb again to look at the next iteration and work through it. If there are things that people who do a lot of KEP reviews want to see out of the tool, you can talk to me or to Caleb, in terms of a wish list for your workflow.
B: In addition to the KEP tracking board being basically broken, the API tracking board has been very hard to keep up to date manually. There is a PR out to add automation for that; it would be great if someone from the SIG Architecture group could help that along by taking a look and making sure it addresses our needs. There are over 100 PRs open labeled api-change in kubernetes/kubernetes right now, and there are APIs in other places as well that need to be tracked down.
So I thought it would be good to take a look at the things in flight and prioritize which ones someone should take a look at, and make sure we have people covering that. There was work that Jordan and others did on documenting an API review process, which we haven't really managed to get rolling yet; I think we need to take another look at that and see what we can do to better scale the process.
J: Yeah, so I categorized these, and please add things if I missed them; the query that I pulled these out of is somewhat in flux, as things get pulled out of and put back into the milestone. So, just starting at the top: the first set is things that are currently beta that are planned or proposed to be promoted to GA in 1.14. Some of these are actually very old features that just sort of languished in beta for no particular reason.
The approach there is to recognize what it is, which is an API that is widely used, and to recognize what it probably cannot be, which is the perfect L7 API we want, and then to figure out a middle ground that doesn't break a bunch of people but lets us get it out of a permanently beta API group: something where you can say, "well, here it is, we support it." If we want to do something radically different or better, we can start that as an independent effort.
Webhook admission: that's kind of a laundry list of things addressing user experience with the current beta version, people trying to use webhook admission and running into issues or improvements that they needed. So there's a list of things that we plan to do in 1.14, and it will remain in beta. CRD OpenAPI publishing is looking to bring CRDs up to parity with built-in types as far as the schema information that is exposed, but that is not graduating to GA, like I said earlier.
B: The Events API was redesigned over a year ago and then implementation kind of stalled. The API actually did then get implemented, but it wasn't adopted anywhere in the system. So now we have a contributor who is working to change some of the existing event-generation call sites within the project to use the new Events API. It's really the first use, I believe, of the new API, which is intended to address a bunch of problems. There are some things that have come up, some performance concerns that people are working to address, but the hope is...
J: To be clear, as we are going through this list: some of these are much further along towards the requirements that have to be met by Monday, as far as having a design and having it reviewed, approved, and merged, and some of these are likely to not be there on Monday. So if there are ones that you care about, please go look at them; if they are not close, let the people involved know, or take ownership of them; and if they are close, please help drive them to closure this week, yeah.
B: The project is in desperate need of some project management. So if that's something you want to help with, or you can find someone to help with it, please reach out, because a lot of things don't make the release not because no one is working on them, but because of a lack of coordination of the things that need to come together to make them happen. That can happen for big efforts like Windows, and it can happen for smaller things.
A: I would also add that adding more API reviewers, and people who can level up eventually and get to approver status, is going to help us as well, yeah.
J: To that end, I was actually going to say something at the end, but that's a good segue, so I'll say it now: if you are involved in one of these areas and would like to shadow an API review, or know someone who would, we are looking to develop a deeper bench for API reviewers, and all of these things that are going to get in will have an API reviewer or approver looking at them.
B: So there are three of these documents: there's the API review process document that Jordan mentioned, there's the API conventions document, and there's the API changes document. The ideal state would be to have a number of documents at different levels of detail, focused on different audiences: people trying to contribute APIs (largely the API changes doc) and people trying to review APIs, with the API conventions serving both people writing APIs and people reviewing APIs. We want to make them more concise and actionable, with a checklist, and also to write some validation.
H: Beyond the review process, the general knowledge that is, I guess, most troubling to people trying to improve or update APIs is probably "what will break us." I don't know if that's really captured anywhere: it'll say "use pointers," but it doesn't say why; it doesn't really explain how API evolution actually works; it doesn't explain validation and backward compatibility. Do you think capturing that is worthwhile?

B: Yes, and everything you just talked about are complicated topics, and I think they are best demonstrated by examples.
So the API changes document has a few of those examples (we could definitely use more), and it also has a few gotchas for compatibility, for instance. Every time we stumble on one of these things in a review, it's usually not the first time it has happened; we should flag that and try to go document it. There's a long design-documentation backlog, things that I just copy-paste from PRs and email threads and the like, but it is a long backlog.
B
But
I
think
the
other
thing
about
that
is
I
mean
the
more
we
can
make
documented
or
standard
boilerplate
or
have
a
lynchin
tool
for
then,
the
more
the
west
toil
we
have
and
the
more
time
you
can
spend
on
the
more
complicated,
more
specific
issues
specific
to
a
particular
feature
right.
So
that's
really.
B: Right now it's getting harder, because things are spread out across many more places, and the rules are evolving, and we need to do a better job of documenting the rules, keeping them up to date, and communicating those changes, which we haven't so far done. So we could probably do, say, a presentation on a regular basis at the community meeting, or somewhere else, about what has changed in API design land.
A: One last thing to add about increasing the reviewer pool: I know it can be a little bit intimidating and daunting to get into this, but an easy win for somebody who just wants to get more familiar with it would be to partner with Tim or some of the other reviewers and do some of that documentation, capturing those edge cases and understanding what the rationale is. That's a low-effort, high-win thing that helps the community, and it would also give somebody more familiarity with the rationale and an understanding of why these are done
the way they are. So I would say that partnering on documentation is one of the lowest bars you can clear, and it's one of the most helpful. So anybody who's interested: contact Tim, or Tim, or Jordan, or Brian, or myself, and we'll definitely set you up and get you poised for success on that.
J: Let me run through the rest of this list really quickly; I'll do it in, like, two minutes. Graduation of alpha APIs to beta: the first one, CSI topology, was the one we were talking about proposing to do in-tree. RuntimeClass is continuing as a CRD, with continued development of the alpha API, and because of that: how are we installing that CRD?
F: Okay, it's going to beta in 1.14. RuntimeClass is designed to operate if the CRD isn't there; it just treats it as if you don't have any RuntimeClasses. So if the user goes to create a RuntimeClass, obviously the CRD has to exist first; once there is a control plane component, which there will be at some point, it gets more complicated.
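For reference, the alpha RuntimeClass being discussed was served through a CRD, and a pod opts in by name. A rough sketch of the alpha-era shape (the `gvisor` handler name is an illustrative assumption):

```yaml
# Alpha (CRD-served) RuntimeClass:
apiVersion: node.k8s.io/v1alpha1
kind: RuntimeClass
metadata:
  name: sandboxed
spec:
  runtimeHandler: gvisor   # CRI handler name; illustrative
---
# A pod references the class by name. If the CRD (or this object)
# is absent, pods simply cannot use a runtimeClassName:
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-pod
spec:
  runtimeClassName: sandboxed
  containers:
  - name: app
    image: busybox
```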
J: There were a few that are proposing field additions to GA APIs. The first is for CSI inline volumes, to get parity with in-tree inline volumes; that, I believe, is merged or close to merged now, so it's looking good. There are a couple of questions on it, but it is close. User namespace remapping, from what I can see, is still relatively in flight, so I don't know if that is going to make it. The next one, topology-aware service routing: is that merged? I can't remember.
J: Okay, Group Managed Service Accounts: this is actually being done as an extension in the alpha phase, so there's no API impact for alpha. We are trying to figure out the coherent path forward for representing operating-system-specific things inside a pod spec, and this is one of several possible Windows-related fields that might make it into the pod spec once we figure out how to do that coherently, in a way that makes sense alongside the existing Linux-specific fields. Then there's env var expansion in volume subPaths,
so, a way to do env var expansion for subPaths. The next set are net-new APIs. The CSI migration API was originally a CRD; it's looking like that will be done in-tree to unblock CSI. Server-side apply is not a new resource; it's a move of the kubectl apply function into a server-side API so that all clients can use it.
J: Ephemeral containers: that is currently in the milestone, but I don't think it will make it by Monday. The discussions seemed to reach a point of agreement about six months ago but then kind of stalled out, so if it gets picked up, it will probably be for 1.15. And then, finally, there's an effort to continue the work to make our components consume structured configuration instead of a trillion flags; the kubelet has pioneered this.
There are a couple of components that already consume alpha-level config-file APIs or alpha-level flags, and so this is taking a critical look at those and reviewing them before promoting to beta. They may or may not get promoted, depending on how that review goes, but this is trying to move it forward. And then the last thing I wanted to cover was planning and allocating API reviews.
This is a long list and, like I said, some of these things will fall off it before Monday; but come Monday or Tuesday, this will still be a long list, and just getting the implementations done isn't enough if we get to a week from code freeze and they still haven't bubbled up for review. So let's try to identify which reviewers and approvers are assigned to each item, make sure that's load-balanced and matched with the people who have knowledge in the relevant area, and get that assignment done early.
So if you don't have an API reviewer or approver, work with the people who are on the approvers/reviewers list, or Jaice, or others who have thought about how we can load-balance this, to get those identified early, so that we don't slam someone like me with 25 API reviews a week before code freeze. Okay, thanks.
B: I found a post in a thread back from KubeCon 2017 where I volunteered to update this. So now... I actually did go update the high-level description in the "what is Kubernetes" community doc, but that proved to be insufficient, so this is a much more detailed attempt. Previous attempts were in various other docs, which after this I will go update: the architecture doc, the design principles doc from Andy, and the architectural roadmap doc with the layers and all that. There were a bunch of detailed, specific features listed in that doc.