From YouTube: Kubernetes Community Meeting 20160317
Description
We have PUBLIC and RECORDED weekly video meetings every Thursday at 10am US Pacific Time.
https://docs.google.com/document/d/1VQDIAB0OqiSjIHI8AWMvSdceWhnz56jNpZrLs6o7NJY
Demo of StackPointCloud’s Kubernetes Community Portal; Kubernetes 1.2 released; Draft 1.3 Priorities from Red Hat, SIG-Node, SIG-Scale and SIG-Controlplane.
A
Alright, so this morning we have a reasonably full agenda. We've got a demo from StackPointCloud showing us some work they've been doing on the Kubernetes community portal. We get to talk for a moment or two about 1.2; of course, it has been released. Press releases will go out today if they have not already; I think it's scheduled for noon. Then we'll talk about 1.3, and then, if there's time, we'll do quick SIG report-outs.
D
Alright, let me get back to the home page here. So hello, everybody. Like Matt said, or Sarah said: I'm Muriel, and we have Matt online; we're both co-founders at StackPointCloud. We also have Matt, from Berlin; he's a developer that's been working with us on this project. A little bit about us: on the services side of the operation, we have been building Kubernetes for infrastructure-as-a-service providers. We've also been working with...
D
At any point, feel free to pause or stop me if you have a question, or you can feel free to reach out to us outside of this meeting by emailing us at community@stackpointcloud.com. With no further ado, let's just dig into this. Can I get an affirmative from somebody? Matt, can you see my screen? Yep? Great. So this is the landing page. The design is very much inspired by the Material Design spec; it seemed appropriate given the work we were doing here.
D
So it was a good opportunity, a fun opportunity, to kind of dig into that a little bit deeper. So, just a heads-up, there are a couple of different sections. There's the Learn section, where we're going to focus on tutorials: the community is able to submit tutorials, and there will be some controls, because we'll want that to be a more curated kind of experience for users, or for visitors. And so that's what that will look like.
D
You also have a forum section where the community can kind of engage with one another; there are a bit fewer controls over it. We are building some ways to flag information in there, and we'll get a little bit more into that. Then the Events section: one of the goals is to aggregate all the things that are happening in the community as it grows around the globe. I saw a cool graphic from Rubicon that mapped all the meetups around the world.
D
It's impressive. And then, last but not least, we have the Tools section, which is something we keep hearing from the community is an important need: the tools that exist in the ecosystem that will make it easier to embrace Kubernetes and the related technology. So let me show you what that looks like right now. This is all running locally right now, but we have the home page, the Learn section, and the list views.
D
Let me show you what that will look like longer term. So, pretty standard stuff: users are going to be able to come in here, enter a couple of search terms, and we'll dynamically filter the list. Right now I'm logged in as an admin, so I have the additional functionality of being able to publish and unpublish content. So let me walk you through adding something; again, this screen is kind of in progress. I'll quickly copy and paste some stuff into the content area.
D
We're going to allow for Markdown, and I can quickly go here and see the preview. We'll have a tagging mechanism, so users can come in and add tags, and then also a complexity level: is this kind of beginner level, medium, or advanced? I hit save, and here it is in my unpublished section; when I click publish, it will pop it over here. And users can come in and they can add a comment, and it'll show up. I think it's there, but the demo gods are failing me, I notice.
D
Let me show you on the wireframe what the list view for questions will look like, where, as you can see, hopefully it's pretty clear which questions have been answered and which ones haven't. I think everybody's kind of familiar with that metaphor on the web, with what a forum looks like, at least a modern one. And so, as we show here, part of the motivation for this has been that we've seen a lot of dialogue occurring in the Slack channel, and we wanted a place to kind of capture it, because it feels trapped in there.
D
So we wanted to create a public presence where the community can go, and it's searchable, and users can go and find it. Let me jump over to the Events section. Here is one of the places where I added an event, and I'll skip over what it looks like; this is a detailed event view. But we're going to differentiate between recurring events, like meetups, and one-time events, like KubeCon, and so this is the screen for adding that one-time event.
D
Some of the functionality we're building into this is the ability to subscribe to an iCal feed, which Meetup provides. So then, once we set up your event once, as you announce events and they become public on the web, we'll populate this, so we'll always have kind of a catalogue of the latest and greatest happenings. Last but not least, we have the Tools section; there's nothing in there right now, I see.
D
If I can pull this up: there are the tools. We're still fleshing out the design here, but ultimately it's going to be a collection of tools from the community. As with the other sections, we are going to encourage the community to participate and submit items to this. And so, yeah, that's what we've been busy working on. Matt, is there anything to add to that?
C
Only just kind of when we expect to have it done: over the next few weeks.
D
Yes, in the next couple of weeks. Finnegan can probably speak a little better to that, since he's doing all the heavy lifting on it, but yeah, we're in the process of kind of refining some of the interactions and adding some of the polish. But we are definitely on the final stretch. Awesome work, then.
C
We've gotten some emails from various people with pre-existing tools out there that we'll begin to populate into this once we have it in production, and so we'll do some pre-population with content and tools from various people we've spoken to. Like, Kelsey has offered to... well, Kelsey told us we can just rip all of his stuff out of his repo and push it in, so we'll be doing things like that. Yeah.
A
This is super awesome. I actually have sent a couple of people your way, or told them that if they already know you, they need to reach out. In a couple of days, hopefully, we can coalesce a lot of these independent efforts to try and make sense of the community somewhere, and give a good space for this kind of thing to populate. Yeah.
F
I don't think we're ever going to replace Stack Overflow. I mean, yeah, I don't think that should be a goal, right? That's not a good goal, because everybody goes there. In terms of Google search and things like that, it pops up; it's just a better experience. So I think this should be curated more towards: if you are ever in the Kubernetes community, this is where you come, rather than, like...
D
As an example: right now you're getting a breadth of things that you can do to engage the community, whether it's the curated tutorials, which are a very specific thing, or whether you're asking questions of the community, which is where Stack Overflow does, I think, a really excellent job, and we're not going to displace it; then the events and the tools. So I think that's what this screen is about.
B
Way more focused. Fragmentation of the conversations would not be good. I think there's really good conversation here; I'd really like to see you guys write it down, to say: okay, here's how you should think about Kubernetes versus Stack Overflow. And this conversation you just had about it should be somewhere in the page, under the documentation that says: here's where you go for what kinds of things.
F
This is a great place to sort of have an archive of technical documentation, of back and forth about something like "how do I do HA". I think that there's stuff that you can achieve in a forum that doesn't lend itself naturally to the Stack Overflow layout, but also is not quite a tutorial. It's a conversation; it's almost a more persistent way of having some of the conversations that we have on the email list. Yes, and I...
A
Thank you so much, Ariel. And I think that there was lots of good commentary about how to keep this from fragmenting too far, so we can potentially even look at pulling together a bunch of people that are working on collecting community resources and try to make this unified. I'm thinking of core cube, and we found out about something called awesome-kubernetes recently.
A
Okay, so let me find the window that tells me what our next topic is. No, it's not that one... there it is, okay! So up next we have the 1.2 release. 1.2 was cut yesterday, and there are draft release notes. Brendan, do you maybe know, or Quinton, if the draft release notes have actually made it public yet? I know it was in...
F
No, that's not right; we changed our policy a while ago. We changed our policy, I think, even prior to 1.1: when we put it out onto GitHub, it's ready. So the 1.2 release is as ready as it's going to get, I mean. We will also go through a process of getting it to work in GKE, but that's not the statement of when it's stable that it used to be.
F
I don't think it was ever explicit one way or the other, so I'm not sure what would change, but... I don't know of a documentation point that... I guess, maybe I'll take another stab at this: it is marked as stable in GitHub. A long time ago, we didn't mark it as stable and latest until it was in GKE, and we're no longer doing that, right?
F
Not really. A long time ago, when our e2e coverage wasn't very good, it kind of hammered on it a bunch, but at this point the only thing the GKE CI does is prove that the GKE parts of the system are working, meaning we can automatically deploy clusters. It doesn't really prove very much about the stability of the rest, other than a little bit about some of the integration points, like GKE creating its own routes and things like that; it doesn't really prove anything about Kubernetes stability.
F
Sure, I mean, that's fine. I think what we try and do here is... I don't think we're ever going to move to a world where we're in lockstep, where, you know, we mark the GitHub release and we push GKE at the same time. We want people to get some mileage on it before they get auto-upgraded, and things like that. So, you know, I think that if that's the perception, we can do our best to say: hey, you know what, that's just not true, and work from there.
A
Thank you to everyone who contributed, tested, looked at work done, argued in SIGs, etc., etc., because that is a huge amount of work, and collectively, as a community, we want to continue to increase the speed and quality with which we are contributing to Kubernetes. So we will be going through 1.2 lessons learned, in terms of potential 1.3 timelines, possibly next meeting; more on how the timing bits go. And, as I said, a great big thank you to all of you.
A
It's been a whole lot of hard work to get us this far, and here we go. So next up is, in fact, 1.3, because, you know, never a dull moment: we get 1.2 out the door, and it's time to do 1.3. So I have on my list today that Red Hat is going to talk to us about things that they're committing to.
A
If there are other specific companies that want to talk about this, or, as we look at more community product-management opportunities, I think aligning around SIGs is another great way to talk about what the community commitments are. So Andy Goldstein is going to talk about what Red Hat is committing to, and then Dawn Chen is going to talk about what the discussions, commitments, and conversations for features and work over the next three months will be from SIG-Node, and then I...
A
I think Bob is game to talk a little bit about what the scalability SIG is looking to do in the 1.3 timeline, and then, if anybody else wants to jump in with things that their company or their SIG are wanting to commit to, that's great; we can get there in a couple of minutes. So let's start, if we've got Andy on the line with Red Hat. Andy, are you around? I am.
H
Right, so there is a link to this document in the agenda document, so you can browse along at your leisure. This is a list of the features that Red Hat is going to commit to working on for the Kubernetes 1.3 release. It is a large list, so we probably won't finish all of them, but we are going to work on all of them, and we hope to complete as many as we can.
H
So these are not in any particular order; I will just start at the top. In Kubernetes 1.2 we are now running with seccomp set to the unconfined profile, and we are interested in making better use of seccomp now that it's supported in Docker. We are currently discussing what that means, and we have Trello cards in our Trello organization for most of this, so feel free to look for those; but seccomp is one thing that we want to work on.
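For reference, the per-pod seccomp opt-in being discussed eventually took the shape of a pod annotation; a minimal sketch, assuming the alpha-era annotation key (not a committed API at the time of this meeting):

```yaml
# Sketch: a pod requesting the container runtime's default seccomp
# profile instead of "unconfined". The annotation key is the alpha
# form and is an assumption here.
apiVersion: v1
kind: Pod
metadata:
  name: seccomp-demo
  annotations:
    seccomp.security.alpha.kubernetes.io/pod: "docker/default"
spec:
  containers:
  - name: app
    image: nginx
```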
H
Next up is having the kubelet evict pods when the node is low on memory, based on quality-of-service tiers. So this is a way to, instead of relying on the OOM killer to take action kind of at the end of the line, preemptively and proactively try to get pods off of a node based on quality of service, so that the node doesn't fall over.
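The quality-of-service tiers mentioned here are derived from each pod's resource requests and limits; a minimal sketch of the two extremes (the values are illustrative):

```yaml
# Requests equal to limits puts a pod in the "guaranteed" tier,
# which an eviction policy like the one described would reclaim last.
apiVersion: v1
kind: Pod
metadata:
  name: guaranteed-pod
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: 500m
        memory: 256Mi
      limits:
        cpu: 500m
        memory: 256Mi
---
# No requests or limits at all puts a pod in the "best-effort" tier,
# the first candidate for eviction under memory pressure.
apiVersion: v1
kind: Pod
metadata:
  name: best-effort-pod
spec:
  containers:
  - name: app
    image: nginx
```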
H
So we have a downward API for several items right now, but we'd also like to include conventions and standards for how to refer to things like the CPU and memory resources and limits that the pod author has specified. This is useful so that people who are running things like JBoss or other application servers can have a better idea of how to set the Java min and max heap.
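A sketch of the kind of convention being proposed: exposing the container's own memory limit through the downward API so an entrypoint can size the JVM heap from it. The `resourceFieldRef` field is an assumption about the eventual shape, not something that existed at the time of this meeting:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: jvm-app
spec:
  containers:
  - name: app
    image: jboss/wildfly
    resources:
      limits:
        memory: 512Mi
    env:
    - name: MEMORY_LIMIT_BYTES     # e.g. read by a wrapper script that sets -Xmx
      valueFrom:
        resourceFieldRef:          # assumed downward-API field for resources
          containerName: app
          resource: limits.memory
```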
Next, we want to continue working on deployments; we have several items that we are hoping to contribute.
H
These include things like hooks that can run during the deployment lifecycle process, custom deployment strategies, improvements to logging, and so on; this is the Kubernetes Deployment resource or type. Thank you; sorry. Next is idling and unidling. This is something that, in the OpenShift world and OpenShift Online, we take great advantage of, and we are interested in working on a proposal and, if time permits, hopefully getting some code up there; it helps with over-committing.
H
The next one, I think, is probably of great interest to many of you. We are looking to start running continuous-integration tests on at least RHEL, with Kubernetes and Docker 1.10, or whatever the latest and greatest happens to be. This will help give the community confidence that Kubernetes can work on Docker 1.10 and newer versions, and also help identify issues, if needed, that we can address. Next up:
H
We have an admission controller in OpenShift that allows the cluster administrator to set some defaults and some restrictions for where pods are allowed to run, in terms of which nodes they can run on. So we have the ability to set a default node selector for a given namespace, which we call a project in OpenShift, and we're interested in helping get that upstream, so that the community can take advantage of that in Kubernetes.
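As a sketch of the OpenShift behavior being offered upstream: a namespace ("project") annotated with a default node selector, which an admission controller would merge into every pod created in it. The annotation key shown is OpenShift's; the upstream name was still to be decided:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
  annotations:
    # Pods created in this namespace would be restricted to matching nodes.
    openshift.io/node-selector: "region=us-east"
```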
We have a couple of items related to storage.
H
We are going to be working with the storage SIG on improving dynamic provisioning of volumes. There's an ongoing effort to do some refactoring of the internals of the storage code to make the codebase better, and we are interested in adding quotas for persistent volumes, given that that is a scarce resource.
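The persistent-volume quota idea could look something like an ordinary ResourceQuota with storage-related keys; the key names here are assumptions about the proposal, not shipped API:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: storage-quota
  namespace: team-a
spec:
  hard:
    persistentvolumeclaims: "5"   # cap the number of PVCs in the namespace
    requests.storage: 100Gi       # cap total storage requested across PVCs
```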
H
Next is scheduled jobs: the ability to have jobs that run on periodic schedules.
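A sketch of the scheduled-jobs idea; the resource was being proposed as ScheduledJob for 1.3 (later renamed CronJob), so the kind and API group here are best-effort assumptions:

```yaml
apiVersion: batch/v2alpha1
kind: ScheduledJob                 # later renamed CronJob
metadata:
  name: nightly-report
spec:
  schedule: "0 2 * * *"            # standard cron syntax: 02:00 every day
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: report
            image: busybox
            command: ["sh", "-c", "echo generating report"]
```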
On the scalability and density-improvements side, we're hoping to be able to scale out to a thousand nodes at half max pod density, and we're hoping to help get up to 250 pods per node. Part of that will probably require some API-server performance improvements, namely adding protobuf serialization as a replacement or alternative to JSON for a lot of the internal communication between, say, the controllers and the API server. And then we have a couple more that are node-related.
H
We are actively working on getting support for cAdvisor to gather stats on per-container filesystem usage when the Docker graph driver is devicemapper, so, support for that on RHEL. That is part of a broader proposal that the node team is working on related to disk accounting, so that we can better take advantage of, and be aware of, the scarce disk resource that's available on each node, and so that we are not overloading a node or running out of disk. And then we have some stretch goals.
H
We are interested in having a cgroup quality-of-service hierarchy, so that we can take advantage of kernel cgroup features, in terms of scheduling, when it comes to dividing up tasks between pods and containers that are best-effort, burstable, or guaranteed. And then we have a couple of security- and policy-related things. In OpenShift we have an authorization engine and an OAuth implementation that we're interested in contributing to the upstream community as possible implementations that you could enable if you so choose.
G
Awesome. Yes, questions?
A
And we'll have to all collectively grok this: read through it and probably ask and file questions. Ideally, you guys of the Red Hat team can integrate these in with the different things that they work with, I know.
A
lot
of
this
work
is
reflected
in
the
document
that
Don
is
working
on
around
what
signal
signode
group
is
committing
to
in
the
1.3
timeline.
A
So this leads to a teaser for next week, and I'll have Joe talk about it at the end of this conversation if we have time: the idea of getting community product management going, in the sense of making sure that all of these features are moving forward in a way that gets us design docs and figures out what we need in order to make this project move forward as fast as possible, with the right things. So we can talk about that more.
I
Just a generic question: we've got a bullet list, but if we want to understand in more detail what each of these bullets means, how does that happen?
H
So we do have some... we have Trello cards; we use Trello to do our tasking, and so we have cards that back the majority of the items in here, and I will go through and add those links once our cards are slightly cleaned up. Some of them have a little bit of cruft in them, but we'll have the links in there, and, additionally, you can reach out to me on Slack. You know, these... these are...
J
For the most part, almost all of these have proposals in some form or another, or an issue that tracks them, and I kind of summarized the list; these are kind of a reflection of a number of different things pulled together. But we'll try to make sure that these are linked to actual issues, so you can read the backstory on them. Yeah.
K
A question there: I think it would be nice, though, to actually sort of start to box these things up a little bit, in terms of which parts of the project they impact, and then I think the list won't feel quite so, you know, huge, when you actually say: okay, well, this is work in this area and in this area. And there are going to be a few things that actually cut across a bunch of areas, and that's going to be the stuff that's probably going to need the most coordination.
L
There you go; excellent, thank you. So, for every release, before we start the cycle, I will share the Kubernetes release roadmap with the SIG-Node team and track the information. I'm doing it this way because a lot of the feature work on the node side is not super visible, and so I wanted better communication with the PMs and also the engineers, and also our partners in the community.
L
They feed into these kinds of things, but we need to talk to the PMs, and also I'm still waiting for some of the community input; I tried to reach out to CoreOS and Huawei and Hyper, all our partners, and also Intel, all those community partners, but I haven't got all the input and feedback yet. So the priorities I put down for this release are categorized into three areas: extensibility, reliability improvements, and ease of use and development velocity.
L
Let me quickly go through each area. For extensibility, we identified the top three items for immediate attention. The first one is redefining the container runtime interface. We already have good work done in the rkt integration; we have the first version of the container runtime interface from that integration.
L
We are not very satisfied with that interface, because across many versions there is a lot of duplication; it's not very modularized; it's very difficult to plug in a new container runtime; and there's no separation between image management, container lifecycle management, and resource management. So it's very specific to one runtime: if you change any policy, or you want to support a different image format, you have to change every single container runtime.
L
So we decided, after the rkt integration reached a certain point, that we wanted to refine this kind of thing, so now is the time. We added a big umbrella issue, and we are going to finish the rkt integration and also the refactoring based on the new interface. Of course we'll have the refactoring of the Docker integration, and at the same time we already have the Hyper support, but we want to merge their support into our upstream repository.
L
All those kinds of things we need to discuss. The other thing: at the same time, OCI is going to ship their first production release in June, so we need to have some ownership there; we need Red Hat and our engineers to participate in that and do some exercises. At least we want to set the direction toward what we want in the long term. That's why I put this as TBD; the timeline is not important.
L
We want more community involvement here; we didn't really want Google engineers one hundred percent driving this point. So another one is machine problems. Today there is no API; we are kind of doing ad-hoc handling at the node level, and if there's a kernel issue, if there's filesystem corruption, or if it's a machine hardware issue, there's no way to properly propagate that information, and you only get, I think, a bad experience. For this release:
L
I want to introduce an API, and I want to at least introduce some sample implementation, a reference implementation of a daemon, to detect the known issues, the ones we already know about, and to give us a view of how we're going to integrate this into the workflow: how we are going to detect a problem, report it, propagate the problem up to the control layer, and how we are going to take action. Something like that.
L
There are open-source projects already working on this that we want to partner with, and they, too, have a daemon or some reference implementation there. So for each project you can see I put down which company we are partnering with in our community, and we also try to assign an engineer as the owner to drive those projects from the beginning to the end. So the third one for extensibility is:
L
We want to change how we collect all of the metrics. Some are really critical and are used by the controllers, and a lot of the other tons of stats and metrics are not useful for the control plane, but they're super important on their side: they're really useful for debugging, monitoring for the customer, and also introspection. So the problem is that the kubelet cannot handle that many metrics, and it makes it harder to do the resource management.
L
Rather than putting all those kinds of things into the default daemon, we decided that, in the long run, we are going to have a separate monitoring daemon. That's the plan; that's the new project we'll work on, and we initiated it for 1.3, but we are still working on the stats performance and the priorities to set the project in the right order.
L
Well, the major thing is just that we already started this kind of thing in 1.2: we introduced the new metrics and summary API, and we already saw a huge performance gain through that one. So we are going to do more research to say whether this problem is urgent or a driver for 1.3. So then another one is supporting alternative image formats.
L
It's kind of a long-term investment, because we pretty much only support Docker images, via the Docker daemon, and support ACI images via rkt. But many people ask: how about other formats? So this is something where we want to do some experimental work, especially after we refactor our container runtime interface: we separate out the image-related pieces and define them in this API, and so we can see how much we can target.
L
I talked to some community friends, and they say they want to attack this: building an image on the fly, a feature to help us refine what we need, to spike and help us figure out the feasibility of running an arbitrary Dockerfile. So that's kind of a feature we'd like the community's help with. So another area is reliability improvement. For reliability improvement, the first one is resource management.
L
Resource management covers so many things, and I identified the top two, to prevent a node getting overcommitted and affecting all the running jobs, and affecting Kubernetes performance and Docker performance. The top one I identified is out-of-resource handling, and, among all the out-of-resource conditions, disk is the top one; secondly, it's memory. So for disk, the first thing we need is to improve all our disk accounting.
L
So one of the things, handled by Red Hat, is the disk accounting for devicemapper, as talked about earlier. Another one is that we need a big reflection on the disk management: the kubelet has image management, container garbage collection, image garbage collection, all those kinds of pieces, and each of those control loops is not synchronized. There's no node-level, global way to verify that disk usage is safe, so we may make the wrong decision.
L
We already started, in 1.2, refactoring some of that disk-management code, but it's not completed, so we need to finish this as soon as possible for 1.3; then, after that, we can introduce the rest of these components into the system. So another one is out-of-memory handling: we already have the proposal, from Red Hat, and we have had the design review about that one. So now we just need to finish the work on this.
L
Another thing is kind of really important, though relatively small, I think: switching the Docker client library. We are switching to the client library that Docker itself provides, and with that one we can solve those compatibility issues, and we can also introduce proper error handling for all those kinds of operations. And the last one is really important, but it's kind of a hand-holding issue:
L
We should configure our systems to enable the node-allocatable feature. We already introduced node allocatable in 1.2, but we didn't do the configuration to make it really happen, so right now it's not on by default; there's no such capability yet. This one is also kind of resource management: we have to reserve some resources for the daemons and the machine, so that we don't overcommit the resources.
L
The last one is kind of hand-holding, and is focused on GCE, because it's OS-related stuff: we need to enforce the CPU quota and also kernel memory accounting. With other kernels, like the ones OpenShift runs on, they already have those kinds of things enabled; we need to catch up on the GCE side. So the next area is ease of use, and we identified the top one: simplifying node bootstrap.
L
This matters for most of the production deployments, and there are many areas we are still working on, identifying each problem at the node level; and of course there's a related cross-project effort that involves many things, also required for self-hosted support, that's running tight on time, so I'm keen to finish this one. Another one is systemd, where Red Hat is our partner. And the last one is node continuous-integration testing: automating the Docker validation process and also finishing the node conformance test coverage.
A
Alright. Bob, did you want to go through a quick list from scalability?
A
And while you do that, I'll tap-dance for a moment, saying we have one more SIG that can share some of their commitments, and that's SIG-Control-Plane, if we have time. So hopefully we will have that space and do that; otherwise we can bump it to next week. Great; we are seeing your SIG-Scale weekly agenda, right?
B
So, I think one of the things we need to do is assemble this into a separate document, more along the lines of what Dawn was showing; that's a TBD for us to do. But we really just today had a serious discussion in the SIG about 1.3. Some of the things here, I think, are in the nature of scalability: things like this tend to cut across a lot of areas, and some of the effort is really kind of testing- and analytics-oriented.
B
But there are a few things here that I think are of general interest anyway that are probably worth just a quick review. Line one, that's etcd3. Certainly there's a lot of effort here around making sure that we have a good way to do test repeatability and share results; some of that crosses over into the SIG-Testing group, which, if not this week, then perhaps next week, we should also have a readout from.
B
I think one of the things that we really do need to spend a little more time on as a community is a better, I'll say, goal on the scalability side, both in terms of number of nodes and number of pods, which is really the more critical thing. We had a good discussion about that at SIG-Scale this morning. I think we should try to put a stake in the ground that will help everyone.
B
Everyone working on feature work as well needs the right context for the work they're doing. So that's it, pretty quick. I would certainly encourage anyone in the community to reach out to me or to Joe on the lists, and as we get this list published, to give us more feedback and input.
A
Thank you, Bob. Any quick questions for Bob? Or we'll work through specific issues and the different things this stuff cuts across, and all of that, over the next week or so. Okay, no questions, which means we have enough time, I hope, for a very quick update from one more SIG.
M
Can you see the doc now? Yes? Okay, right. I'll just go through this really quickly, and we'll share it within the next few days. It hasn't been fully reviewed yet, so it's a little risky to do this now, but we'll share it with the community in a couple of days. The first thing is finishing some of the 1.2 work: the highest-priority items that we had to boot out of the release because we ran out of time.
M
The highest priority there is finishing the metrics API. Concretely, what that means is that you'll be able to write schedulers or controllers that use node-level usage information (memory, CPU, and so on) to make decisions, like scheduling decisions: scheduling based on usage, or some combination of usage and requests, or something like that. This will be ready in the 1.3 time frame. The metrics API work has been going on for quite a while; the node-level part was finished in 1.2, but the control-plane-level part will be in 1.3.
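The usage-plus-requests scheduling decision described above can be sketched in a few lines. This is a toy illustration with made-up node data and weights, not the actual Kubernetes scheduler or metrics API:

```python
# Toy sketch of usage-aware scheduling: pick the node with the lowest
# blended score of measured CPU usage and declared CPU requests.
# Node data and the 0.7 weight are hypothetical.

def score(node, usage_weight=0.7):
    """Blend measured usage with requested capacity, both as fractions
    of the node's total CPU. Lower is better."""
    usage_frac = node["cpu_usage"] / node["cpu_capacity"]
    request_frac = node["cpu_requested"] / node["cpu_capacity"]
    return usage_weight * usage_frac + (1 - usage_weight) * request_frac

def pick_node(nodes):
    """Return the name of the node with the lowest score."""
    return min(nodes, key=score)["name"]

nodes = [
    {"name": "node-a", "cpu_capacity": 4.0, "cpu_usage": 3.0, "cpu_requested": 2.0},
    {"name": "node-b", "cpu_capacity": 4.0, "cpu_usage": 1.0, "cpu_requested": 3.0},
]
# node-a scores 0.7*0.75 + 0.3*0.50 = 0.675; node-b scores 0.7*0.25 + 0.3*0.75 = 0.4
print(pick_node(nodes))  # node-b
```

The point of the blend is the one made in the meeting: a node can look free by requests but be hot by actual usage, or vice versa, so a scheduler may want to weigh both.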
M
Next are cluster deployment and bootstrapping. In scheduling we have Ubernetes; Quinton is leading that effort, and there are a few P0s. By the way, the priorities here are not on the same scale as Dawn's. Dawn had very specific priorities: P0 meant blocker (I don't know if you saw that in her doc), P1 meant we're definitely doing it, or something like that. Here it is a relative scale, and you'll notice we even have things like 1.5 and 0.5; there's no specific meaning for each value, it's just relative.
M
So anyway, some of the initial federated control-plane objects for Ubernetes will be in 1.3. Then from Red Hat, Andy mentioned some of these job things: the ScheduledJob, I think, and there's also a job workflow object for indicating job dependencies that we want to have in 1.3, and there's an indexed job, but that's a lower priority that may not make it into 1.3.
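A job workflow object that records dependencies between jobs boils down to running the jobs in topological order. A minimal sketch with hypothetical job names (not the actual proposed API):

```python
# Toy sketch of a job workflow: each job lists the jobs it depends on,
# and a topological sort yields a valid execution order.
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Hypothetical workflow: "transform" needs "load"; the last two need "transform".
deps = {
    "load": set(),
    "transform": {"load"},
    "report": {"transform"},
    "notify": {"transform"},
}

order = list(TopologicalSorter(deps).static_order())
print(order)  # "load" comes first; "transform" precedes "report" and "notify"
```

`report` and `notify` have no ordering constraint between them, so a real controller could run them in parallel once `transform` completes.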
M
Anything lower priority than P1 may not make the release cut, you could say, roughly. So let's move along a little. There's the rescheduler, which the Huawei folks are working on. By the way, there are also PX and P? markers: P? means not yet prioritized, and PX means someone outside of Google, someone in the community, is working on it, so we didn't work super hard to figure out the priority, because we know it's work that's being done anyway.
M
It's something that we definitely want, but it wasn't worth spending a bunch of time figuring out, in increments of 0.5, how important the rescheduler is. It's very important, of course, and there's some focus from Huawei, who are starting to work on it. There's some work on IAM, auth, and security, specifically around upstreaming OpenShift authorization, pod security policy, and some authentication improvements. There's scalability and moving to etcd v3; Bob mentioned some of this, and it's one of those areas that overlaps multiple interest groups in Kubernetes.
M
We have some work around node management, like the disruption budget (there's a GitHub issue for it), the disruption-budget abstraction that underlies any kind of node maintenance and underlies the rescheduler. Basically it means SLOs for eviction, SLOs for pods. I won't go through the rest of those.
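The disruption budget idea, an SLO that gates evictions, can be sketched as a simple availability check. This is a hypothetical shape for illustration, not the eventual Kubernetes API:

```python
# Toy sketch of a disruption budget: allow a voluntary eviction
# (e.g. for node maintenance or rescheduling) only if the application
# would still meet its minimum-availability floor afterwards.

def eviction_allowed(healthy_pods, min_available):
    """True if evicting one pod still leaves at least min_available healthy."""
    return healthy_pods - 1 >= min_available

# A 5-replica app whose budget requires 4 replicas available:
print(eviction_allowed(5, 4))  # True: one eviction still leaves 4
print(eviction_allowed(4, 4))  # False: already at the budget floor
```

This is why the speaker frames it as the abstraction underlying both node maintenance and the rescheduler: both trigger voluntary evictions, and both have to respect the same floor.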
M
Lastly, we have some usability stuff. The most important item there is that we've received suggestions that we should do something around making it easier for people to understand why pending pods are pending. We have some of that right now, but we can definitely do a better job of it. And there's the flip side of that: for pods that did get scheduled, giving some explanation of why they were scheduled where they were. Often people will ask, why did you put this pod on this machine where there's already another pod running, or something like that. So we want to answer both sides of that coin: why did a pod not get scheduled, and if it did get scheduled, why was it scheduled where it was?
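Explaining why a pod is pending amounts to reporting, per node, which scheduling check failed. A toy sketch with made-up predicates and data (not actual kubectl or scheduler output):

```python
# Toy sketch of "why is this pod pending": evaluate each node against
# a couple of simplified scheduling predicates and record the first
# failure reason, or None if the pod would fit there.

def explain_pending(pod, nodes):
    """Return a dict mapping node name to a failure reason (or None)."""
    reasons = {}
    for node in nodes:
        if pod["cpu"] > node["free_cpu"]:
            reasons[node["name"]] = "insufficient CPU"
        elif pod["selector"] and pod["selector"] != node["label"]:
            reasons[node["name"]] = "node label does not match selector"
        else:
            reasons[node["name"]] = None  # pod would fit here
    return reasons

pod = {"cpu": 2.0, "selector": "ssd"}
nodes = [
    {"name": "node-a", "free_cpu": 1.0, "label": "ssd"},
    {"name": "node-b", "free_cpu": 4.0, "label": "hdd"},
]
print(explain_pending(pod, nodes))
# Every node has a reason, so the pod stays pending; surfacing these
# per-node reasons is the usability improvement being discussed.
```

The same bookkeeping answers the flip side of the coin: for a pod that did schedule, report which nodes were rejected and why the chosen node won.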