From YouTube: Kubernetes Community Meeting 20180208
Description
We have PUBLIC and RECORDED weekly video meetings every Thursday at 10am US Pacific Time.
Notes: https://docs.google.com/document/d/1VQDIAB0OqiSjIHI8AWMvSdceWhnz56jNpZrLs6o7NJY
A
All right, it is now 10:00 a.m. on the dot. My name is Paris Pittman, I work at Google, and I'm also a lead of the special interest group for contributor experience. Welcome to today's edition of the Kubernetes community meeting. First up today we have Dinesh from Portworx with a demo, and then we're gonna go through our usual SIG updates. Oh, actually, I'm sorry: before SIG updates we'll have our graph of the week, which we get from our DevStats dashboards, and then at the end we'll round it off with some announcements.
D
Hi everyone, my name is Dinesh Israni, and today I'm going to be talking about an open-source project called Stork. The goal with Stork is basically to provide additional intelligence for storage drivers in Kubernetes. Kubernetes does have good support for storage plugins, either through in-tree storage drivers or FlexVolume drivers, but the thing is that it does not know about advanced features of the storage, such as where data is located or whether the storage is healthy on particular nodes. So Stork basically aims to solve those issues.
D
So some of the features that we have in 1.0 are, basically: hyperconvergence, so the scheduler actually talks to Stork to figure out where data is located and prioritizes those nodes. It also monitors the health of the storage drivers on different nodes and then fails over pods in case the storage goes unhealthy on those nodes. And we also pulled in the snapshot controller, the work that's being done in the kubernetes-incubator project, and made it a part of Stork, so that you can use it if you just deploy Stork. So I'm just going to go ahead and show you a demo of how this works.

D
The way Stork has been implemented, it basically has a plug-in interface, so you can actually write drivers for any of your storage solutions. Right now the only driver that has been implemented is Portworx.
D
We basically implemented Stork as a scheduler extender. So what we do is, basically, you can either configure your default scheduler to talk to Stork, or you can launch an additional instance of the scheduler, give it a different name, and then make it talk to Stork. Since I didn't want to muck around with my default scheduler, I basically just created an additional scheduler and called it "stork", and this is just using the default kube-scheduler image.
D
What I've done is I basically said that I'm going to use the stork-config, and this is defined here. So in the stork-config I'm basically saying that one of the extenders that I want to use is Stork; I basically defined a service for it, and I have said that every time you want to schedule something, please send it the filter and the prioritize requests. And this is basically how Stork is started up; this is the deployment for Stork.
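The wiring described here (a scheduler config that registers Stork as an extender and routes filter and prioritize calls to its service) follows the standard kube-scheduler extender policy shape. As a rough sketch, here is what such a policy might look like, built as a Python dict; the service URL, port and weight are illustrative assumptions, not values taken from the demo:

```python
import json

# Sketch of a kube-scheduler policy that registers an HTTP extender.
# The scheduler POSTs the pod and candidate nodes to
# urlPrefix + "/" + filterVerb and urlPrefix + "/" + prioritizeVerb.
policy = {
    "kind": "Policy",
    "apiVersion": "v1",
    "extenders": [
        {
            # Illustrative service address; the demo points this at the
            # Stork service defined in its config.
            "urlPrefix": "http://stork-service.kube-system:8099",
            "filterVerb": "filter",          # drops ineligible nodes
            "prioritizeVerb": "prioritize",  # scores the survivors
            "weight": 5,
            "enableHttps": False,
        }
    ],
}

print(json.dumps(policy, indent=2))
```

The second scheduler instance mentioned above would then be started with this policy so the default scheduler stays untouched.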
D
So both Stork and the scheduler are running in HA mode, so one of them becomes the leader. I'm just going to tail the Stork logs to show you how we would basically do the hyperconvergence when pods are scheduled. I'm just gonna use a MySQL deployment. What I'm going to do is: I have actually defined two PVCs over here, and both of them are using the Portworx volume storage class.
D
One
of
them
is
going
to
be
used
for
the
data
and
I've
just
created
another
PVC,
just
for
just
for
to
be
mounted
in
a
temporary
location,
I'm
going
to
talk
about
why
it
is
if
I
have
talked
actually
used
to
two
volumes.
So
basically
one
of
them
gets
mounted
into
the
MySQL
path
and
one
of
them
is
a
stamper.
D
So what's gonna happen here is: once we create this deployment and the PVCs get created, when the scheduler tries to schedule the pod it's gonna basically talk to Stork, and what Stork is gonna do is look at the two PVCs that are being used by the deployment and then talk to the driver that is configured, which in this case is Portworx.
D
So now that I've created this, when we go here you'll see that initially it will be pending, but once the PVCs have been created we will see that this will actually get a filter request and then a prioritize request. The filter request is basically just going to filter out nodes where Portworx, or the driver that has been configured, is not running, and then the prioritize request is going to prioritize the nodes where the data is located.
D
So let's see what happens. So we actually got a filter request from the scheduler; the request basically consisted of the three nodes that are in the Kubernetes cluster. It basically sent in kb2, kb4 and kb3, and we basically responded saying that it is all right to schedule the pod on all three nodes.
D
We then contacted the driver and figured out where the volumes were lying, and in this case we figured out that volume one was on kb2 and kb3 and the other was also on kb2 and kb3, and this is the response that we ended up sending over here. So right now, for every node that has a volume, Stork increments the score by 100, and if there is no volume on that node it basically gives it a default score of 10.
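The scoring rule described above is simple enough to sketch. The following is an illustrative reconstruction of the prioritize step, not Stork's actual code; the node and volume names mirror the demo:

```python
VOLUME_SCORE = 100   # added per volume with data on the node
DEFAULT_SCORE = 10   # score for a node holding no local data

def prioritize(nodes, volume_locations):
    """Return {node: score} the way the demo describes it.

    volume_locations maps each PVC's volume to the set of nodes
    holding a replica of its data.
    """
    scores = {}
    for node in nodes:
        local = sum(1 for replicas in volume_locations.values() if node in replicas)
        scores[node] = local * VOLUME_SCORE if local else DEFAULT_SCORE
    return scores

# The demo's MySQL pod uses two volumes, both replicated on kb2 and kb3:
scores = prioritize(
    ["kb2", "kb3", "kb4"],
    {"vol1": {"kb2", "kb3"}, "vol2": {"kb2", "kb3"}},
)
print(scores)  # kb2 and kb3 score 200; kb4 gets the default 10
```

The scheduler combines these scores with its own priorities, weighted by the extender's configured weight, so nodes with local data win ties.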
D
So the second feature in Stork (I have two minutes, so I'll be quick) is health monitoring. Basically what happens is, if the driver goes unhealthy on one of those nodes: Stork is basically polling the health of the storage drivers, so it's gonna figure out that something's gone bad and then delete pods from those nodes. Since I don't have much time I'm gonna skip that, and I'm going to show you how we can take snapshots.
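The failover behaviour described here (poll driver health per node, then delete pods stranded on unhealthy storage so the scheduler recreates them elsewhere) can be sketched as a pure function. This is an illustration under assumed data shapes, not Stork's implementation:

```python
def pods_to_failover(pod_placements, driver_health):
    """Return the pods that should be deleted so the scheduler can
    recreate them on a node with a healthy storage driver.

    pod_placements: {pod_name: node}
    driver_health:  {node: True if the driver is healthy there}
    """
    return sorted(
        pod for pod, node in pod_placements.items()
        if not driver_health.get(node, False)
    )

# The driver has gone unhealthy on kb2, stranding the MySQL pod there:
victims = pods_to_failover(
    {"mysql-0": "kb2", "web-0": "kb4"},
    {"kb2": False, "kb3": True, "kb4": True},
)
print(victims)  # ['mysql-0']
```

In the real system the deletion only helps because the prioritize step above then steers the replacement pod toward a node that still holds a replica of the data.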
D
Now we can basically take this snapshot, create a clone of the snapshot and attach it to another application. The way to do that is basically: in the PVC you would give it an annotation specifying the snapshot name, and the storage class would basically be Stork, because Stork is the one that knows about these snapshots and how to restore from them or take clones. So I'm just gonna basically create a clone from the snapshot and then attach that to another instance of MySQL.
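A PVC that clones from a snapshot, as described, might look roughly like the following, shown here as a Python dict for concreteness. The annotation key and storage class name are assumptions for illustration; check the Stork documentation for the exact names:

```python
# Sketch of a PVC that clones a volume from an existing snapshot:
# an annotation names the snapshot, and the storage class routes
# provisioning through Stork. Key and class names are illustrative.
clone_pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {
        "name": "mysql-clone",
        "annotations": {
            # which snapshot to restore/clone from (assumed key)
            "snapshot.alpha.kubernetes.io/snapshot": "mysql-snapshot",
        },
    },
    "spec": {
        # Stork knows how to materialize a volume from the snapshot
        "storageClassName": "stork-snapshot-sc",
        "accessModes": ["ReadWriteOnce"],
        "resources": {"requests": {"storage": "2Gi"}},
    },
}
print(clone_pvc["metadata"]["annotations"])
```

A second MySQL deployment would then simply reference `mysql-clone` as an ordinary PVC.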
D
So there is more information on the scheduler extender in the blog post that we've put up, and the source code is on GitHub. Right now, like I mentioned, it has support only for the Portworx driver, but we welcome contributions from the community to add support for more drivers, and we plan to add more features. For example, there might be conditions where you might want to also prioritize based on the zone and other failure domains, so we're going to add support for that.
E
Do you have documentation for this when it comes to Red Hat Linux? That's where I'm at; I work at Symantec. We are just trying to evaluate all of these solutions in our environment, but we standardize everything using Red Hat Enterprise Linux 7.4, and most of our environment is hardened, like all packages, so we have to scan and validate all of this. If you have documentation when it comes to Red Hat, that will help us a lot.
D
So if you can send me an email, I'll point you to that information. Yeah.
A
All right, thanks again, Dinesh, appreciate your time today. If anybody needs to know, he is going to paste his contact in the chat, and there's also a contact on the agenda in the notes today. All right, so a quick thank-you also to our note taker today, Josh Berkus. We're always looking for note takers for this meeting, so just feel free to jump in when you can. Next up is Jaice with the 1.10 release updates.
H
Thank you, Paris. We are at a pivotal moment in the release cycle, which is week 6 of 12, so 50% of the release time has expired. From this point forward we're going to start moving a little bit faster, with a lot more work in terms of release-time activities. In the agenda for the community meeting there is a link to the official schedule, so if you're curious, please review that. It is important that we stick to it; we reference it all the time.
H
So hopefully there are no surprises as things like code freeze come on, which is happening on February 26th. Feature freeze is past, but you may be interested in knowing what some of those things are going to be delivering in this particular release. There's a link in the agenda as well to the feature tracking spreadsheet, and in there you can see what is happening. There is a feature exception process, so if SIGs have had a delay for some reason and haven't been able to get their feature fully documented, definitely work with the release team.
H
We don't necessarily want to hamper people, but we're also trying to avoid the case where things are getting delivered at the last minute, and maybe in a way that isn't as thought out or fully supported as it needs to be to be at the quality we expect. So next week we're going to be cutting our first beta release and we're going to be assembling the release branch. This means that, essentially, we're gonna have to fast-forward the daily commits in master into the release branch.
H
So we keep those synchronized, and having the release branch allows us to keep on top of the testing and make sure that all the signal looks good on those changes that go in. So we'll be updating you about the progress on that next week. If you haven't seen it, I do send a weekly email to the kubernetes-dev list that has sort of the status and everything for the release.
H
If that's helpful, definitely let me know. I'm trying to raise visibility of this release generally, so that we have better radiation of these deadlines and time frames and people aren't surprised, because it seems like every release there's a surprising amount of surprise. So let's not do that. Release team meetings are on a weekly cadence.
H
Excuse me. On February 26 we're gonna switch to Monday/Wednesday/Friday, and two weeks before the release cut we are going to have daily release burndown meetings. Everybody's invited: if you join the Kubernetes milestone burndown group, you will get an invite to that automagically and you can attend. I try to make them entertaining, though no promise of puppies.
I
So, as background, the release team has a set of leads in specific roles, and then shadows who are learning about the process and looking to come up and contribute more. So Josh Berkus is the issue triage lead and I'm the shadow there, and as we went through the feature freeze phase I started to wonder about kind of the ebb and flow of the cadence, what might be coming, and kind of how much surprise is coming next. What I've come to understand is that there's a little bit of complexity here.
I
So everybody knows about issues on GitHub, but things are more complicated in Kubernetes, because we've got all sorts of labels, and then on GitHub there are also projects and milestones, and some folks use those or not, and then we also have this other repo, kubernetes/features. So there are a lot of different places, and it got me wondering: do we have a normal, classic S-curve of feature creation and closure, or what does this actually look like? So I'll share my screen here.
I
I'll show these couple of graphs, and within these I'm wondering: are there trends? Are there things that are observable, and what are we seeing here? This first one is the 7-day moving average of sig/release, kind/all issues, and we've got the release markers there. So maybe not a whole lot to see there: there's not a completely clear front-loading of features that we've then been closing, and there are some other spikes there in the middle.
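A 7-day moving average like the one on the DevStats graph is straightforward to compute; a minimal sketch over a made-up series of daily issue counts:

```python
def moving_average(daily_counts, window=7):
    """Trailing moving average like the dashboard graph: element i is
    the mean of the last `window` values ending at day i (the window
    is shorter at the start of the series)."""
    out = []
    for i in range(len(daily_counts)):
        chunk = daily_counts[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# Invented daily counts of opened issues, just to show the smoothing:
opened = [4, 0, 2, 6, 1, 3, 5, 9, 0, 2]
smoothed = [round(v, 2) for v in moving_average(opened)]
print(smoothed)
```

Smoothing over a week hides day-of-week effects, which is why spikes that survive it (like the ones around the freeze dates) are worth investigating.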
I
Actually, you start to realize that these correspond kind of to the open initial phase, feature freeze, code freeze and final stabilization, but the numbers are actually increasing as we go through the phases. Now, that's not necessarily a bad thing: you might be having sort of these overarching features and then sub-things being split off beside them, but it kind of raises some questions about maybe what's going on there. And one of the reasons I was looking in this direction is we had some concerns that some SIGs are missing the 1.10 release.
I
We had a particular SIG that was notably late on a bunch of features, so I'm trying to understand if there are things that we could do better here to help particular SIGs, and I think Aaron Crickenberger has mentioned a number of times that we maybe need a little more dimensionality in these. On the third one that I wanted...
I
So the data is not entirely clear, but one of the things I liked about this set of charts was that I came away with a few more questions than I started with, and some ideas for some additional data that I might like to try to pull out. And I think that's what I would like to see people doing with those stats: coming with a question, looking for answers, finding more questions, starting additional conversations on how we can improve as we go. So that's what I had for the week.
J
So SIG Architecture has been working on a few issues. We continue to work on the KEP, the Kubernetes Enhancement Proposal process, which is intended to be a formalization of the long-standing design proposal process: kind of a cross between a product requirements document and a design document, serving sort of both roles and other roles. So we continue to try to flesh that out.
J
We have a couple of efforts testing out the process. We do want to try to make it lightweight; we're basically trying to provide clarity about what people should do, as opposed to just sort of the cargo-cult thing that people have been doing, copying previous proposals, or not even being aware of why or how proposals are useful, and things like that. So that's been one of the big efforts. Something that we've done more recently is that we've started identifying some sub-projects within the overall project and under the umbrella of each individual SIG.
J
The next step we need to do is go identify the rest of the sub-projects, effectively, within the kubernetes repository. An example sub-project might be kube-dns, for instance, which is another one we call out. So I'd actually like to make a plea to all the SIGs, since this is being recorded: right now this is in sigs.yaml.
J
So take a look at what's there already, and if not everything is identified for your SIG (and that's true especially for the kind of big, long-standing SIGs like API Machinery, SIG Node and others), help us flesh that out. Actually, an upcoming topic is creating new repos for new code, which we're going to talk about, and before we start creating a lot of new repos for code we want to identify where the current code is. So we'll talk more about that then, and that's about it.
A
Yeah, it looks like a request: please figure out a way for issues to be tagged with their related KEP. But no questions, and send feedback to Joe as well. All right, thanks, Brian. Next we have SIG Scalability; I saw you on the line earlier.
L
Yeah, let me figure out how to... let's see if I can share my screen. Yeah, okay, so I'll go through this fairly quickly, just a logistics recap. One thing to note is that, to avoid a conflict with SIG Architecture, we moved our meeting time out by thirty minutes. We've also shifted to a bi-weekly cadence; links are here. I'll go pretty quickly. I will say we don't do much on the mailing lists; Slack and the meeting notes are probably the two main areas where interaction occurs.
L
So, for those of you who are not familiar with the SIG, it's a small but very consistent group; the same people show up very regularly. And the shift from "how do we build bigger and bigger clusters" to "how do we ensure that the big clusters that we build are really great" is pretty complete at this point. I think most of the real work in the SIG at this point is about fighting regressions, which does continue to raise the question about...
L
...you know, the beyond-5000-nodes question. I'd say there's very little discussion in the SIG about this at this point, but if you have a passion for this, please show up. There's definitely some signal out there that some folks are interested in configs a lot bigger than this; I might point you to the Firmament scheduler work, which looks kind of interesting although it's early. Running much bigger clusters seems to be one of the things out there.
L
So we have a new charter, which was really the main thing I wanted to surface to the community; there's the pull request. The SIG is fine with it; I think we're not really quite sure what we need to do to validate that this is it, but I'm sure we'll work that out over the coming weeks.
L
Big shout-out to Porridge for driving all the doc work around this and getting it done, so thanks very much. I thought it might be worth pulling a couple of things out from the charter, just in terms of who we are: defining goals, measuring those goals, and contributing to system-wide issues is the main story. So this is fundamentally a sort of tooling effort, tooling and consultation, but a lot of the effort is regression-oriented, and really, I think...
L
...the key thing here is that our goal is not to be firefighters but to try to keep the fires from breaking out. This is a focus on regression, and I think that's really probably going to be our theme for this year: working on tooling. I just picked one of the recent things along these lines, from PR flow, but there's going to be more coming up in future SIG meetings in this area. So, a sort of final thought and question on all this: kind of an interesting thing from the last KubeCon...
L
We have another KubeCon coming up. At the last one, one of the things that we planned to do was obviously get together, and we kind of thought it would be more developer types getting together and having deeper conversations about this. One of the things that happened was we had a lot of users show up and ask a lot of questions about how big you can make things and what that means: lots of interest from users in running big, big clusters.
L
So, you know, a question is whether we're doing a good enough job out of SIG Scalability of really explaining what Kubernetes scalability means, so I'd love to have some feedback on that. A shout-out to Sean: he was one of the Google folks who showed up and did just an awesome job walking the community folks through kind of how we do testing, and we hope that really helped people out, so I hope he makes it to Copenhagen. And that's it.
M
...be moved out automatically. It's very, very useful for people who run their clusters on-prem, because on on-prem clusters you normally don't have the same capabilities that you have with cloud providers. So, if you're interested in trying this feature, we are looking for people who are willing to try it with a real workload. We really would like to get your feedback; it's vital for us to see some real usage before we can move it to beta in 1.10.
M
Going down is pretty natural, because you're adding more complex features and all of them require capacity, of course. So we have a couple of performance improvement projects. One is to enable equivalence, as we call it: the equivalence cache. Basically, the idea behind it is that we store information about the scheduling decisions for a pod, and if another pod has the same scheduling requirements, we use the same scheduling decision for the second pod and other future pods.
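The equivalence-cache idea described above can be sketched as memoization keyed on a pod's scheduling requirements. This is purely illustrative (the real cache also has to invalidate entries whenever cluster state changes), with made-up pod shapes:

```python
# Memoize scheduling decisions for "equivalent" pods so identical pods
# can skip the full predicate evaluation. Illustrative sketch only.
class EquivalenceCache:
    def __init__(self):
        self._decisions = {}
        self.hits = 0

    @staticmethod
    def key(pod):
        # Pods are "equivalent" if their scheduling requirements match.
        return (
            tuple(sorted(pod["requests"].items())),
            tuple(sorted(pod.get("node_selector", {}).items())),
        )

    def schedule(self, pod, compute_decision):
        k = self.key(pod)
        if k in self._decisions:
            self.hits += 1
        else:
            self._decisions[k] = compute_decision(pod)
        return self._decisions[k]

cache = EquivalenceCache()
expensive_calls = []

def compute(pod):
    expensive_calls.append(pod["name"])  # stands in for running all predicates
    return "kb2"

# Two replicas of the same deployment share one decision:
a = cache.schedule({"name": "web-0", "requests": {"cpu": "500m"}}, compute)
b = cache.schedule({"name": "web-1", "requests": {"cpu": "500m"}}, compute)
print(a, b, cache.hits, expensive_calls)
```

The win comes from replicas of a workload being identical by construction, so the expensive predicate pass runs once per shape rather than once per pod.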
M
You may have noticed that the extension mechanism of the scheduler is not necessarily great for all use cases, particularly if you're interested in any extensions that require communication with the scheduler; the extension mechanism is not a great way of adding extenders, because we currently have this interface over HTTP, and we do lots of marshaling at runtime. So we are thinking about making the scheduler more like a framework plus an SDK that allows you to have in-tree plugins as well as external extenders.
M
So this is another item that we're working on. And finally, very quickly, I would like to point out three incubators that we currently have. kube-arbitrator is an effort to add certain new features to scheduling: one is gang scheduling; the second one is to support quota for hierarchical namespaces.
M
These are the things that kube-arbitrator does. We have another incubator for a cluster capacity tool, and this is a tool that allows you to dry-run certain things in your cluster: it is pointed at a running cluster and it restores the state of that cluster; then you can create a pod in the tool, and the tool tells you whether the pod can be scheduled or, if it cannot, how much more in resources you would need. Finally, we have the descheduler; the descheduler's job is to evaluate your cluster and determine if there could be smarter scheduling.
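The cluster-capacity idea described above can be sketched as simple integer division over each node's free resources. The real tool replays the scheduler's full predicate logic against a snapshot of cluster state, so this is only an illustration with made-up node sizes:

```python
# Estimate how many more replicas of a pod would fit, given each
# node's free resources and the pod's requests. Illustrative sketch.
def replicas_that_fit(node_free, pod_request):
    total = 0
    for free in node_free.values():
        # The binding resource on each node limits how many pods fit there.
        per_node = min(free[r] // pod_request[r] for r in pod_request)
        total += per_node
    return total

free = {
    "kb2": {"cpu_m": 2000, "mem_mi": 4096},   # millicores, MiB
    "kb3": {"cpu_m": 500,  "mem_mi": 8192},
}
pod = {"cpu_m": 250, "mem_mi": 512}
print(replicas_that_fit(free, pod))  # 8 on kb2 + 2 on kb3 = 10
```

Reporting which resource hit zero first is also how such a tool can tell you "how much more you would need" when the answer is zero.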
O
There is a PR that has been posted to the kubernetes/community repo, and it is in the community meeting notes, that summarizes our proposal. Basically, the summary of the proposal is to say: well, we are going to sunset, or close down, the Kubernetes Incubator. That doesn't mean we're gonna, you know, kick out existing projects, but we're not going to accept any new projects, and instead we are identifying three classes of repositories.
O
The first class of repository is what we've called associated repositories, which are basically repositories that are largely independent from Kubernetes but are conformant with the community guidelines, including the CLA and so forth, and those are pretty freeform beyond that. A step closer to the main project is the notion of a SIG-sponsored, or SIG-associated, repository, and that's a repository where a Kubernetes SIG agrees that they are working on it.
O
It is intended to be a place where SIG members are actively working on the project: not just a SIG that has sort of adopted someone else's project, but one where the SIG members are actually actively working on it. The restrictions on those projects are a little bit tighter and are detailed in the pull request. And then, finally, kubernetes repos, or kubernetes core repositories, are sort of the closest thing to the main repository, and they are expected to be...
O
...things that are really core to the operation of the cluster, and therefore, instead of being approved by a SIG specifically, they are going to be approved by SIG Architecture in general. I definitely encourage everybody to take a look at that; there's also a FAQ at the end of the pull request that tries to answer some questions that people would have.
O
There have also been some comments, and I'm going to go through them, and others maybe have gone through and responded to some comments, but please go through and take a look, because this is how we propose to move forward with repositories. I want to emphasize that this is intended to be about new repositories going forward; this is not intended to churn any existing repositories, and we don't want to disrupt anyone's current workflow.
J
Yeah, so I just want to re-emphasize a point I quickly made earlier, and I just made a comment on the doc about this: we're not going to force any changes to existing repositories or code, other than, for example, you need to follow some of the rules which are in here, like you must have an OWNERS file and you need to point to that OWNERS file from sigs.yaml, and things like that, so we can start really understanding which SIG owns which piece of code, where all their code lives, and so on.
J
Yeah, so for SIGs who do want to move some code out of incubator to, for example, SIG repos, that should be possible and straightforward. I need to check with Aaron about whether there's anything else we have to actually do to the GitHub organizations to make it open for business, but we're working on that.
P
So I've got a quick question here, if it's okay to ask questions now. The associated repos, which are, I guess, any repo that's not part of any Kubernetes organization but where you're basically signing up to use the CLA, institute the CLA audit rules and apply the code of conduct: who would want to do an associated repo versus just something off on their own? Like, what's the benefit, why do it, and how does it help unblock this incubation process?
O
Where it's just sort of two or three people working on something, I think that's where it's not necessarily clear; maybe it's a tool, for example, or maybe you have a great idea. As an example, I have started working on this tool I'm calling kube-sanity, which is basically the ability to run sanity checks on a cluster for various properties that you expect to always be true, and fire alerts when they're not. I don't think a SIG is ever going to really own that.
O
I don't ever really expect there to be more than two or three contributors, so I think that's a great example of where you might want that to be an associated repository. I think there's also a degree to which maybe you just want more freedom, right? Maybe you don't want to, you know, have a SIG be able to tell you how that code should be written, and that's another motivation.
O
It certainly could be; there's no reason for it not to be, right? I think that the motivation for allowing it, or identifying it as an example, is to make it easier for people in the community to contribute, because they know that the code of conduct will be applied and because they've already made arrangements with their company to sign the CLA. But there's no reason... I mean, if somebody doesn't want to, none of this is intended to block people from having, you know, free-range wild repositories out wherever they feel like having them.
Q
Sorry, I was having trouble with mutes. George put me on the schedule just to, one of those, say hello: I'm going to be the program manager for Kubernetes efforts kind of going forward, and we've added a team of about 16. So if you have questions or are interested in collaborating on this, please feel free to reach out; I'll try and make sure that my contact details and stuff are broadcast out. I'll personally be participating in contributor experience and in SIG PM.
B
All right everyone, we have #shoutouts in Slack, so if you see someone in the community doing something great, just mention them in that channel and we'll give them a shout-out every week. This week's shout-outs go to Duffie Cooley, Stefan Schimanski, Craig Tracey, Timothy St. Clair, Chuck, Liz Frost, Nikhita Raghunath, Aaron Crickenberger, Ilya Dmitrichenko, a horde of wretzky, Ellen Körbes and Tim Pepper. Thank you.
B
Yes. So, instead of opportunistic scheduling for the SIG updates, we moved to basically assigning you a time slot over the next cycle, and that's gonna be linked in the document every week. So we've scheduled you out; if you have a scheduling conflict, you're encouraged to swap with other SIG leaders. And what we're gonna do is try to keep the same pace, so that it averages out to about one and a half updates per cycle for each SIG. So you'll find that at the top of the docs.
A
Awesome, yes; thanks to you, we are not chasing folks down anymore. All right, and then our last update is Meet Our Contributors. Our first live-streamed edition of that went off yesterday morning, and it was awesome: we had, I think, five contributors online, all taking questions from the #meet-our-contributors Slack channel as well as Twitter. This is intended to be a part of the mentoring initiative, as an idea for mentoring on demand, ask-a-contributor-anything, and we also want to expand this and do peer-to-peer code reviews.
A
So if you have code that you would like to have a second glance on, also submit that to #meet-our-contributors on Slack, or the GitHub page as well. The call for volunteers is in the agenda as well, and that's it for the announcements. Any last-minute questions, concerns or comments on any of the announcements or SIG updates?