From YouTube: Kubernetes Community Meeting 20150303
Description
We have PUBLIC and RECORDED weekly video meetings every Thursday at 10am US Pacific Time.
https://docs.google.com/document/d/1VQDIAB0OqiSjIHI8AWMvSdceWhnz56jNpZrLs6o7NJY
Kubernetes Anywhere Demo, CNCF Update, 1.2 Release Watch, 1.3 Google feature commits, SIG Auth update
A
Push the record button. Good morning, all! It is Thursday, March 3rd, and this is the public and recorded community meeting, so we will go do many fun things. Today's agenda includes a demo from the Weaveworks team on Kubernetes Anywhere; we're going to get a Cloud Native Compute update from the Technical Oversight Committee chair, Alexis Richardson. Chris Aniszczyk, the acting executive director of the CNCF, is also going to be on the call this morning, and we'll do that update because we're pretty close to having the Kubernetes project accepted into the Cloud Native Compute Foundation, so Alexis will tell us more about that at about ten after. Then we'll do a 1.2 release watch: T.J. Goltermann is going to tell us where we are with 1.2 and jump right to the 1.3 features, and then we'll do some SIG reports. Eric Tune has a couple of slides for us to talk about SIG Auth, and then, if we have time, we can do a SIG Cluster Ops update. There are a couple of announcements in the notes as well; we'll get there as we get there if there's time, otherwise they're fine in the notes. So, Elon... sorry, Ilya: are you game to kick us off?
B
Sure, I can do it. Yep, and I've got a couple of slides first. How do I share my screen here?
C
Yeah... okay, it's the first one... there you go, thank you, and this is up now. Yep, okay.
B
Okay, well, yeah, I'd like to quickly go through the Kubernetes Anywhere project. It is something that allows people to deploy portable clusters in exactly the same fashion anywhere, whatever kind of environment they're running on, and it leverages Weave as the bootstrap and management network. It also uses Weave Net for pods, but that is not crucial to this project.
B
Somebody's on the moon... okay, there's some noise in the background. Okay, so we're all familiar with this, but for a novice user this is pretty complex. Well, they can consider various options to get the whole stack up, and one of the obvious options is DNS, which, as you see, works great in some environments, but in Amazon it's not that easy, and in a private cloud it's sometimes close to impossible.
B
So,
and
having
faced
all
these
deployment
options,
which
both
are
they
going
to
use,
maybe
they
actually
want
to
use
a
pass?
Maybe
they
want
to
run
on
bare
metal?
Maybe
they
want
to
use
our
vertex
work,
a
overlay
Network
be
and
what's
the
what's,
the
difference
between
l2
and
l3
and
for
for
the
novice
who
uses
these
our
very
big
questions,
and
they
also
have
questions
about
distributions
and
how
they
gonna,
ultimate
or
analytically,
use
ansible.
A
B
B
...or Chef? Or Terraform, maybe. And they still have a bunch of questions that matter a bit more to them, such as how they're actually going to run etcd on top of it, and, yeah, maybe they actually want to use Mesos; whatever, they still haven't even decided. And things like load balancers and databases: how does that work? Or, well, I've got etcd now; do I have to back it up, or what do I have to do with it? Okay?
B
Then the goal is to dramatically simplify Kubernetes deployment: an easier way to get started, to scale out, and to move between infrastructure without changing any configuration, and even to keep the portability of your config as it is, allowing users to move or clone the entire cluster and making TLS setup transparent. The approach we are using is fully containerized daemons, and we believe in Weave as a management network.
B
...and the approach is to be able to integrate with unrelated config management or provisioning tools. In a nutshell: given a set of Docker hosts, we decide to allocate three machines to run etcd (we run a proper three-node etcd cluster), and we allocate one machine to run the master and two for workers in this example. So on each of the machines you go and install Docker, and then, once that's done, you install Weave and launch it, like so.
B
This
is
this
is
exactly
the
same
on
machines
and
then
then,
then
you
need
to
make
decisions
where
you're
wrong.
What
engine?
And
these
are
neither
the
doc
commands.
You
run
on
the
ATM
machines.
If
you
decided
to
to
set
up
a
three
node
cluster
and
all
you
need
to
care
about,
is
the
city
plus
the
size
and
essentially
you
just
enumerate
them
specify
name
NCT,
one
I
said
you
cleared
it
took
you
three.
We
will
create
dns
records
for
these
and
then
on.
B
The
Master
will
start
the
API
server,
controller
manager
and
cheddar
or,
like
so,
and
then
on.
The
workers.
Will
first
start:
we
need
to
set
up
cumulative
olives,
because
human
has
rather
complex
a
bunch
of
things
that
you
have
to
bind
mount
into
the
container
if
you
run
in
container
X
cubed,
so
there's
a
helper
that
does
couplet
holding
set
up
and
next
we
start
qubit
itself
using
volumes
from
cupid
volumes
and
then,
when
I,
the
we're
on
the
proxy
okay-
and
this
is
the
same
on
all
workers-
that's
it
okay!
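The per-role flow Ilya describes (three etcd machines, then the master components, then kubelet with its bind-mounted volumes plus the proxy on each worker) can be sketched as plain docker commands. This is a rough illustration, not the actual kubernetes-anywhere tooling: the image names, tags, and flags are placeholders, and the script only prints the commands it would run rather than executing them.

```shell
#!/bin/sh
# Dry-run sketch of the bootstrap described above; nothing is executed.

run() {
    # Print the docker command instead of running it.
    echo "docker $*"
}

# 1. A three-node etcd cluster: the same command on etcd1..etcd3,
#    differing only in the member name.
for name in etcd1 etcd2 etcd3; do
    run run -d --name "$name" example/etcd \
        --name "$name" \
        --initial-cluster "etcd1=http://etcd1:2380,etcd2=http://etcd2:2380,etcd3=http://etcd3:2380"
done

# 2. The master: API server, controller manager, and scheduler,
#    each as its own container.
for component in apiserver controller-manager scheduler; do
    run run -d --name "kube-$component" "example/kube-$component"
done

# 3. Each worker: a volumes container for kubelet's bind mounts,
#    then kubelet itself with --volumes-from, then the proxy.
run create --name kubelet-volumes example/kubelet-volumes
run run -d --name kubelet --volumes-from kubelet-volumes example/kubelet
run run -d --name kube-proxy example/kube-proxy
```

Running the sketch prints one `docker ...` line per container; in the demo these are issued on the corresponding machines, with Weave providing the `etcd1`...`etcd3` names.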
B
So here is a brief summary of the usage of the Terraform module. You just declare this module, provide AWS keys, set the region, give it a cluster name, and select either the secure or the simple setup flavor; you can also set the SSH key name and the instance types here. So I've done that, right: this is my Terraform config, which is a copy of the one in the README, and there's a variables file.
B
Here are etcd2 and etcd3, and then the control module as well. This is not using self-hosting; we run all the server components as separate containers, which just makes things a bit easier to understand. Self-hosting is great, but I'm not entirely sure it makes things a lot easier for the user. And there's a toolbox script that you can run. Oh, by the way, this is TLS-enabled, and the way TLS works is actually through volumes.
B
So
what
Dom
calls
data
containers
and
in
the
mvcc
to
based
example,
I
push
this
data
containers
to
my
supposedly
safe
amazon
container
registry
that
I
can
give.
I
can
set
finery
informations
on
that,
not
only
master,
nobody
can
pull.
One
can
pull
the
container
image
that
contains
the
server
pls
pls
key
and
all
the
other
things
that
I
want
to
keep
safe
and
nodes
can
only
call
the
roll
and
images
for
that
I'm.
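The data-container idea can be sketched like this: the TLS material lives in an image whose only job is to expose a volume, and pull permissions on the registry decide who can obtain it. All names and paths here are illustrative, and the sketch only prints the commands it would run.

```shell
#!/bin/sh
# Dry-run sketch of shipping TLS material as a Docker data container.
# Image names are placeholders; commands are printed, not executed.

show() { echo "docker $*"; }

# Build an image that holds only the server key/cert under a VOLUME,
# and push it to a registry where only the master may pull it.
show build -t registry.example.com/master-secrets ./master-secrets
show push registry.example.com/master-secrets

# On the master: create the data container, then mount its volume
# into the API server with --volumes-from.
show create --name master-secrets registry.example.com/master-secrets
show run -d --volumes-from master-secrets example/kube-apiserver
```

Workers would pull a different, less privileged image by the same mechanism, so the registry's pull permissions become the access control on the secrets.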
D
Hi, I'm Alexis, everybody. There is now a Cloud Native Computing Foundation. It is part of the Linux Foundation, and it is, for want of a better phrase, a bit like the Apache foundation, but for cloud native. The idea of setting this up was to place Kubernetes into a neutral foundation so that companies other than Google could contribute to it. Now, some of you may think that's unnecessary.
D
Others
may
think
it's
necessary,
but
it's
something
that
really
for
large
companies
is
seen
as
quite
important.
The
Google
team
has
written
a
piece
of
paper
that
says
that
if
the
technical
committee
of
the
CNC
f
except
scuba
Nettie's,
then
coo
benetti's
will
be
moved
by
google
into
the
foundation.
This
means
that
copyright
and
trademarks
currently
owned
by
Google
fuku
benetti's
would
move
to
the
Linux
Foundation,
and
then
it
would
become
a
linux
foundation
project
inside
the
plant
native
foundation.
D
There
will
be
other
projects
inside
the
cloud
native
foundation
what
those
projects
are
and
how
they
are
run
is
still
up
for
grabs.
Ok,
that
doesn't
mean
that
there
is
no
plan,
but
it
does
mean
that
we're
going
to
be
working
with
the
coup
benetti's
community
to
make
sure
that
we
have
the
right
set
of
rules
for
Cuba
Nettie's
in
place.
There
is
no
idea
of
trying
to
come
up
with
some
crazy
set
of
new
stuff
that
is
imposed
from
the
Linux
Foundation
to
this
project.
D
This
project
really
will
be,
is
the
reason
the
clan
native
Foundation
has
been
set
up.
In
the
first
place,
it
will
be
the
first
project
in
the
foundation.
We
hope
there'll
be
many
others
just
like
Apache
started
with
one
web
server
project
is
now
the
home
of
many
many
other
things
that
nobody
predicted
at
the
time.
Ok,
so
I'm
happy
to
take
a
few
questions,
or
you
know
whatever
Sarah
Brian
I
see
Joe
on
the
call
as
well
want
to
talk
about.
D
I'll
do
my
best
to
be
forthcoming,
we're
in
the
middle
of
a
vote
right
now
inside
the
TOC
to
accept
or
not
Cuban,
at
ease
into
the
foundation.
If
that
sounds
odd,
it's
only
because
the
vote
is
happening
by
email,
cert,
a
synchronous
it
hasn't
completed
yet
I
hope
it
will
be
complete
by
next
week.
Ok,
yeah.
A
That
I'm
working
on
is
a
frequently
asked
questions
about
what
it
might
mean
to
be
a
cloud
native,
compute
foundation
project,
so
I'll
be
taking
notes.
Also,
in
addition
to
the
notes
about
what
specific
questions
around
that
exist.
So
anything
we
see
here
that
as
a
question
that
we
don't
have
an
answer
to,
we
can
go
back
to
the
technical
oversight
committee,
the
governing
board
and
get
answers
and
get
this
all
very
much
more
published
because
we're,
as
is
traditional
and
technology,
we're
building
the
bus
as
we
fly
it.
D
Friend,
ok,
so
the
CNC
f
was
set
up
with
a
charter
which
I
believe
is
public
out
on
the
CNC
f
site
in
the
Chancellor
I
believe
that
it
says
correct
me
if
I'm
wrong,
that
the
contributions
model,
the
default
contributions
model
for
sensitive
projects
will
be
the
developer
certificate
of
origin,
pc
0,
which
is
also
used
by
many
limits
projects,
and
I
believe
it
is
used
by
hey
joe.
I
agree,
I
think
it's
much
better.
It's
used
by
many
of
the
companies
participating
like
my
company
we've
works,
use
it
chorus
use
it.
D
I
think
dr.
use
it
as
well.
Google
has
used
of
CLA
up
until
now,
which
is
a
more
traditional
copyright
assignment
mechanism.
Joe
is
just
demonstrating
on
the
screen
in
the
corner.
What
happens
to
developers
when
they're
asked
to
sign
of
CLA
these
days?
It's
seen
as
a
bit
of
friction,
so
we
hope
to
have
a
lower
friction
contribution
model.
If
you,
google,
for
OpenStack
CLA,
vs
DC,
oh
you'll
see
the
debates
the
OpenStack
community
went
through
and
the
pain
they
went
through
when
they
realize
that
CLA
wasn't
getting
them
any
contributions.
D
Now, if a project insists that it wants to have a CLA, then I think the CNCF rules may be gracious enough to allow that, but the expectation currently is that it will be moving to a Developer Certificate of Origin model, which should lead to no changes in terms of what the committees and project leads have to do, but should make contribution much easier for the average developer.
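In practice the DCO workflow being discussed is just a Signed-off-by trailer on each commit, which `git commit -s` adds automatically from the committer's configured identity. A minimal demonstration in a throwaway repository (the name and email are example values):

```shell
#!/bin/sh
# Minimal DCO sign-off demonstration in a temporary git repository.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q .
git config user.name "Jane Developer"       # example identity
git config user.email "jane@example.com"
echo hello > README
git add README
# -s / --signoff appends the DCO trailer using the configured identity.
git commit -q -s -m "Add README"
# Show the full commit message, including the trailer.
git log -1 --format=%B
```

The printed message ends with a `Signed-off-by: Jane Developer <jane@example.com>` line; that trailer, rather than a signed legal agreement, is what certifies the contribution's origin.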
D
I'm not an expert on what happened with Node, but there was a really good post on Medium last week by Mikeal, who was involved in the Node transition, talking about some of the rules they set up, and they seem to have gone really out of their way to encourage contributions. I would not be surprised to hear that.
G
With the OpenStack transition: OpenStack didn't have a lack-of-contributors problem, and ninety-nine percent of OpenStack contributions are corporate-funded, so the CLA was actually not considered a hurdle for them. The nice thing about the DCO is that it makes it much easier for individuals to act. The downside is that they can contribute code that their company hasn't given them permission to add; that's the...
G
Right, yeah. I would suggest (I'm in favor of the DCO, I just would suggest) that for developers who work for companies, there be some way to make sure that the company can be on board; that we can actually have companies say that they are on board. (A: You mean, yeah, with DCO contributions they don't need a CLA, but it just provides that extra comfort for both parties, right.) What we really don't want is developers getting in trouble for contributing and their companies being mad.
F
Another question... but I'm just weighing in on DCO. For me, the good thing about the DCO is that it's actually, if I'm reading it correctly, not a copyright assignment. That means the code base can't be dual-licensed in the future, and it doesn't feel like you're giving as much away. A different question:
D
...standards: politely turn your back and walk away until they get it, because it's not going to be about that; it's going to be about open source and running code, and that's going to happen fast. In terms of the debate between, you know, a free-for-all and some sort of privileging of specific projects, I think that is not a settled topic. I personally am much more in favor of a free-for-all, but I am just one voice in that.
D
I
would
be
perfectly
happy
to
have
competing
projects
and
lots
and
lots
of
areas
but
and
I'm
not
alone.
In
thinking
this.
Having
said
that,
I
think
there's
going
to
be
several
very
important
areas
where
there
will
be
strong
social
forces
pushing
people
together.
Let
me
give
you
an
example.
So
I
am
lucky
enough
to
have
briefly
worked
with
the
Google
team,
along
with
chorus
VMware
and
calico
at
madis,
which,
on
a
network
interface,
definition
for
the
Cuban
Eddie's
plugins
project.
This
started
out
being
called
something
called
CNI.
D
You
might
have
seen
some
blog
posts
about
it
from
the
docker
team
and
previously
from
Tim
on
the
Google
team.
So
you
know,
CNI
is
a
networks
interface
for
any
container
that
is
also
being
applied
to
Cuba
Nettie's
as
an
orchestration
technology.
I
can't
see
any
motivation
for
having
to
see
ni
projects,
in
fact
that
look
that
would
be
really
dumb,
so
I
think
it's
very,
very,
very
unlikely,
but
there'll
be
that
kind
of
thing.
D
So
in
as
much
as
there
are
things
that
look
like
interfaces
interoperability
guidelines,
those
are
very
likely
to
be
converging.
I
think
that,
on
the
other
hand,
when
it
comes
to
things
like
orchestration
or
scheduling
the
make,
you
may
find
that
there
is
room
for
different
approaches
according
to
different
use
cases.
So
one
of
my
personal
criticisms
of
KU
benetti's
as
it
stands
today
is
it's
quite
big
and
complicated.
A
So I will step up for a minute and describe the TOC makeup and the broader picture. There is a governing board that is seated with members of the Cloud Native Compute Foundation, people that have put in money to help guide it as a governance board. The real meat of the work happens with the Technical Oversight Committee, which is just being seated now. Alexis is the chair of that technical oversight committee, and right now there are six other members.
A
Now, these members of the technical oversight committee were nominated and then voted on; five others, sorry, right, six total, and there are five other members who were nominated and then voted on by the governing board. Now, given that the governing board (which, not just but primarily, put in money to get this started), we knew that the technical oversight committee would not be complete at the start.
A
So the technical oversight committee gets to fill in two more seats that they choose. As an example, and I know that we've had this question internally: right now there is no one from Google on the technical oversight committee, or, more directly, from the Kubernetes community. There are related community members, but not directly, and that is one of the things the technical oversight committee is tasked with fixing. So there are two more seats where the technical oversight committee says...
A
...we know we have holes in this space and we need expertise in these places, so they fill two more seats. And then there is an end-user seat on the technical oversight committee: the end-user membership of the Cloud Native Compute Foundation gets to nominate one more seat on the technical oversight committee...
A
...to give us really that end-user vision and visibility. Now, given that we have not yet made the end-user membership broad at this point, we're asking the TOC to make an interim appointment of someone who is very user-focused, because they are users. So we will have three more seats seated on the TOC in the next, probably, week to...
A
...two to three weeks; is that about right, Alexis? Yeah. Okay, so we would expect those seats to also be filled out, and then we will have a full technical oversight committee, wherein the TOC starts defining what a lot of this means: what it means to be a Cloud Native Compute Foundation project, what those sorts of things look like, and digging in with the Kubernetes community, as the first project being accepted, into finding the best practices, the common interfaces, the common overlaps of what we can do as developer communities. But the intent is not to invent something new.
D
We're looking for what I would call a common-sense solution, and not something that makes everybody feel there's been some enormous contortion. This arises in some quite practical ways and is slightly constrained by some quite practical concerns. For example, it turns out that on GitHub you can have organizations, but you cannot have sub-organizations, so you can't have a meta-organization called cloud-native on GitHub and then split that into a Kubernetes org and so on into orgs for other projects. That is, like, really super irritating.
D
We're going to try and figure this out, and honestly, this would be a good time to appeal for constructive suggestions and ideas. Send them to Sarah, or to me, or to Chris Aniszczyk, or anybody else that you know is involved. There is a public mailing list for all of these things, which you can join and send stuff to, so go for it.
A
Sorry; the technical oversight committee meetings are open if you want to participate in them, as is their mailing list, so it's something that I can connect people with, or we can put a link into the notes. Discussion?
A
The whole point, even straight down from the governing board, is light oversight, until and where we find scalable things that work, to make the communities and the projects go faster and better; and over and over in the governing board meetings it was said that we do not want to slow these projects down.
G
I would love to see more clarification on the difference between an implementation project and a specification project. I think that's a really insightful distinction. (Mm-hmm, okay.) One of the things I would ask is, you know, can we clarify that? Can we actually formalize it? Because that might be the difference between where you just want to move really fast and where you actually want to slow things down, right.
A
So I'm going to jump on; like I said, we have a really full agenda, and we'll jump on to 1.2 and then 1.3. Although, as I said, any questions can come to me, especially things that should go into the frequently asked questions about this transition; I know it's been coming pretty much since Kubernetes 1.0, and I'm happy to facilitate those. So, T.J.: 1.2 release watch. Yeah.
I
In our burndown meetings we look at the milestone burndown, and nowadays we're actually looking individually at every bug that is left, and we've determined that the important bugs we still have to fix are not complex, at least in terms of dependencies and the ability to check them if needed. The flakes have been going down; they're not perfect, but they're much better, and the submit queue has been running smoother over the last couple of days.
I
So that's all good data to say that we can cut our branch and declare beta very soon, so we picked tomorrow morning, 11am Pacific time (so morning for some of us), and that is when we will branch and create a beta release. Anything that gets in before that point will be in both head and 1.2; any code that you get in afterwards should either go to head...
J
I'll point out, for people worried about the submit queue: there are a whole bunch, 31 PRs right now, where Jenkins is messed up and they will never complete, and they will never go into the merge queue, ever. Everything that's targeted for 1.2, everything that has a "looks good to me", we (Jeff and I) managed to get cleared; but if you have a PR that isn't targeted for 1.2 and doesn't have an LGTM, and your end-to-end tests have been sitting there for hours or days...
I
No, just that, basically, once we branch we're sort of in final watch for release. I believe for the 1.1 pipeline it was around two weeks-ish, so I think one and a half to two weeks is probably what we're looking at; but basically, once the numbers are there on test stability and the number of remaining blockers in the milestone, people release.
E
We're moving the docs (we discussed this in a previous community meeting) to a separate repo under the kubernetes org, and there are a bunch of benefits to that. In an announcement on kubernetes-dev I posted a link to the proposal, which discusses those benefits; but anyway, that's what we're doing. So we're going to block any PRs that touch those directories in the submit queue automatically; Eric was kind enough to write a new munger to auto-label...
E
...to label those PRs, and the submit queue will be watching for those labels. Anyway, we're still working out all of the logistics for that, because there are a bunch of semi-manual transformations that need to be done to the docs when we copy them over, and we need to leave forwarding links in place, and things like that. So, to give us time to do all that stuff, we need to freeze the docs.
M
I have a question about 1.2. We have some cases where there are scripts that reference things by version; I'm thinking, for example, of the cluster Ubuntu install scripts. There are mentions of versions of things that are typically some number of weeks to months out of date. Is there any vision for how stuff like that gets updated?
P
What I'd like to share is what we at Google consider to be the things that we're going to put the majority of our effort into making sure we get across the line for 1.3. It is certainly not the only thing that we're going to do, and it's certainly not something that we alone are going to do; we would love help from the community and things like that. But these are the things that we think would make for a quite compelling...
P
...1.3 release, and I hesitate to call them blockers, but we're certainly going to do our best to get them over the line. So in 1.3, we at Google are going to focus on getting five things over the line. The first is what's currently called PetSet; we're trying to figure out a better name for it, but it's something around legacy application support. There's already a proposal out for it right now by Red Hat (thanks so much to the guys there), and we're going to put time behind that.
P
The second is the number of nodes we currently support: you know, ideally between two and five thousand total nodes supported by Kubernetes. There's quite a bit of investigation that needs to go on here about what's realistic and not, so there's going to be some work understanding exactly what our target is. We're going to integrate a number of different IAM solutions for identity and access management, and ACLs. Support for cluster autoscaling, allowing Kubernetes to provide the signals to your underlying cluster to scale: obviously we will implement that on GKE, and we would love other...
P
...folks to wire it up for other clouds. And then, finally, some improvements to scheduled jobs, to make running things on a time basis better. So that's the set of things that we think are the top priorities, that we're absolutely going to allocate people's time to here at Google, and we would love people in the community to help out on those; but those are the ones that we consider to be top of mind.
P
There are some additional contributions that we certainly have in mind that are, for lack of a better term, nice-to-haves; we're not going to consider them to be necessarily blockers for the next release, but we'd love to make progress against them. These include things like our distributed testing, the dashboard, improving our autoscaling metrics, simplified configuration and setup, improving our scheduling and scheduling decisions, and improving the way that we allow people to test their nodes and potentially bring their own nodes, including their own custom images.
P
Things like that; improving the way that Docker Compose interacts with Kubernetes (ideally we'd have a kind of transparent solution for that), and a series of other things. Reliable cluster setup is another one that I want to highlight: making it very, very easy to set up your own cluster, no matter where it is, with or without the use of kube-up.
A
That is an enormous portion of what Google does on the feature side, but the next part is, I suspect, just as interesting to the community, which is: now we ask you all, today, what you're working on and what you want to get into 1.3, because some of our resource allocation over the next three months has to be tailored for you: to help you work with others, to become reviewers, to lend your expertise and ability, and potentially to refactor something so that what you can do, or what you want to do, can be done.
P
We are very excited to support that and would love that transparency. If you have a company need and want to move your company forward and require a certain feature, great: tell us. Tell us how we can help enable that, enable your contributions and things like that, and give us feedback on the features that we are working on; all those various things.
P
We're trying to be transparent and trying to do this very iteratively. The list that I provided here we will bubble up into an artifact that we make public, and that will not be a Google-owned artifact.
P
By any stretch of the imagination, we fully expect that other people will add things to the next feature list and add in; as you see there, we will add Red Hat's list in the next couple of weeks, and I actually don't know whose alias that is, but they're going to be working on workflow. So again, the purpose of this is just us putting a stake in the ground on the things that we're working on.
A
We have asked for those lists from you all, just so we can continue discussing and do resource planning across the whole of the community. We've asked for that list at our next meeting, which is actually in two weeks; there is no meeting next week because of KubeCon and time zones and all the fun that is going on with 1.2 and trying to get it wrapped up at all. So that's March 17, and we can have a longer discussion; we can bring lists; we can make space for that next time.
O
Yes, I'm going to make a real brief comment. At the SIG Testing meeting a couple of weeks ago, Sarah, I believe you showed up and shared with us the wonderful news that federated testing was going to be a priority, "like, above Kubernetes," I think was the phrase. I understand this was early in the process and there have been lots of discussions since then, but in the interest of transparency, I was wondering if somebody could shed a little light on what has caused distributed testing to sort of fall into the nice-to-have bucket.
P
It makes it really clear that it is a priority, and that, even with a wide community, there's only a limited number of brains and fingers to do the work. Not having it running would be a release blocker for 1.3; I think that's the important position. But I would strongly urge the Google folks to take this on, and I think you'll find that you get a large amount of support from the rest of the community on that.
P
I don't want to speak on behalf of Eric, who is our test lead, and I don't think he's on the call right now. I would be absolutely stunned if it was not out long before 1.3 released, but I understand. Bob, I'm not going to do that on this call; I will talk to our guys and get back to you. (Bob: Okay. Just again, for clarity: we were very pleased after the last SIG Testing meeting when it seemed like you had put the stake in the ground, so this seems like a retrenchment, and that's very, very worrisome.) Bob, I would not call this a retrenchment in any way, shape, or form; I literally have not spoken to Eric.
P
So, okay, please only take it as a lack of communication and not a retrenchment. (Bob: All right. If it's not showing up on the main release priorities, though, then I really worry that treating it as this orthogonal other thing means that it doesn't really get the priority and attention that it deserves.) Understood. Yep.
O
I just wanted to do one other quick thing, and then I don't want to take up any time from the SIG updates. A while ago, I think I had sort of asked that SIGs provide updates here, and maybe even a little more frequently than on a weekly basis. So I've started posting the SIG Testing weekly meeting notes to kubernetes-dev, preferably the day of. I'm just sort of putting a call out there that if others want to do that, it's super helpful for starting to keep up with what's going on. There's...
A
...also a list of all of the working documents from the different SIGs; that is more of a pull model as opposed to push, and I agree the push model is better. There's a list of all the working documents and the SIGs on the wiki, so you can go peek there, but I think the push method is great too. All right, let's jump to Eric, and then we will see what we have time for after Eric and SIG Auth. Can you hear me? Yes, we can.
N
Okay. First, I want to thank everyone who made contributions in the auth area for 1.2. I tried to list everyone; apologies if I missed anyone, but there was great community support here. Here's an overview of the changes that are going to be in 1.2.
N
There's the Amazon Docker repo thing, support for that; and, as with all the complaints about docker config.json not working on Kubernetes, I believe that's fixed in 1.2 now. With secrets, there are now new secret types that help you validate that you've constructed your secret right, including the docker config.json type and others, and you can now consume secrets in environment variables in 1.2. In terms of authorization, we now have the ability to list the groups that a user belongs to in the token file, so you can use those groups to do authorization.
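The token file Eric mentions is the API server's static token authentication file: a CSV of token, user name, and user ID, with an optional quoted, comma-separated group list as a fourth column. A minimal sketch, with made-up tokens, users, and groups:

```shell
#!/bin/sh
# Sketch of a static token file with a group column.
# All tokens, names, and groups here are placeholders.
cat > known_tokens.csv <<'EOF'
s3cret-token-1,alice,1,"devs,admins"
s3cret-token-2,bob,2,devs
EOF
# The API server is pointed at the file with
# --token-auth-file=known_tokens.csv; an authorizer can then match
# on the "devs" or "admins" groups rather than individual users.
cat known_tokens.csv
```

Alice here authenticates into both the `devs` and `admins` groups, while Bob is only in `devs`, so authorization policy can be written once per group.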
N
We made some improvements to our sort of first version of authorization, which we call ABAC. That is still useful for bootstrapping, and we will continue to support it for a while, but eventually we want to replace it with a new thing; I'll talk about that in a second. And there are beginning steps to upstream a new authorization model; I'll talk about that for 1.3. Webhook authorization was added: this is a model where the API server can call out to some REST service that you implement, which does any kind of authorization check you want.
N
That interface is now formalized, and we're hoping that new authorization contributions will, for the most part, come through this webhook model.
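The webhook model sends each authorization check to the external REST service as a JSON-encoded SubjectAccessReview, and the service replies allowed or denied. A rough sketch of the kind of request body involved; the field values and the endpoint are illustrative, and the exact API version depends on the release:

```shell
#!/bin/sh
# Sketch of the JSON an authorization webhook receives from the API server.
# The user, groups, resource, and endpoint below are made up.
cat > review.json <<'EOF'
{
  "apiVersion": "authorization.k8s.io/v1beta1",
  "kind": "SubjectAccessReview",
  "spec": {
    "user": "alice",
    "group": ["devs"],
    "resourceAttributes": {
      "namespace": "default",
      "verb": "get",
      "resource": "pods"
    }
  }
}
EOF
# The API server POSTs this to the configured webhook, roughly:
echo "curl -s -X POST -d @review.json https://authz.example.com/authorize"
```

The webhook answers with the same object plus a status indicating whether the request is allowed, which is what lets any in-house policy engine plug in behind the API server.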
N
We also have work in progress on something called PodSecurityPolicy. This is a way to have authorization; when I say just "authorization" alone, I mean things like who can make objects of type Pod or type Secret, or read them. That's a sort of very generic kind of authorization; pods are very specific, and so we have very specific policies around pods, like what user you can run as...
N
...what SELinux types you need to have, that type of thing. We have an object called PodSecurityPolicy that can hold those very pod-specific authorization rules. That's not finished. So, on to my picks: this is not final, but these are the things I'm hoping we see in 1.3, and we'll be open to community suggestions on what else we want to take. My picks are: completing this; OpenID Connect, which needs a refresh flow and client-side support...
N
...proxy authentication, for people that want to put Apache auth, as the authenticator, in a proxy in front of the API server; that would be great for on-prem people that want to integrate with the many types of Apache auth modules. I'd like to upstream OpenShift's authorization and make it, not required, but the default when you set up a Kubernetes cluster on-prem or on one of many of the cloud providers. And I want to get PodSecurityPolicy wired up and working.
N
I'll make a note to try to keep those labels more up-to-date. All right, thank you. Whoever created it: like I said, an "auth" label would be fine too, if you prefer, and we can make "security" just be about vulnerabilities, if you want.
N
I was just typing a response in the group chat. Yes, I think we'll probably keep ABAC indefinitely, because it's useful for bootstrapping and for simple use cases, but in the future, once OpenShift's authz is upstreamed, we will encourage people who do not have another option to use that.
A
Cool, all right, I'll jump to my announcements very quickly. I get all sorts of asks for use cases, people who might want to speak, etc. So, if anyone is in the Boston or New England area and has a good use case for Kubernetes, reach out to me, please, because ContainerDays Boston is coming up and they're looking for speakers. And then the Open Container Day CFP is open until March 21st; this is happening at OSCON. The links are in the notes. Thank you all.
A
Thank you to Ilya and Eric and T.J. and David, and everyone who spoke; because I think I missed someone... yes, Alexis, but he's not here. So have a great week, and we will see you in two weeks, on the 17th.