From YouTube: Kubernetes Community Meeting 20170608
Description
We have PUBLIC and RECORDED weekly video meetings every Thursday at 10am US Pacific Time.
https://docs.google.com/document/d/1VQDIAB0OqiSjIHI8AWMvSdceWhnz56jNpZrLs6o7NJY
Demo: Kubo [Kubernetes on BOSH]; Release 1.7 update; SIG Cluster Ops, SIG Windows, SIG On Premise; Leadership summit synopsis
B
Great, thank you very much for joining. This is the Kubernetes community meeting. As always, these are recorded and available afterwards on the YouTube channel. My name is Marco Ceppi; I work on a lot of Kubernetes efforts, both on Ubuntu with Canonical and on a number of other projects where we use Kubernetes for quite a lot of things.
B
We've got a full schedule today, so we're going to start with a demonstration by Eric Johnson of Kubo, which is Kubernetes on BOSH. We'll have release updates for 1.7 and 1.6.5, go through three SIG updates (SIG Cluster Ops, SIG Windows, and SIG On Premise), and then, finally, a leadership summit synopsis.
D
All right, so for announcements, I've got the first one up here, which is SIG Architecture, and there'll be a little bit more about this in a minute. This was one of the outcomes from the community's Leadership Summit that happened recently. Essentially, this is the SIG that I think people have assumed exists, or should exist, and there was a pretty united front on establishing it, especially after Brian Grant gave a great presentation on it. You seem to be breaking up a bit there.
D
Yeah, SIG Architecture, yay. I'm in the process of setting it up right now, which is a relatively lengthy process to get all those things in place, but I'm meeting with Brian Grant next week, I believe, to get through the machinery of that setup, get a charter together, and everything happening. And I guess I have the next announcement as well; sorry, I'll go ahead and answer for you guys: your meetings are now scheduled for Wednesdays at 9:00 a.m.
B
You have ten minutes; the floor is yours.
C
Okay, great! We'll go through this pretty fast. We're representing a larger team here between Pivotal and Google that has been working on this project for about six months now, and what we wanted to do is introduce it to you. We figured we'd start off with the context of something that everyone's familiar with, which would be Kubernetes. Hopefully everybody's seen the whack-a-mole demo; it's really awesome. Whoever came up with it should get promoted. I think it's great.
C
Cool, so we're going to start off with, say, three replicas: you've got a master and some workers running our guestbook app, and Sam Ramji comes along with his mallet and whacks one of the pods. What does Kubernetes do? It comes along and it fixes that, right? Everybody knows this, so we're back up to three replicas. In the real world, Kubernetes runs on top of virtual machines. So what would happen if you're having a bad day and one of those machines dies?
C
So in this case you're going to drop back down to two replicas, and what does Kubernetes do in that situation? Well, we know it's going to reschedule a pod, and everything's back up and running again. But what actually happens with that VM? How do we get that VM restored? I'm sure there are plenty of ways of going about doing this, and what we're going to show you today is how Kubo takes care of that over time.
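The pod-rescheduling behavior just described is the classic Kubernetes control loop: compare the desired replica count to what is actually running and schedule the difference. A toy sketch of that idea (illustrative only, not the actual Kubernetes controller code; all names here are made up):

```python
# Toy sketch of a replica-reconciliation loop: Kubernetes-style controllers
# repeatedly compare desired state to observed state and act on the gap.
# Illustrative only; this is not real Kubernetes code.

def reconcile(desired_replicas, running_pods, schedule_pod):
    """Schedule new pods until the observed count matches the desired count."""
    missing = desired_replicas - len(running_pods)
    for _ in range(missing):
        running_pods.append(schedule_pod())
    return running_pods

# A node dies and takes one pod with it, dropping us from 3 replicas to 2...
pods = ["pod-a", "pod-b"]
# ...and the next reconcile pass schedules a replacement.
pods = reconcile(3, pods, schedule_pod=lambda: "pod-new")
print(pods)  # ['pod-a', 'pod-b', 'pod-new']
```

Note that this loop only restores the pods; as the talk goes on to explain, restoring the underlying VM is a separate problem, which is where Kubo comes in.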
C
What we want to do is get back to this condition. Maybe another bad-day example: we've got another Heartbleed that comes along, and all of those VMs that you're running on top of, you want to get those things patched. So how do you do that in a zero-downtime way? Because otherwise, obviously, you're a sad panda.
C
Maybe you've got a pain-in-the-ass day: you want to upgrade Kubernetes from version 1.6 to version 1.7 when that comes out, and how do you do that, of course, with zero downtime? So this is where Kubo comes in. Note here that I'm showing a condition where the pods are rebalanced; that's not something that Kubo does. Kubo is going to repair the VMs, and then Kubernetes over time would probably rebalance your guestbook app.
C
So in this particular case we're going to talk about how we used BOSH for Kubo. BOSH, as I said, is open source; it handles release engineering, deployments, and lifecycle management, and it's modeled after Borg. I think everybody has probably heard of the white paper, and we of course all know that Kubernetes, in fact, is modeled after Borg as well. One of the things that I would say is different about these two approaches: Kubernetes, I think, was originally, and I saw that Brendan was on...
C
So if we have a machine failure, it's a human being that goes along and replaces that machine, and of course Borg and Kubernetes are resilient to that type of thing; that's obviously very well represented in Kubernetes. In the cloud world, where you're running on top of a bunch of virtual machines, though, Kubernetes doesn't, at least today, take care of the underlying infrastructure. So this is where we're leveraging BOSH. BOSH kind of does both to a certain extent, in that it will actually maintain the health of the applications that it's managing as well as the infrastructure.
C
So this is what Kubo is: Kubo is BOSH plus Kubernetes. It's an open source project that, like I said, was worked on between Pivotal and Google. So what exactly is it? Fundamentally, it's a BOSH release. As I said, BOSH can be used to deploy lots of different software, including Cloud Foundry, and what we've done is work to provide a manifest file for deploying Kubernetes the hard way via BOSH.
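For readers unfamiliar with BOSH, a deployment manifest pairs releases with instance groups and an update policy. The sketch below is purely illustrative: the field values and job names are assumptions for the sake of the example, not the actual Kubo manifest.

```yaml
# Hypothetical BOSH-style deployment manifest sketch (illustrative only;
# not the real Kubo manifest).
name: kubo
releases:
  - name: kubo
    version: latest
instance_groups:
  - name: master
    instances: 2
    jobs:
      - name: kube-apiserver
        release: kubo
      - name: kube-scheduler
        release: kubo
      - name: kube-controller-manager
        release: kubo
  - name: worker
    instances: 3
    jobs:
      - name: kubelet
        release: kubo
update:
  canaries: 1
  max_in_flight: 1   # roll one VM at a time, supporting zero-downtime updates
```

The point of the manifest approach is that the director can then converge the deployment toward this description, much as Kubernetes converges toward a pod spec.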
C
So this is not a fork of Kubernetes, and it's not any sort of change to the Kubernetes components; it's the standard, out-of-the-box Kubernetes experience: etcd clusters running on VMs, the whole bit. There are some Terraform scripts today that will spin up some of the infrastructure, and we'll talk about that in a second, and then there's a custom BOSH director. The BOSH director is the actual machine that monitors the health of the infrastructure machines, and we use kind of a custom version of that.
C
That's got a couple of utilities baked into it for things like certificate generation and credential management. Over time we'd like to get to a vanilla BOSH, and as BOSH gains additional capabilities, we'll probably achieve that. So what problems does it solve? Obviously, I've touched on a couple of these already. Day-one activities: how do I install Kubernetes? This would be yet another way of installing Kubernetes, but where it starts to shine is with the day-two stuff: how do you keep this thing operationally up and running?
C
How do you do rolling updates of Kubernetes versions, support HA configurations as well as multiple zones? And then also, how can you scale Kubernetes? If you want to extend the number of workers, or even the number of masters, Kubo can do that for you. So, demo time: I'll pass it over to Megan to do the fun stuff. Okay.
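The zero-downtime update idea mentioned above (upgrade one VM at a time and only proceed while each upgraded VM reports healthy) can be sketched like this. This is a simplified model of BOSH-style canaried rollouts, not BOSH's actual implementation; the names are illustrative:

```python
# Simplified sketch of a rolling update: upgrade one node at a time and
# only continue while each upgraded node reports healthy, so the cluster
# keeps serving throughout. Illustrative only.

def rolling_update(nodes, new_version, health_check):
    """Upgrade nodes one at a time; abort if an upgraded node is unhealthy."""
    for node in nodes:
        node["version"] = new_version
        if not health_check(node):
            return False  # stop the rollout rather than take down more nodes
    return True

nodes = [{"name": f"worker-{i}", "version": "1.6"} for i in range(3)]
ok = rolling_update(nodes, "1.7", health_check=lambda n: True)
print(ok, [n["version"] for n in nodes])  # True ['1.7', '1.7', '1.7']
```

The key design point is serializing the upgrade: at any moment, at most one node is out of service, so the remaining replicas keep handling traffic.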
F
Yeah, that was just a quick command to delete a VM. So over here I've already installed a Kubernetes cluster using Kubo. The cluster has two master nodes with a load balancer sitting on top of them, and then we have three worker nodes with some pods running on them. We also have a cluster of etcd nodes that aren't shown in the picture, to avoid cluttering it.
C
Just while you're on the slide: the gray represents the cloud infrastructure. So this is a load balancer that we configured with Terraform to sit in front of the master nodes, and we selected Terraform specifically because we have a goal to provide multi-IaaS support; BOSH already supports multiple cloud providers.
C
What we want to do is obviously make sure that whatever solution we're providing at the infrastructure layer can support multiple cloud providers, and so we went with Terraform scripts to provision, say, a load balancer that we'd sit over multiple masters. For the workers, where we want to get to is using things like the add-on capabilities that Kubernetes already has for creating load balancers. So we're not doing everything with Terraform, but that's kind of where things stand today. All right, yeah.
F
So once the node is deleted, our load balancer will notice that it's not there, so it'll start sending all API traffic to the remaining master node; we should still be able to access our API, and this connection shouldn't fail. In the middle we'll see that BOSH will say it can't access this VM anymore, so it'll flag it as unresponsive; you'll see the node's state go to "unresponsive agent", which means that BOSH can't communicate with that VM anymore.
F
Now it's fully deleted, and we can still access the API. Once BOSH notices it has disappeared from here, BOSH will ask Google to create a new VM, and then it will start installing its agent back on top of it. The next thing that happens is that BOSH will start running all of the Kubernetes jobs, like the scheduler, the API server, and the controller manager, on the VM, and we're on the way back to the same state that we were in before.
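The repair flow just demonstrated (the director sees an unresponsive agent, asks the IaaS for a replacement VM, reinstalls the agent, and restarts the jobs) can be sketched as a loop like this. This is a toy model of BOSH's resurrection behavior; the function and field names are assumptions, not the real BOSH API:

```python
# Toy model of a BOSH-style "resurrector": for each VM whose agent has
# stopped responding, recreate the VM and restart its jobs. Names are
# illustrative only; this is not the actual BOSH implementation.

def resurrect(vms, create_vm):
    """Replace every VM whose agent is unresponsive, restoring its jobs."""
    for i, vm in enumerate(vms):
        if not vm["responsive"]:
            new_vm = create_vm()            # ask the IaaS for a fresh VM
            new_vm["jobs"] = vm["jobs"]     # reinstall the same jobs on it
            new_vm["responsive"] = True     # the agent comes back up
            vms[i] = new_vm
    return vms

vms = [
    {"name": "master-0", "jobs": ["apiserver", "scheduler"], "responsive": True},
    {"name": "master-1", "jobs": ["apiserver", "scheduler"], "responsive": False},
]
vms = resurrect(vms, create_vm=lambda: {"name": "master-1-new"})
print([v["name"] for v in vms])  # ['master-0', 'master-1-new']
```

Together with the Kubernetes pod-rescheduling loop, this gives the two layers of self-healing the talk describes: BOSH repairs VMs, Kubernetes repairs pods.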
C
I think in the interest of time we'll keep moving on; just jump in at any point. So, current status: just yesterday we released version 0.0.4. This is the first release that represents a decoupling from Cloud Foundry. If anybody had heard of Kubo back when it started, we had this thing kind of, yeah...
C
The original release of Kubo, we did have this thing kind of married alongside Cloud Foundry, but over time we're looking at decoupling that, so this represents the first release where we're removing the Cloud Foundry dependencies. The other thing to mention is that this is an open source project under Apache License v2. It is currently being rehomed from Pivotal's open source GitHub repositories over to the Cloud Foundry Foundation.
C
So we're moving it to the Cloud Foundry Foundation, not too dissimilar to the Cloud Native Computing Foundation: for Pivotal and Cloud Foundry there's a separate foundation there, and what we're doing is moving the project over to them. The voting should be closed probably by the middle of next week sometime, in which case we'll have the project fully rehomed as a Cloud Foundry Foundation-owned project.
C
So for the roadmap there are a couple of things. Obviously, when we got started, we were trying to do a proof of concept, a kind of pre-alpha thing with Cloud Foundry. We did do some things that probably weren't exactly in line with Kubernetes, but we've since started to move more in that direction. So, things like I mentioned, using the add-ons for the load balancer, and we want to be able to support persistent volume claims.
C
Some of the networking stuff was maybe a little too married to Cloud Foundry, so we're fixing a lot of that as we move towards this pure open-source standalone BOSH release. I've got a link right there to the current GitHub repository where the project roadmap is tracked, and that stuff will get moved over to the Cloud Foundry repos. And then, of course, multi-IaaS support will be coming. That is it; here are a couple of links for anyone who's interested in checking out the project further.
G
Okay, thanks. So for 1.7 we are still in code freeze. The top issue today: the master branch had actually been blocked for a couple of weeks, which is both a 1.7 issue and a submit-queue issue. Initially we lost confidence in the submit-queue infrastructure, then found the problems, and we have been working through the backlog of PRs for 1.7. The submit queue still looks slow, but we have a queue that has been merging PRs again since three days ago.
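For context on the submit queue being discussed: it gates merges on review approval plus passing tests, draining eligible PRs in order so each lands against a tested head. A minimal sketch of that policy (illustrative only; the real Kubernetes submit queue has many more gates):

```python
# Minimal sketch of a submit-queue policy: only PRs that are approved and
# whose tests pass get merged, processed in order. Illustrative only; the
# actual Kubernetes submit queue applies many additional checks.

def drain_queue(queue, tests_pass):
    """Merge eligible PRs in order; return (merged, rejected) id lists."""
    merged, rejected = [], []
    for pr in queue:
        if pr["approved"] and tests_pass(pr):
            merged.append(pr["id"])
        else:
            rejected.append(pr["id"])
    return merged, rejected

queue = [
    {"id": 101, "approved": True},
    {"id": 102, "approved": False},  # missing approval: stays out of the merge
    {"id": 103, "approved": True},
]
merged, rejected = drain_queue(queue, tests_pass=lambda pr: True)
print(merged, rejected)  # [101, 103] [102]
```

This also illustrates why flaky tests back the queue up: when `tests_pass` is unreliable, otherwise-eligible PRs pile up behind it.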
G
About 50 percent of the queue has now merged, which is a huge improvement. Regarding the existing open top issue, the release team is on this one, and we also have engineers from the relevant teams, like API machinery and other teams, even including networking engineers. We believe this is not a problem owned by any one team; it is a cross-cutting issue, so if you have some ideas on it, please help us address it.
G
We fast-forward merge to the 1.7 release branch once per day, and then we rely on that to create the release candidates, so please help us monitor the test grid and make sure that it is green. We also have nightly builds, an unstable one and a stable/beta one. We only have one issue across all the critical builds: only the serial build has some issues, which are known issues limited to kubeadm; sometimes it is flaky.
G
If the issue is not blocking, we will look at it later; most of the critical builds are green. The next topic: flaky problems are not our main focus, the release-blocking issues are, but the two are related. So last week, for example, to resolve the flakes quickly in the critical builds, there was a master-upgrade verification problem and a slow federation build; I linked to those issues, which are still open, with the details. Then, coming to the top one on the flakiness list, it is a serial test issue.
G
We have had the flaky tests in question again, a lot of flakiness, and the reason it gets introduced is the test infrastructure. We have resource-leak issues and quota issues, and in a sense our test infrastructure needs a lot of work, so we are trying to make sure the team can step back for this whole release cycle and also step back and solve those test-infrastructure problems. The last item: yesterday we held our burndown meeting.
G
We made all the decisions for the exception requests: we had five requests in total, two are approved, one came in late and is queued, and the rest are still open for a final decision based on testing. The last thing I want to mention is that we still have 156 open issues in the milestone, most of them with the big groups.
G
So
we
have
that
a
while
back
I
send
the
mail,
so
you
have
the
Patent
Office
June
9,
which
is
tomorrow
and
after
tomorrow.
So
we
expect
here
the
same
rules
attached
that
issue,
open
issue,
Mike
Ford,
1.7
milestone
and
the
next
decision.
It
is
approve
or
milestone
or
not
not,
and
if
it's
not
then
makes
monday
or
sometimes
we
are
going
to
be
in
all
those
collisions
angle
amount,
so
they're
also
having
actor
last
time.
B
Great, thank you for that update. Let me grab my browser again. Next up on the itinerary is the 1.6.5 release update from Dawn.
D
So for me personally, I'm now deeply involved in contributor experience, and Rob is doing a lot of work trying to pound the pavement to get cluster operators into the room to share their stories. Essentially, the bottom line is that we're just really working hard to reinvigorate the cluster-operator community and also to provide relevant, useful information back to the project from that community. And if any of you want to show up, that would be supremely awesome.
H
Can you guys hear me okay? Yes, excellent. This is Michael; I'm going to also give a quick update on SIG Windows. About a month ago we stood the team up again, after a hiatus of about a month where we didn't really have many resources on it, and we're working hard on finalizing the plan that would take us...
H
...to beta support for Windows Server containers. Our goal is to have the plan and milestones by the end of this month, and currently we're working on a lot of the networking-related things around getting NodePort to work, load balancing, and going through the OVN and OVS infrastructure. That's it from us. Michael?
H
We haven't had that much luck getting other folks to contribute to this effort. Cloudbase has given us engineers who are doing some of the networking work around OVN and OVS, but they're doing that work across the board, not just for Kubernetes; on the Kubernetes-only part of the project it's just a few of us, which is all we've been able to staff. Okay.
B
The final SIG update is SIG On Premise, which I'll go ahead and give. SIG On Premise has been focusing on understanding the landscape of on-premise solutions and the problem space that's there. We're working towards building a very clear mission statement of what the SIG wishes to achieve; it's still a relatively new SIG.
B
Absolutely, so we do meet somewhat regularly, every other week, and for anyone who's doing any kind of operations or management of Kubernetes on premise, we look to seek out your experience: how you overcame problems with infrastructure such as load balancers, storage, and networking, and then helping us document those so that others can benefit from overcoming those hurdles. In a cloud and IaaS world, that's not necessarily as prevalent.
D
Essentially, the format was a series of presentations assessing the current landscape of the project and some of its challenges, and after that were two blocks of two-track unconference-style meetings, basically with loose agendas that had been developed beforehand. In the notes for the community meeting there is a link to the folder that has all the notes that were taken at the sessions, and also at the summary session, which was after all the unconference-style meetings.
D
There were some really interesting key takeaways, and none of these will probably come as a shock or surprise to anybody. One of the big ones is that Kubernetes is a victim of its own success in many ways, because it is growing so fast and there are so many things happening all the time. You also have this really rich community of both individual contributors on their own time and vendors, a really interesting matrix of people working on the project.
D
What are we putting in releases? How much feature richness versus work on paying down technical debt is required? And I think that the appetite of the leadership present was really: let's take a good, long, hard look at what it would take to make Kubernetes as stable as possible. That doesn't mean throwing out features entirely or anything like that, but I think people were really like, wow, this is...
D
This
is
now
a
thing
and
I
think
somebody
said
I
wish
I
knew
who
it
was
remember
the
list,
but
basically
we're
not
wondering
if
kubernetes
a
thing,
it's
a
thing
it's
out
there
and
it
is,
it
is
really
making
a
huge
impact
on
the
world.
So
now
our
focus
has
shifted
from
you
know,
build
build
build
to
now.
We
have
to
really
think
about
build
and
maintain
so
I
thought.
That
was
a
really
interesting
aspect
of
this,
and
that
was
played
out
a
lot
of
discussions,
so
state
processes
were
also
covered
in.
D
There were some great ideas about how to improve engagement and how to make SIGs more effective. A really great thing that came up was for bi-weekly SIGs to essentially take the off week and make it office hours for the SIG leads, to answer questions or provide guidance. Or, if you're a SIG like, say, SIG Node, and you've got a lot of other dependent things coming that affect your SIG, or that you also impact, then if you have office hours, other people can come and discuss those cross-dependencies at that time.
D
So another thing was adding office hours just generally: one meeting every couple of weeks where SIG leads can just hang out and answer questions, or talk across SIG leads about those challenges and how to improve engagement. It's all really good stuff. I'm not going to go through all of these, because there's a lot, but I mentioned earlier that SIG Architecture was a ratified decision, and I'm in the process of doing all the machinery.
D
Behind that, I'm meeting with Brian Grant to really nail down the mission. On that note, Ken and some others have some great ideas about how to increase incentives for people who want to contribute, both companies and individuals, because there's a lot of increasing maintenance burden and it can sometimes be a thankless job. Any of the people who've been on the release teams know how much goes into this; there's probably a support group somewhere for people who've done the release.
D
Lastly, we're looking at the extraction of kubectl as a way of addressing the multi-repo issues, because pretty much everything you're going to run into with kubectl also feels like it's going to come up in other repositories as we break these things down. So it's a learning moment. You can read the notes there, but essentially this will be our test to see: how does this actually work in a multi-repo world? Does the main kubernetes/kubernetes repo serve as an umbrella?
D
You know, how do we do that? We're also going to be looking at Cloud Foundry's multi-repo model, to see what successes and challenges they've had, and leverage other community projects where we can learn from them as we move into a more distributed type of coding environment. So that's it; any questions?
E
Jason, that was an awesome summary. I know I added those bullet points on you at the last minute, so thanks.
A
I'll jump in with the last of the two announcements, because I don't remember hearing it: our meetings are now bi-weekly, Wednesdays at 9:00 a.m. Pacific time, which is noted here. And then a reminder to everyone who wants to comment on any of the governance work that the bootstrap committee has done: there's another week before we close up those documents. We'll come back with a second revision and firmer timelines after that. The index and the presentation were added to the announcements here in the community agenda, so take a look.
E
On the topic of the steering committee charter docs: you can see Caleb and myself and a couple of others have dropped LGTMs there. I also wanted to take this opportunity to thank Sarah and Cameron for putting together the Leadership Summit and having it go so smoothly. I thought it was really great to see everybody there. I don't necessarily want this to be a self-congratulatory thing for an exclusive club of people who got to meet face to face.
E
While some of you may feel left out, I hope we did a really good job of making as much information as possible available to the entire community. I think Jake did a fantastic job of taking those notes, as did all the other note-takers who volunteered. There's a Leadership Summit Slack channel if you want to go see what little back-channel chatter we had. So we tried to make it as focused, but as open and transparent, as possible, and I thought it was awesome.