From YouTube: Kubernetes Community Meeting 20190502
Description
We have a PUBLIC and RECORDED weekly meeting every Thursday at 10am PT.
See: https://github.com/kubernetes/community/blob/master/events/community-meeting.md for more details.
A: Hello and welcome to the Kubernetes community meeting. Today is May the 2nd, and that is a Thursday. Fun tip for the week: it is 18 days until KubeCon EU in Barcelona, so if you're going to be making it there, we're very excited to see you. Today we have a packed agenda, so I'll go through that in just a moment. I am Lachlan Evensen; I represent SIG PM in the Kubernetes community, and I'm also on the 1.15 release team, serving as a lead shadow.
So that's me. Welcome again, and it's great to have you here. A few orders of business first: this meeting is being recorded and streamed live to YouTube, and we're all under the Kubernetes code of conduct, which says treat each other with respect, so be mindful of your conversations as they're being streamed live to the internet. Also, if you could mute when you're not talking, that would be greatly appreciated, so please watch the mute button and give the folks who are talking your full attention.
I have thrown the link to the agenda in the chat here; if you'd like to pull that up, feel free to take a look. I'll quickly zoom through the agenda, we'll take a look at what's on there, and then we'll get going. So we have a demo first, and then we get the release updates.
B: So this is K8dash. Just making sure my screen is shared... yep, everyone can see? Okay, good. The first thing I want to show here is that K8dash has native, built-in integration with OIDC, so if you want to use OIDC to log in to your cluster, there are no proxies or authenticating proxies or anything you have to put in front of it.
You just configure it with a couple of environment variables and it works out of the box. What you can see here is the dashboard's main screen. We've got a couple of charts here that show you some basic information about the status of your cluster: how much CPU usage you're seeing and how much RAM usage is going on, both from a node perspective and from a pod perspective, where it shows you actual use versus your reservation.
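As a sketch of that kind of setup (the variable names below follow K8dash's documented pattern, but treat them as an assumption rather than a verified recipe; the issuer URL and credentials are placeholders):

```shell
# Hypothetical OIDC settings passed to the dashboard as environment variables
export OIDC_URL="https://accounts.example.com"   # OIDC issuer (placeholder)
export OIDC_CLIENT_ID="k8dash-demo"              # placeholder client ID
export OIDC_SECRET="replace-me"                  # placeholder client secret
```

With those set, the dashboard redirects to the identity provider for login, which is what removes the need for an authenticating proxy in front of it.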
So you can see here, I've got this set up to be incorrectly configured. You can see at a glance that I'm using way more RAM than I have reserved, which could potentially be leading to some problems in my cluster. And then down here you can see your events — everything that's happened in your cluster, coming from the events API — so you can see at a glance what has happened recently. You've got a whole slew of things you can see here.
You've got nodes, namespaces, workloads. I'll hop over to a namespace really quick — and I'm going to move this Zoom window that's right over the top of the one I want to click — and you can see my namespaces. I go into one of these, and here again you get the same charts, but now they're in the context of this namespace. So again, I can see what my pod CPU usage and my RAM usage are in this namespace.
And again, here you can see some basic information about the namespace, as well as all of the pods that are running in that namespace, and you can do your sorting by CPU used versus requested. You can see that I've got something here that's clearly using way more CPU than it was allocated, so that's definitely something I would want to look into if I were using this to manage my cluster.
In addition, we're going to hop over here to workloads, where you can see all the various deployments and replica sets and whatnot that I have going, and again you see the same thing: some charts telling you the health of things. I can type here to filter — everything you'd expect to see in a dashboard, but again in real time. We actually use this; this is one of the primary useful things for us.
What we'll frequently do, especially in our staging environment, is deploy the world, and you can come to this screen and watch things update in real time and get a real sense for how that deployment is rolling out: what's stuck, what's still waiting. I can then come in here, and for this particular deployment I can see all the information about it.
I can see the container and some information about it, I can see the replica set that it's currently under, and I can see the one replica that I currently have running. Any events for this pod would show up down here. I can also come in here and easily scale this. So if I said I wanted to have, say, three of these, I can just scale that, and again you'll see the real-time status of what's going on right here — you see those pods roll out immediately.
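For comparison, the same scale operation from the command line would look like this (the deployment name is a placeholder, and these commands need a live cluster):

```shell
# Scale a deployment to three replicas and watch the rollout progress
kubectl scale deployment/my-app --replicas=3
kubectl rollout status deployment/my-app
```

The dashboard does the equivalent through the Kubernetes scaling API, just without the terminal round-trip.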
They show up here, and you can see the events down here showing that it's rolled out. In addition — let me scale that back really quick, sorry — you can also edit it here. You can see the actual YAML for it, and you can make changes and save them. In addition, it actually has context-aware documentation, so it knows — I can come in here and see what the API fields actually are.
In addition, we have management of all the things you'd expect to see. So that was the workload view, but here's a pod view — same thing. I can see all the pods, and again I can come in here and filter, and you'll see that these charts update in real time as I'm typing.
Also, this is where you can just come in and do an apply — it's basically a kubectl apply here — so you can paste your YAML straight in here and apply that, and make changes to your cluster that way as well.
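A minimal sketch of the equivalent from a shell, assuming a live cluster (the ConfigMap here is just a placeholder object):

```shell
# Pasting YAML into the dashboard's apply screen behaves like piping a
# manifest into kubectl apply:
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config
data:
  greeting: hello
EOF
```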
So that's the whirlwind tour. I'm assuming everyone has sort of seen a dashboard before, so I won't belabor too many points. I will also show here that it's fully responsive.
For example, I dealt with a poorly behaving pod while I was away from my computer — out at the store — and I could manage all of it remotely. That's actually been really nice and has come in handy a couple of times. Other than that, I'll go through my presentation really quick — looks like I'm okay on time; I've only got a very small handful of slides here. So, as you saw, K8dash is full cluster management. It's real time — you don't have to refresh your page; everything shows you the current status of things — and it makes it easy to quickly visualize.
B
You
know
it's
got
your
crud,
your
scalene
API
and
the
OID
C
integration.
So
a
quick
note
about
adoption
since
I
released
this
about
a
month
ago
and
since
we're
all
nerds
here,
I
thought
I'd
put
that
in
the
numbers
in
hex,
just
kidding,
I'll
put
them
in
something
we
can
all
do
off
hand
off
the
top
of
our
heads,
so
I
released
this
about
a
month
and
a
half
ago,
maybe
two
months
now,
I
posted
something
on
reddit
about
it.
I
got
over
100
up
votes
on
that
I've
had
over
225
github
stars.
B
Since
then,
getting
one
to
two
stars
a
day
right
now,
I
actually
have
a
video
on
YouTube
that
is
linked
to
from
the
github
page.
That
started
gives
basically
the
same
demo.
I
just
gave
that's
at
about
3000
views
and
and
docker
docker
hub
poles
I've
had
about
40
little
over
45,000
poles
in
the
last
two
months
here.
So
adoption
so
far
has
been
better
than
I
expected
for
the
first
like
I,
said
less
than
two
months,
but
it's
really
encouraging
to
see
that
you
know,
especially
the
reddit
thing.
Something I found valuable when I built this: I sandboxed myself — or time-boxed myself — to not spending more than a month building it before I put it out, to see if there would actually be any interest in it. So I built it in about three weeks of nights and weekends before I put something out there, and it's been really encouraging, so I've definitely continued development on it.
So what works for us may not work for other teams; anything that would keep someone from using K8dash, I would love to hear about, so that we can start working on getting those things prioritized and in there. And lastly, being new to Kubernetes, I'm not sure of the best way to promote it. Like I said, I did a Reddit post and it was met very well, but I don't really know where to go from there. So, other than that...
B: That's a great question. The primary one — and maybe the driving motivator for me in making it — was that real-time thing. Like I said, we frequently do deployments, sort of deploy the world, in our test environment, and having to sit there and refresh the page to see "is it done yet? is it done yet? is it done yet?"...
B
Just
seeing
everything
in
real
time
is
a
big
advantage
in
addition
to
having
the
the
oid
c,
integration
was
really
nice
for
us
to
not
have
to
mess
with
some
setting
something
up
in
front
of
it.
It
makes
it
really
easy
to
get
get
this
dashboard
up
and
going
and
get
authentication
working
quickly,
and
then
it's
integrated
one
thing
I
didn't
mention
is
we're
using
we're
using
metric
server
for
all
of
the
staff.
The existing dashboard, I believe, is using Heapster. I'd heard that they're working on it and the next release will probably upgrade to metrics-server, but for us, Heapster is deprecated and we didn't really want to install it and move in that direction. So this was nice from that standpoint, but really the real-time aspect is the primary driver and motivation behind building it.
C: Hello everyone. We are now in week four of the Kubernetes release cycle for 1.15. Some things that happened this week: we had our second alpha release on Monday, and then on Tuesday we had our enhancements freeze. Following that, we've gotten a couple of exception requests for enhancements. Going into enhancements freeze we had about 43 tracked enhancements for 1.15; after the freeze, it looks like that number has now dropped down.
A: No questions? Okay, fantastic. Thanks for the update, Claire — very much appreciated. Okay, so now we move on to our KEP of the week, which is actually the revised IPv4/IPv6 dual-stack KEP. Now, this may be funny because I actually wrote the KEP, but I didn't suggest that it be the KEP of the week — it was just coincidental. So let me just go through it; I will share my screen very quickly.
All right, let me know when you can see it. Okay, so it is on the Kubernetes enhancements repo — it's pull request number 808. I'm just going to quickly go through this; let me pop it to something a little more readable. The motivation behind this KEP was to enable dual stack — to allow IPv4 and IPv6 networking on both your pods and services inside Kubernetes — because we have seen a bunch of people in the community asking for it, especially for large clusters or where there's IP address exhaustion.
Can we go forward with IPv6? What this KEP proposes is a way forward to get dual stack. Dual stack means you run both IPv4 and IPv6 side by side: pure IPv6 has been in Kubernetes since about 1.9, but IPv4 and IPv6 together, which we refer to as dual stack, is a common migration path for people who want to run both at the same time and have that kind of experience. This KEP is in provisional status at the moment, it's under review, and it has an exception for 1.15.
It's actually a multi-release KEP, in that we'll be adding functionality over many Kubernetes releases, but for the 1.15 release specifically, the functionality was to get multiple IP addresses on a pod — an IPv4 and an IPv6 — and enable that networking, and to allow nodes to have multiple CIDRs, so an IPv4 and an IPv6 CIDR. So that's the KEP — it's PR 808 — and if I pop back to the agenda just quickly, we do have, in the Kubernetes Slack...
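As an illustration of what that surface looks like: the plural fields below (`status.podIPs`, one IP per family, and `spec.podCIDRs`, one CIDR per family) are the shape this work eventually took in later releases; the pod and node names are placeholders.

```shell
# With dual stack enabled, a pod reports one IP per address family...
kubectl get pod demo -o jsonpath='{.status.podIPs[*].ip}'

# ...and a node carries one pod CIDR per family (IPv4 and IPv6)
kubectl get node worker-1 -o jsonpath='{.spec.podCIDRs[*]}'
```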
D: All right, so let's jump right into it: what did SIG Storage deliver for 1.14? There are a lot of things that SIG Storage works on; if you're interested, we have a spreadsheet that we use to keep track of all the features, bugs, and design docs that we're working on each quarter. These were kind of the highlights from the last release. The big one was local persistent volumes being moved to GA.
Local persistent volumes are a volume plugin just like any other persistent volume plugin that we have; the difference here is that it allows a disk that is local to the node to be used as a persistent volume. For those of you who are familiar with hostPath, you may be wondering: well, what's the difference between hostPath and this?
The benefits, or the use cases, for this feature are cases where you want to optimize for performance over durability — assuming you have a database or something that handles replication at the application layer and you want very high performance, you could use something like a local SSD with this feature. So we're very excited to have that moved to GA. There was also a blog post put out on kubernetes.io explaining this, and some folks are already using it.
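For reference, a minimal local PersistentVolume looks like this — unlike hostPath, it carries node affinity so the scheduler knows which node the disk lives on (the node name and path are placeholders):

```shell
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-demo
spec:
  capacity:
    storage: 100Gi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1          # disk on the node itself
  nodeAffinity:                    # pins the PV to the node with the disk
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values: ["worker-1"]
EOF
```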
Next up is CSI, the Container Storage Interface, which went GA late last year.
It is the southbound API for Kubernetes to integrate with third-party block and file storage vendors, and that API is continuing to expand to make sure that we have feature parity with the in-tree volume plugins. To that end, we have moved raw block volumes and topology to beta, and resizing to alpha. Raw block volumes allow the block device to be exposed inside of a container instead of a mounted filesystem.
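Requesting a raw block device is a one-field change on the claim — `volumeMode: Block` instead of the default `Filesystem`. A sketch (the claim name is a placeholder):

```shell
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: raw-block-claim
spec:
  accessModes: ["ReadWriteOnce"]
  volumeMode: Block        # expose a device node, not a mounted filesystem
  resources:
    requests:
      storage: 10Gi
EOF
```

The consuming pod then lists the claim under `volumeDevices` with a `devicePath`, rather than under `volumeMounts`.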
Resizing: if I want to increase the size of a volume, I should be able to do that through the Kubernetes interface, so that feature went alpha for CSI this last quarter.
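The resize flow is driven by editing the claim's requested size; a sketch (the claim name is a placeholder, and at this point the feature was alpha, behind a feature gate):

```shell
# Ask for a larger volume by patching the PVC's storage request
kubectl patch pvc data-claim \
  -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'
```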
And then, finally, we have a large effort underway to migrate a lot of the in-tree volume plugins to CSI. The motivation here is that we want to remove as much third-party code from the core of Kubernetes as possible. CSI is the extension mechanism for storage, and it is GA. So there are two parts to this.
One is third parties writing CSI drivers, and then the work being done by SIG Storage now is basically creating adapters, or shims, within the core of Kubernetes such that if someone tries to use one of the in-tree volume plugins, instead of having the business logic for that fulfilled internally, it gets proxied out to the CSI driver. That way we have one codebase instead of two, we reduce the security surface within the Kubernetes binaries, and I think it'll be good for the whole community overall.
An alpha implementation of that shim was created last quarter, and the work is continuing this quarter. And then, finally, we have a pluggable end-to-end test framework. A lot of the tests that we had used to be very specific to a particular volume plugin, and what that meant is a lot of volume plugins were left untested. There was a big effort in the SIG to design a generic, pluggable end-to-end test framework such that you could write a single set of tests.
That's being done to make sure that all the different APIs that we expose in-tree have a way to be shimmed out to a CSI driver, and again, once this work is complete, the CSI drivers will fulfill the requests instead of the in-tree plugins. The success metric here is a little bit weird, in that if users don't notice, we have succeeded.
Next up: we're continuing to work on CSI features to make sure that they are at feature parity with what we have in-tree. We're planning to move resizing to beta this quarter — in fact, based on the KEP review, it looks like it may end up remaining in alpha; that's still up in the air. Then we have ephemeral inline volumes.
We want folks to be able to write, using CSI, similar types of volumes to what is already possible today, but the interface is a little bit janky, because you have to create a PVC in order to use it, and doing so for something like a secret volume is a little bit weird. So, specifically for ephemeral volumes, we want to allow those volumes to be defined inline in the pod definition. That functionality is under development as alpha this quarter.
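The shape being discussed lets a pod declare a CSI volume directly, with no PVC; a sketch, with a made-up driver name and attributes:

```shell
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: inline-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: scratch
      mountPath: /data
  volumes:
  - name: scratch
    csi:                             # inline CSI volume: no PVC needed
      driver: example.csi.vendor.io  # placeholder driver name
      volumeAttributes:
        size: 1Gi
EOF
```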
Next up is volume capacity and usage metrics. This API already exists for in-tree volumes; it allows users to get information about how much space they're using on a given volume, and we need to make sure that we wire that through for CSI as well. Next up is snapshots and cloning. The interesting thing to note here is that both snapshots and cloning are only going to be available through CSI. CSI is kind of our future interface that we're building everything into, whereas the previous set of features are more about bringing feature parity to CSI.
These are about extending the Kubernetes storage layer and CSI with more functionality. Snapshots was introduced as alpha a couple of quarters ago, and we're continuing to revise and improve it. One of the big things that we're working on this quarter is how we can introduce pause and resume hooks, so that we could do application-level consistency instead of crash consistency. We're working with folks in SIG Apps on a design for this.
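For reference, the alpha snapshot API at the time looked roughly like this — the group/version and field names changed in later releases, and the class and claim names here are placeholders, so treat it as a sketch:

```shell
cat <<'EOF' | kubectl apply -f -
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshot
metadata:
  name: data-snap
spec:
  snapshotClassName: csi-snapclass   # placeholder snapshot class
  source:
    kind: PersistentVolumeClaim
    name: data-claim                 # the PVC to snapshot
EOF
```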
So stay tuned for that — there's actually a KEP out; if you're interested, take a look. The second part of this is volume consistency groups. Snapshots are currently single-volume snapshots, so you get crash consistency at a single-volume level. We're also beginning to design what a multi-volume consistency group would look like, and what snapshotting that consistency group would look like, so you get multi-volume crash consistency through the Kubernetes API. And next up is volume cloning.
This feature already exists in Kubernetes as beta, but we are running into some issues, especially with CSI, so we're trying to redesign it to make sure it works well with CSI as well. The thing here is that a lot of storage systems have limits on how many volumes can be attached to a given node, and we need to make sure that the scheduler is aware of this information and is able to keep track of it.
Those are the highlights of what we're working on for 1.15; there's a lot more going on. If you're interested, join us in SIG Storage and you can find out more. We also have a number of presentations coming up at KubeCon Barcelona. On the Monday of the KubeCon week there is a Cloud Native Storage Day.
If you are interested in registering, take a look at the KubeCon website — you can find information there. Then we have a number of presentations from folks in SIG Storage about storage, especially on Wednesday, which will be the storage day. And there is a tutorial on Tuesday for folks who are interested in figuring out what stateful workload deployment on Kubernetes looks like; it'll walk you through.
It covers the process of creating StatefulSets and PVCs and all of these things. And then finally, on Thursday, we're going to have an intro and deep dive session for the storage SIG. The intent of this is to give folks who are brand new to the SIG information about what we do and how to get involved, as well as an overview of some of the big projects that we're currently working on.
So if you're going to be in Barcelona and you're interested, please check that out. And then finally, we have our meetings every two weeks. You can go to our community page to find out more about that; feel free to add to our agenda doc — the notes are linked there — and you can always ask us questions on our Slack channel or our mailing list. And that is all I have. Thank you very much.
E: We more or less work according to release cycles, but we also tend to think in terms of actual quarters of the year, which are only an approximate mapping. So we released the 1.14 docs; Jim Angel was the docs lead for 1.14. He did a brilliant job and we are deeply grateful to him. He has also spearheaded some really great additional write-ups of the role, and with every release we're getting better about documenting the responsibilities of the docs lead for the release, so we've made some really major strides.
I'll say a little bit more about some of this in subsequent slides, I think. We've also got a focus group — kind of moving into the status of a working group — looking specifically at security content: how to organize that content better within the docs site, how to develop parts of it, and how to reach out to other SIGs to work on improving the content and help folks develop generally better security practices around managing their clusters.
Some, but not all, of this is Kubernetes-specific. We finished our SIG charter, and a number of projects are interested in subdomain hosting on kubernetes.io. Tined is the only one that comes to mind at the moment, but we've had a few others as well — I think kubectl was maybe looking at this option too. In any case, that was a bit of work.
It was partly just setting up communication appropriately, so that folks who want subdomain hosting have expectations set appropriately, and so that the work of maintaining subdomains doesn't fall entirely on the main SIG.
Our review and planning meeting for the quarter was held at the end of March — I think our first attempt at a remote meeting for this work — and we've made pretty decent progress already, early in Q2, on implementing the stuff that we planned. There are links to the notes in the slide, so I'm not going to go into detail there. We're also looking at getting more dedicated technical writers hired to work on specific, ongoing problem areas in the docs and pick the right solutions.
The new contributor ambassador role is an important part of this, and the path from new contributor to approver is one that we're trying both to document and to make easier, so that folks can see where they can go from a first pull request — correcting a typo, pinging the right tech reviewers for big PRs or new release docs — to actually getting PRs merged. So we're making good progress there, and we've got eager volunteers waiting in the wings, which is pretty exciting. Release 1.15 docs work is coming along.
Barney is coming up to speed — I believe he shadowed Jim for 1.14 — and Jim is still around to consult and mentor, and will continue to hold docs sprints at upcoming conferences. Sadly, we will not be holding a docs sprint at Write the Docs, because it conflicts with KubeCon Barcelona. So for the first time in several years, Write the Docs will not have a Kubernetes presence — I'll be there, but not as a Kubernetes contributor. And as I already mentioned...
Issue triage is finally starting to happen — it's really been pretty much scattershot up until this point — so we're looking forward to the results of that effort. Things you should know about the current state of the SIG: I've already mentioned the security content effort. We have a dedicated Slack channel, and Zach Arnold is leading it, so feel free to join the channel, ask questions, and get involved — that would be great.
We also have some staffing limitations coming up, listed on the slide. I started a new position at Stripe the week that we did the planning meeting — the end of March — and this means that my time is partly taken up, more than before, with getting up to speed at a new job. I'm still supported in my Kubernetes contributions — I'm not stepping down — but I'm not as available as I was. Jared is not available at all through the summer, and Zach is left to pick up all of the slack — pun not really intended.
We are, however, getting shadows up to speed, so that on the weeks when none of us is available — because Zach and I are out, and there are a few of those coming up between now and August — we will still have leadership coverage for the SIG. This may also turn into some succession planning, but we're not there yet. The Kubernetes blog — this is actually pretty big.
The Kubernetes blog is now an official subproject of SIG Docs. Kaitlin Barnard, who's been an amazing contributor to the Kubernetes docs up until this point, is going to be taking point on that project, and we can't wait to get things fully ramped up. We've had a fair number of hiccups with blog contribution reviews, and this shift makes the whole process of getting blogs through, from initial submission to final publication, considerably smoother.
So, how to contribute? We have excellent documentation on contributing to the docs. It continues to be improved, but Misty Linville was the first person to really create better pages for that, and she deserves a shout-out whenever the question of contributing to the Kubernetes docs comes up, because those pages really make it a lot easier to understand what to do. And we have a few projects.
We have different projects set up to reflect some of the planning that we've done in our planning meetings, so you can follow along there. And, as always, we welcome fixes to the docs in pull requests — in spite of our new issue triage, a PR is still the quickest way to get something fixed. We don't need to run everything through the issue workflow.
A: Okay, so that concludes our SIG updates. Moving on to announcements: we have no stated announcements on the agenda, and I will open up for any other business at the end. So we are moving on to the shoutouts for this week. Check out the shoutouts channel in the Kubernetes Slack — if you have anything to say about wonderful work that people in the community have done to help you or others, feel free to go to that channel and give them a shout-out.