From YouTube: Kubernetes Community Meeting 20190214
Description
We have a PUBLIC and RECORDED weekly meeting every Thursday at 6pm UTC.
See https://github.com/kubernetes/community/blob/master/events/community-meeting.md for more information!
A: You mean thumbs up? Awesome. Happy Thursday, Kubernetes family, and welcome to another wonderful community meeting. Before we start, just so everyone knows: this is recorded and streamed, and we have a code of conduct, so don't say anything you don't want on the public record forever. Other housekeeping items: please keep yourself muted if you're not actively presenting, and if you're not actively editing or presenting, please don't idle in the community notes doc, because lately Google Docs has been a little slow and we need someone to be able to take notes.

A: Speaking of which, if someone would be awesome and be a note-taker for this community meeting, that would be great; be sure to put your name in the note-taker field so we know who to send some swag to. As usual, we have a demo to start us off, and with us we have Gwyn, who will be demoing kube-service-exporter. Gwyn, please go ahead.
B: Thanks very much for having me here. Let me go ahead and share my screen; let's see if I'm doing this right. Yes, all right. Can everyone see my screen? Awesome. Okay, this is kube-service-exporter, a tool we developed at GitHub with the aim of doing external load balancing across our Kubernetes clusters. We have an internal infrastructure with multiple Kubernetes clusters, and we developed this as a way to load balance across them. kube-service-exporter does this by exporting key/value pairs to external storage.
B: So this is kube-service-exporter as a Deployment. It pulls a Docker image from Docker Hub and runs on the cluster that you specify. In this case I'm going to change the cluster name to... no, actually, I'm going to leave it at minikube, because what I'm going to walk you through next is how that works on a cluster.
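As a rough sketch, a Deployment for it could look something like this (the image tag and the environment variable name for the cluster ID are illustrative assumptions, not the project's exact configuration):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kube-service-exporter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kube-service-exporter
  template:
    metadata:
      labels:
        app: kube-service-exporter
    spec:
      containers:
        - name: kube-service-exporter
          image: github/kube-service-exporter:latest  # illustrative image tag
          env:
            - name: KSE_CLUSTER_ID  # assumed variable name for the cluster ID
              value: minikube
```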
B: So you can see that kube-service-exporter exported some information about my cluster. First of all, it has the node: there is a kube-service-exporter nodes entry that says which node that is and which internal address to find it at. Then it also records which of my kube-service-exporter deployments currently holds leadership. Now the really cool thing is: if I now deploy a service onto this cluster, kube-service-exporter will be able to pick it up via annotations on that service.
B: Actually, I can show you that file really quickly. This is an example of a service that you would deploy and use with kube-service-exporter. You add annotations to the metadata, and all of these annotations are read by kube-service-exporter and transformed into key/values in your Consul storage, which can live anywhere, as I'm going to show you next.
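To make the annotation-to-key/value transformation concrete, here is a small illustrative sketch in Python. The key layout and the annotation name are assumptions for illustration only, not kube-service-exporter's exact schema:

```python
def export_keys(cluster_id, namespace, service_name, annotations,
                prefix="kube-service-exporter"):
    """Flatten a Service's annotations into Consul-style KV paths.

    Illustrative only: the real tool's key schema may differ.
    """
    base = f"{prefix}/services/{namespace}/{service_name}"
    kv = {f"{base}/cluster_id": cluster_id}
    for name, value in annotations.items():
        # keep only the field after the annotation's domain prefix
        field = name.rsplit("/", 1)[-1]
        kv[f"{base}/{field}"] = value
    return kv

pairs = export_keys(
    "minikube", "default", "kse-example",
    {"kube-service-exporter.github.com/load-balancer-class": "internal"},
)
print(pairs["kube-service-exporter/services/default/kse-example/cluster_id"])
```

The point is simply that each annotated service becomes a small subtree of keys under a per-cluster, per-namespace path, which is what an external load balancer can then watch.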
B
So
now,
if
I
go
and
exact
back
into
my
console
pod
and
look
at
all
the
key
value
stores,
you
can
see
you
make
this
big
there.
We
are.
We
have
the
information
about
our
service.
This
is
cube
service
exporter.
These
are
the
services.
This
is
my
current
name-space
on
my
cluster.
This
KSC
example
is
the
name
of
my
service,
which
you
can
see
right
here.
B: Because I don't want to use up everyone's time, I'm also going to deploy my service... actually, sorry, I think I need to change something really quickly. Right. So the trick here is that, in my file, I actually have to change the value of the cluster ID, because that is what gets exported to the key/value storage, and now that I'm on a different cluster I have to change that value. Did I save? That would be silly. Excellent.
B
Interesting,
how
do
I
change
this.
C
C
C
B
B
B: Okay, so basically, because I'm running out of time and I can't debug this further: what we should be seeing here is my other cluster exporting a key/value into this Consul deployment. What I wanted to show was that other clusters can export to any Consul deployment anywhere, but I have no idea what went wrong with this demo; currently I'm a little bit baffled. However, I will be available for questions, and I'm happy to help anyone set this up if they are interested.
A: Thank you!

D: Happy Valentine's Day; it's every Thursday for all of us single folks. I just want to give a few updates on where we're at with the release, what contributors can expect, and what have you. So, as of this week (and just wave your arms at me if my internet starts going out, and I'll shut off my video), basically we're halfway through the release; we're exactly halfway through the release.
D: This week, on Tuesday, we cut v1.14.0-alpha.3, which went off without a hitch. Burn down begins in two weeks, and 1.14 is being cut officially in five and a half weeks, just under six weeks. As far as new things that have popped up that you can take advantage of: over the last week, Aaron created a shared Google Calendar for 1.14, which has all of these events.
D: It has all of these dates: all of the release team meetings, the SIG Release meetings, and what have you, plus the burn down meetings once those start in two weeks. So if you're interested in seeing how the schedule is going, or in participating in those meetings, for sure get on that calendar and add it to yours; it's really great. Other big things that are going on right now: last week in enhancements land was the end of the extension deadlines. As a lot of you may know, two weeks ago was the KEP deadline.
D: This is kind of the first release where we're requiring KEPs for all new enhancements that make it into the release. So there was a bit of an extension time period as well, for things that were in flight but didn't quite make it just yet; that ended last week. So right now the enhancements team, which is led by Claire, is working through all of that, making sure that all the extensions are in and annotated appropriately.
D: So if you have a KEP that you need an extension for and you haven't gotten one yet, it may be too late, but you should for sure talk to Claire and get that settled. Just FYI: make sure your enhancements are in check, and if you're interested in the logistical aspects of a release, go to github.com/kubernetes/enhancements.
D: This is where the KEPs live. Check out the issues, go to milestones, and look at the 1.14 milestone; there are a number of KEPs in that milestone, and for each of them (oops, I'm going to shut off my video, sorry), yes, for each of the KEPs you can see a number of things, depending on your perspective: whether you're more interested in the enhancement itself, CI, the documentation, the release notes, or what have you. Feel free to go through those and look through the graduation criteria, the testing checklist, and so on. It's a helpful way for everyone to spend their time, making sure that those things are well articulated, and contributing better-defined graduation criteria and checklists if you can. As far as CI signal goes, Maria, the CI signal lead, is working with a few SIGs to clear up a few bugs that have popped up in CI, from, I believe, SIG Network and SIG Cluster Lifecycle, but that's in progress. If you're a part of those SIGs, be sure to help out there if you can. Bug triage and test infrastructure are looking good.
D: We had 113 PRs merged last week, and PRs went up for the 1.14 CI jobs, as well as new tooling to automatically create the CI jobs for future releases, which is great; kudos to the test-infra folks for this release. We did have Prow and GitHub outages last week, but that's life in ops land; sometimes there are outages. Other than that, docs and release notes are working together to think up the formats for the release notes, as well as blog topics and what have you.
D: So that's all moving along swimmingly, and, like I said, there are two more weeks of active development that are going to go into this release, at least, so a lot of what's going to be documented, have release notes, and have blogs about it is still TBD. Everyone be vigilant about CI signal, keep on developing your 1.14 features, and make sure you have your KEP in. That's all for me; I'm the release lead shadow for 1.14. Thank you all, and now back to the community meeting.
F: So, let's start with what we did last cycle. SIG Docs had a really busy Q4 and first part of Q1. We released a mature process for localizing docs, and that toolchain, that contribution pipeline, is pretty solid at this point. We have debuted Chinese and Korean localizations, and I'm also pleased to announce that, as of yesterday, we now have French available on desktop. We're still working on getting it on mobile, but if you go to the desktop site, you can select to begin receiving content in French.
F: We have changed the docs landing page layout from user journeys to cards. The user journeys experiment gave us a lot of valuable data, but it wasn't working out, so we have switched to cards, and we would love to hear feedback on that design; thanks specifically to Andrew Chen. The link for the docs landing page is kubernetes.io/docs/home.
F: We released the 1.13 docs, thanks to Tim Fogarty. As an update, Andrew Chen has stepped down as a co-chair of SIG Docs (he is moving on to bigger and brighter things at Google) and Jared Bhatti is returning as co-chair, so I will open a PR to the community repo to make sure that that information is updated.

F: Upcoming cycles: for Q1 and Q2, our big focus is on mentorship and improving content. We'd like to make sure that the content that we have really shines and is maximally helpful to developers; I'll talk a little bit about that in a moment. We've got a specific repo project up with content available for contributors to focus on, and we're building out a mentorship track right now. At the moment, the release process is the only part of SIG Docs that has a really well-defined mentorship track, and we noticed that, consequently, a lot of people are attracted to it. So we want to provide more, and better-defined, tracks for contributors to come into SIG Docs and meaningfully level up. On the subject of improving content, I am still trying to hire a tech writer to work on the setup section in general, and specifically the "Picking the Right Solution" page, if you know a tech writer who is awesome, has good open-source experience, and is looking.
F: Please contact me on Slack; I'm zacharysarah. I'm happy to receive resumes and recommendations. We have a new French localization. Our SIG charter is in progress: we've gotten good feedback from Aaron Crickenberger and Phil Wittrock, it has been incorporated, and we're now just waiting on final review. The 1.14 release docs are in progress; Jim Angel is doing a bang-up job, and I see him following up with placeholder PRs with feature developers. So thank you in advance, everyone who is opening placeholders and getting your docs in early.
F: Hi Andrew. The more that we look at this, the more it looks like it would just be helpful if we could give you a dedicated resource to give you feedback on that KEP template that you're proposing. So my question is: would that kind of feedback, just having a dedicated resource to give you feedback on the template that you're proposing in the KEP, be helpful?

H: Yes, yeah.
H: Absolutely. I haven't had the time I've wanted to devote to documentation, and one of the things that I thought I'd ask for previously was for someone to be a little more directive and prescriptive about the work that we can do to move that along. That would be helpful for me.
F: There's a link to the issue proposing docs for kube-scheduler. Is there someone here from SIG Scheduling? Okay, so at this point I will just ask: if you are from SIG Scheduling and you are interested, or if you can boost signal on this, that would be really helpful, and we can get some kube-scheduler docs out the door. I have a question for SIG Contributor Experience: when are the SIG update and deep-dive signups opening for Shanghai?
F: Okay, so on the subproject status updates: right now we have one meaningful update, for reference docs. We've added a Google Summer of Code project to clean up reference doc generation; hopefully we get some nibbles on that. Reference doc generation right now is still not as clean as I wish it was, and not as clean as I think it needs to be.
F: So hopefully we are able to get an intern through Google Summer of Code to help clean up the reference doc generation and polish it. If you are interested in SIG Docs, this is how you can contribute: we have a really good getting-started contribution doc at kubernetes.io/docs/contribute. Like I mentioned earlier, we will have more info about specific mentorship tracks later in Q1.
F: If you see something incorrect in the docs, please open a PR rather than opening an issue. PRs get more attention than issues; we have only so much bandwidth, and it's a lot easier to give feedback on and shepherd a fix through the process than it is to try to diagnose and solve an issue. This is where to find us: the best place is on Slack, which is where we're the most active, but here are also links to the repo.
A: I don't have a question, but I do want to thank you and also give you mad props. Using the community meeting to interact with other SIGs is exactly the goal, and that was awesome, so thank you. We might actually add that to the template for SIG updates. Any other questions? All right, next up we have SIG Storage with Saad. Are you here? Yep.
I: So, first off, let's talk about what SIG Storage delivered in the last release, 1.13, in Q4. The big highlight item was moving the Container Storage Interface (CSI) implementation in Kubernetes to GA. This has been a very long project that the Kubernetes storage SIG has been working on for almost a year; if you count the development of the spec, it's more than a year. What is the Container Storage Interface, and why is Kubernetes implementing it? Think of it as an extension mechanism for Kubernetes that allows volume plugins to be added to Kubernetes much more easily than in the past.
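For context, once a CSI driver is installed, it is consumed the same way as any other provisioner: a StorageClass names the driver, and claims reference the class. The driver name and parameter below are illustrative assumptions:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-csi
provisioner: csi.example.com  # name registered by a hypothetical CSI driver
parameters:
  type: ssd                   # driver-specific parameter, illustrative
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: fast-csi
  resources:
    requests:
      storage: 10Gi
```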
I: Previously, in order to add a volume plugin to Kubernetes, you had to actually check code into the core of Kubernetes and get it reviewed by SIG Storage, and it was a pretty painful process; it wasn't good for multiple reasons. Having an actual, real extension mechanism where folks can develop volume plugins completely independently of Kubernetes is super valuable, so we're very excited about finally having that move to GA. So that was the big highlight item. Another item that we worked on was moving topology to GA.
I: Topology is a way for volumes to express the kinds of limitations that they have back to the Kubernetes scheduler, so that any pods using those volumes can be intelligently scheduled. If you can imagine volumes that are only accessible by some subset of nodes (a rack or a zone, or something like that), having some mechanism to generically express that to the scheduler is important, so we had that move to GA as well.
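As an illustration, topology-aware provisioning is expressed through `allowedTopologies` on a StorageClass; the zone label and values below are an example of the 1.13-era form, not taken from the talk:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: zonal-ssd
provisioner: kubernetes.io/gce-pd
volumeBindingMode: WaitForFirstConsumer  # delay binding until a pod is scheduled
allowedTopologies:
  - matchLabelExpressions:
      - key: failure-domain.beta.kubernetes.io/zone
        values:
          - us-central1-a
          - us-central1-b
```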
I: Another feature is moving raw block volume support to beta. If you're familiar with Kubernetes volumes at all, you've probably been using them as a mounted file system, where you have a directory exposed somewhere inside of your container. But there has also been an ask for being able to consume the raw block device that is backing a particular volume, and there are two use cases here. One is databases.
I: Some databases are optimized to work with raw block devices rather than through a file system, because they basically have their own implementation. The second is software-defined storage systems that consume raw block devices and expose a distributed file system of their own. So support for raw block has finally moved to beta.
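For reference, raw block support is requested with `volumeMode: Block` on the claim, and the pod consumes the device through `volumeDevices` rather than `volumeMounts`:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: raw-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  volumeMode: Block            # request a raw device instead of a filesystem
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: raw-consumer
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeDevices:           # devicePath, not a mountPath, for block mode
        - name: data
          devicePath: /dev/xvda
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: raw-pvc
```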
I: The next item is generic mount libraries for iSCSI and Fibre Channel. Ideally, what we want is for folks to be able to write their own CSI drivers, but what we realized is that a lot of storage systems are based off of the SCSI, iSCSI, or Fibre Channel protocols, and instead of having everybody write their own slightly different CSI driver, we decided to offer a set of libraries that folks can use for these specific protocols as a starting point, and then build their own custom functionality on top, like how the volume provisions and things like that. Those libraries now exist and can be used to write your own drivers.
I: We also extended Flex volume support. Flex was an older extension mechanism for volume plugins. We decided not to continue investing in it and to basically freeze development on it. We're going to keep maintaining it in SIG Storage, so if there are any bugs we'll address them, but we're not going to add any new functionality to it. The last piece of functionality that we did add was volume resizing; that went in last quarter, and that's the last feature that we're going to add to Flex. And then, finally, lots and lots of CSI drivers have been written.
I: Next up is what we're working on this quarter, for the 1.14 release. The biggest project is migrating in-tree volume plugins to CSI. A lot of you are probably already familiar with the built-in volume plugins that are baked into Kubernetes for historical reasons; these include cloud provider volume plugins such as GCE persistent disks and Amazon EBS volumes. Because they are baked into Kubernetes and expose a Kubernetes API, the Kubernetes deprecation policy applies, and we can't just delete them.
I: There are a lot of people in the SIG working on it, and this is going to take multiple quarters; we're hoping to have an alpha implementation of it this quarter. Next is bringing feature parity to CSI. CSI is basically going to be the way that we extend the volume subsystem for Kubernetes, but in order for it to do that, we need to make sure that it has all the features that the in-tree volumes have, so we're continuing to add functionality to bridge that gap.
I: Specifically, for raw block volume support, we want to make sure we are adding it to CSI and moving it along; the CSI version of raw block is currently alpha, and we want to move that to beta. Topology, which we discussed, went GA for in-tree volume plugins but is still alpha on the CSI side; we want to move that to beta this quarter.
I: Being able to reference volumes inline is something that CSI did not support, and as CSI expands to support things like local ephemeral volumes, it becomes more important to be able to specify a volume inline in a pod, rather than only through a PV/PVC, so we're adding support for that this quarter, and then adding support for volume resizing as well.
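A rough sketch of what an inline CSI volume in a pod spec looks like; note this API was still maturing at the time, and the driver name and attributes here are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: inline-demo
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: scratch
          mountPath: /data
  volumes:
    - name: scratch
      csi:                            # inline CSI volume, no PV/PVC objects
        driver: ephemeral.example.com # hypothetical driver name
        volumeAttributes:
          size: 1Gi                   # driver-specific attribute, illustrative
```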
I: In addition to that, we are working on drivers that are created by SIG Storage.
I: These are drivers that don't really have an obvious owner outside of the Kubernetes maintainers, so we maintain them on behalf of, I guess, the world. The four drivers that we're working on this quarter are an NFS CSI driver, an iSCSI CSI driver, a Fibre Channel CSI driver, and a driver that allows you to use a Docker image as a volume source.
I: So those are underway, and then we have a few designs that we're working on this quarter. One is around secure containers: you've probably heard of gVisor and Kata Containers, which provide a secure container runtime that makes breaking out of the container much more difficult. We're working with the security team and the node team to come up with a way to secure volumes used by such containers to prevent breakout. We're also looking into redoing the volume attach limits feature. This is a mechanism by which a volume plugin can advertise the maximum number of volumes that it supports per node. This is a limitation that a lot of storage systems have, and the way that we have it designed right now is not flexible enough to handle CSI, so we're redoing that design. And then we have a number of testing efforts underway to continue to make sure that this layer is rock-solid.
I: One is around scalability testing. If you are familiar with the Kubernetes scalability tests, you may know that the existing tests are entirely stateless; they don't take stateful workloads into account. We want to change that and make sure that we are also exercising the volume subsystem as part of those scalability tests, so we're working with SIG Scalability on that. Second is a pluggable end-to-end test framework: a lot of the volume tests that we have used to be per volume plugin, and that didn't really make sense.
I: And finally, Kubernetes conformance is something we've been working on for a while. This quarter we're going to have a set of tests that we hope to promote to Kubernetes conformance in the future; we're identifying what those tests are and creating a validation suite for now, and then the next step will be to figure out how that fits into the rest of Kubernetes conformance. So that's what SIG Storage is working on for this release.
I: If you are interested in getting involved, we have meetings every two weeks; we just had one at 9:00 a.m. today, one hour ago. Feel free to follow the link here to find all the information about that, as well as the meeting notes. We have a very active Slack channel, so if you have any questions, feel free to reach out to us there or on the mailing list. And that is all that I have.
A: All right; if you do, I'm sure you can find Saad on Slack. Finally, we have our announcements. First up, there is an update on Slack: we are only manually inviting contributors who need access, from now until we hear from Slack HQ. If you are a SIG and have a member that needs access, ping in the #slack-admins channel and an admin will DM you for the email. Consumer traffic is currently being routed to Discuss; the URL is discuss.kubernetes.io. The KubeCon Shanghai CFP ends at 11:59 p.m.
A: Next up we have the shoutouts, and I am sorry in advance about the names, which are in no particular order. Shoutouts to Hana, Shelley, Sen, Amit, Cole, Jeff, and Ben for their assistance on test-infra and release task automation. spiffxp shouts out to @coderanger, @mrbobbytables, and @kbarnard10 for putting together and posting the CVE blog so quickly. The blog post on the latest CVE also gives a shout out to @mrbobbytables and @justaugustus for handling the new member requests in the Kubernetes GitHub org in such a timely manner.
A: It feels like there are at least four to five requests a day, and they still manage to get to each of them while simultaneously doing other things for the community. I would agree; they are very, very quick to respond. @paris thanks me for hosting this call in her absence at the last minute, and I am a true team player; I just like helping out. And @mrbobbytables wants to shout out to @zacharysarah, SIG Docs, and everyone involved in kick-starting the French translation efforts.
A: There are also several contributors I don't have Slack handles for, but here are their GitHub handles. Yes, thank you so much; it's good to see that we're getting more translations on the docs. With that, if anyone has anything else to discuss, let me know; otherwise, I will be releasing everyone and giving everyone back about 18 minutes of their time. All right, happy Thursday, everybody! Thank you for coming.