From YouTube: Kubernetes Community Meeting 20190801
Description
We have a PUBLIC and RECORDED weekly meeting every Thursday at 10am PT.
See: https://github.com/kubernetes/community/blob/master/events/community-meeting.md for more details.
A: All right, welcome everybody. It is August first, and this is the weekly Kubernetes community meeting. This is a public meeting where we go through an agenda to update the community on what's happening in the Kubernetes community. Before we get started, please remember that we are under the Kubernetes code of conduct, so be excellent to each other, and remember that we are streaming and recording to YouTube, so everything you say will be on the internet for all time.

We've got a packed agenda this week. We're going to have a demo from Garden. Our glorious release manager has returned from holiday, so Lachie will be giving us an update on what's going on there. And then we have four SIG status updates that we're going to try to cram in today to catch us up for the summer: we're going to have the Product Security Committee (we're going to start having committees do status updates on what's happening with theirs), then SIG Instrumentation, SIG Docs, and then SIG Storage is going to close us out.
B: Hi everybody, thanks for having me. Let me share my screen here quickly. Can you see my screen? Yes? Good. All right, so I wanted to introduce you to a tool that we've been working on for about a year and a half now. Garden is a developer tool that makes it easy to work with complex distributed systems, and we spend a lot of effort on our Kubernetes support. Garden is designed to support other providers as well, but Kubernetes is where it's at today.
B: So that's where we're focusing. Core to the idea of Garden is that you should be able to describe the whole journey from a bunch of git repositories through building, deploying and testing, and yes, notably, testing. We want to weave together your development workflows and your testing workflows so that you can get rapid feedback while you work, and get the type of feedback that you would otherwise only get from CI.
B: While you work on your code. So I'll dive straight into a demo, starting here in a terminal. You start Garden with the CLI, and there's a bunch of different commands; I won't go through all of them. I'll use the one that kind of does it all. And actually, I want to make a point here: I have an example application which is a variant of the Docker voting example, which might be familiar; it's used to demo Docker Compose, etc. I'm going to spin that up, but just to make that point.
B: I don't have Docker running locally, nor do I have minikube or anything else running locally. I'm going to point Garden at my demo cluster, which in this case is hosted in GKE; it could be pretty much anywhere. I'm going to run the dev command, which goes through my whole stack and makes sure everything is built, deployed and tested, and for good measure I'm going to enable hot reloading. I think there's only one service in the stack that is configured for that, and I'll explain that a little bit later.
B: So the first thing that happens here is we start a dashboard; I'll show you that in a little while. It connects to the cluster and makes sure that all the services that Garden needs are set up, and here you can see it walks through pretty quickly in dependency order, making sure things are built and services are up and running. They already were, which is why it's so fast. Any test suites that I've defined have been executed; it runs and deploys everything that's needed, as you can see here.
B: So let me just quickly show that. The example project is this guy here, you might have seen it: we have a voting page and a result page, nothing special going on here. Notice that I'm connecting here through localhost. What Garden does behind the scenes is open up the port immediately when you start, and then when you connect to the port, it creates the tunnel to the service in the cluster dynamically. So basically there's a built-in TCP proxy in the tool itself.
B: Just a little bit about how you configure Garden. Here is the example project. At the top level you define your Garden project, and in this case it's quite simple: I have two environments configured, so I can run the same project easily locally or remotely. Here I just have the connection information that I need to connect to the GKE cluster. There's some templating here, because we actually run this in CI, so we can pick up environment variables from the CI as needed for convenience.
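As a rough illustration of the project-level configuration being described, a `garden.yml` with a local and a remote environment might look something like the sketch below. This is an assumption-laden sketch, not the exact demo file: field names, nesting and template syntax varied across Garden versions of that era, so check the Garden docs for the schema of the version you run.

```yaml
# garden.yml (project level) -- illustrative sketch only
kind: Project
name: vote
environments:
  - name: local
    providers:
      - name: local-kubernetes
  - name: remote
    providers:
      - name: kubernetes
        # connection info for the remote (e.g. GKE) cluster;
        # templated so CI can supply it via an environment variable
        context: ${local.env.REMOTE_KUBE_CONTEXT}
```

The same project can then be deployed with `garden deploy --env local` or `--env remote` without changing any module configuration.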
B: You can have Garden install nginx as an ingress controller, but you can use whatever you like. Garden doesn't really do any magic in terms of Kubernetes; all that's really happening is that Garden will translate its configuration into the relevant Kubernetes objects. So let's look at a simple configuration, the Redis configuration. At its simplest, a Garden module can look something like this.
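A hedged sketch of the kind of module configuration being shown: a container module whose service gets an ingress on a templated hostname, with a test suite defined alongside it. The names, ports and template variables here are invented for illustration and the exact schema depends on the Garden version.

```yaml
# garden.yml (module level) -- illustrative sketch only
kind: Module
type: container
name: vote
services:
  - name: vote
    ports:
      - name: http
        containerPort: 8080
    ingresses:
      # hostname templated from project configuration, routed to this service
      - path: /
        port: http
        hostname: vote.${var.base-hostname}
tests:
  # tests are first-class parts of the stack, run by `garden dev`/`garden test`
  - name: unit
    args: [npm, run, test]
```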
B: This is YAML. I'm basically saying that this host here, which we've templated from the project configuration, should route to this particular service. You can define tests as part of your stack, and maybe just to help visualize this, let me open the dashboard. The stack graph is a visual representation of how you configure Garden: these are all the individual steps you need to go through to deploy your stack, the builds and the deploys.
B: You can have workflow tasks as part of the deployment workflow; here we have, for example, a task that creates a table in the database, and towards the bottom you'll see tests, including integration tests. So what I'd like to do is quickly show how I'm going to break a test. I have a downstream test that expects a certain status code from this service here; I'm just going to have it return something else, and you'll see Garden pick that up and start traversing the graph, so basically everything that is affected by the line of code that I just changed will be automatically resolved here. And here I can see the actual error from the test; it is also printed out here in the console. Let me just unbreak that, and yeah, that's a very rapid-fire overview of what Garden does.
B: What we do is basically allow you to individually configure every part of your stack. The way we see it, every part of your stack should describe itself. It can also work across multiple repositories; here we have everything in a kind of monorepo fashion, but the notion is that you can describe every part of your stack and what it depends on.
B: So here I declare that this integration test actually needs this API service to be running, and this means it's easier for me to write tests: instead of mocking and stubbing every part of my code and basically unit testing everything, I can easily write integration tests and have them run while I work. How am I doing on time? A couple more minutes? Okay. So, just to show how the same project can look using Helm charts: here I have the same Redis service, just pulling in the stable chart.
B: You can reference charts here, like so, and provide values to them through templates. I can reference other modules as part of a stack: here I'm just telling this Helm chart that it deploys my worker image, which is a very simple Dockerfile, what the name of the repository is and what version tag it should deploy. And there's another module type, simply called kubernetes, where you can specify the actual manifests without Helm.
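To sketch what that Helm module wiring might look like: a `helm`-type module whose values reference the image built by a sibling container module. The module names and output keys below are illustrative assumptions; the exact template keys for referencing another module's built image differ between Garden versions.

```yaml
# garden.yml -- illustrative sketch only
kind: Module
type: helm
name: worker
values:
  image:
    # reference the image built from the sibling "worker-image"
    # container module (a simple Dockerfile)
    repository: ${modules.worker-image.outputs.deployment-image-name}
    tag: ${modules.worker-image.version}
```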
We have a lot of flexibility in terms of how you configure your stack, and the philosophy of Garden is not that we're providing this higher-level abstraction and limiting you to it, but rather that you can start simple and gradually grow in complexity as the needs arise. Yeah, I think I'm at ten minutes now; should we open up for questions?
C: Hello everybody, can everybody hear me okay? Yeah, fantastic, thanks George. So my name is Lachlan Evensen, and I'm the 1.16 release lead. A quick update on 1.16: 7/30, which was this Tuesday, was enhancements freeze, and that's where we take in and cut off all the enhancements that are coming in slated for the 1.16 release.
C: So there's a little bit of ordering there, which is great when you're laying down, you know, like an Envoy proxy: you want it to come up and get its routing in the right place before everything else starts. So feel free to click there and take a look at what is slated to go into 1.16; obviously it's pending the code landing and things like that. Also, we cut 1.16 alpha 2 on Tuesday this week, so if you're interested in taking a look, you can download that, have a play and give us feedback.
C: Upcoming milestones: next Tuesday, 8/6, will be alpha 3, and that will be the final alpha for 1.16; then we go into beta the week after. Patch release updates: everything's still TBD, marked mid-August, and that's for all supported releases. 1.13.9, 1.14.5 and 1.15.2 currently do not have dates assigned. That is the release team update; feel free to ask any questions, I'll be hanging around on chat. Thanks, George.
D: Thanks! I've never actually shared my screen on Zoom before, so we'll see if this actually works. Can y'all see? Yep, thumbs up, that's awesome. Thanks for having us today, folks; I think this is our first time at the community updates, so again, thanks. I'm here to introduce the Product Security Committee and tell you a little bit about what we do. Firstly, what we do: the Product Security Committee is responsible for organizing the entire security incident response process, including internal communication and external disclosure.
D: So we handle everything from triaging new bugs all the way through until the fixes are released. A quick introduction to who we are: we have a few people on the team now, a number of J's. Firstly, we have Brandon Philips and Joel from Red Hat, CJ, Tim and Jordan from Google, and myself from Shopify. We also have a few associate members here with us on the team, the newest from Microsoft, so we're pleased to have them along with us.
D: So, firstly, we will receive a suspected vulnerability report via the disclosure process, and the disclosure process at kubernetes.io/security is where we expect the majority of our vulnerability reports to come in right now. With that, one of the members will triage the vulnerability, and then we'll assign a fix lead if applicable, which is usually the case. If a vulnerability is not actually a vulnerability, we won't assign a fix lead at that time.
D: You know, we can just triage and send it away, but if it's applicable we'll pick a fix lead, usually round-robin from the Product Security Committee. The fix lead will identify and select relevant developers to build a fix team. This is really easy thanks to code owners: if there's a vulnerability in a piece of code, we'll definitely go check the code owners and try to find the last few people who worked on that particular part of the feature, then build the team out and start fixing. The fix lead, with the help of the fix team, will then assign a CVSS score (CVSS is the Common Vulnerability Scoring System, a pretty helpful tool that allows us to reason about different types of vulnerabilities against systems), and then we're going to request a CVE. The Kubernetes organization is able to do this, which is really awesome.
D: So after we assign a CVE, and of course the fix is on the way, we're going to wait for that fix to be approved, and then once we actually have a fix and patches available to go, we will create a communication plan about these vulnerabilities and then release them to the appropriate people. So that's really the short and sweet version of how the process works. In practice it actually is a little bit longer than this; we're working on adjusting our timelines to actually meet some expectations.
D: Again, this is a volunteer-based program. For updates, what we're working on now is the Kubernetes and HackerOne partnership. We selected a bug bounty vendor earlier in the year, and that ended up being HackerOne, and we've been working with them to launch our bug bounty program for Kubernetes. No big updates for us on this yet, so please stay tuned; hopefully we're going to have something by KubeCon, but we are working ferociously to get this bug bounty program in action.
D: We're really excited about this. And finally, on how to find us and how we do things: we have our homepage at kubernetes.io/security. Please check us out, read the documentation, become familiar with it and with how vulnerabilities and security reporting affect you. We also have a Slack channel, more for just talking; please don't report vulnerabilities there publicly. The Slack channel is #kubernetes-security.
D: We do have a public list that you can join to talk with us, kubernetes-security-discuss, and if you do wish to report a vulnerability, please do it privately via kubernetes.io/security. I don't really know what else to update, given that it's our first meeting, so short and sweet. Thanks again for having us; please feel free to reach out to any of us. We'd love to help you or further explain any of this system to you, and if you have any questions, I am here for you.
A: It looks like Erin has the first question. Erin, are you on? Okay, I'll just read it. He says: "I've heard PSC referred to as one of the least fun on-calls ever, so huge thanks to everyone volunteering their time to do this." I guess that's not a question, just a compliment.
D: Thanks so much! Yeah, there are six people on the on-call rotation right now. I think it's been Tim for the longest time; it's been a while, yeah. But we have been running through a lot of this, and hopefully we're going to sort out our on-call process to spread the love around. I know I've signed up; Opsgenie is what we use for this, and I've been paged a couple of times, and yeah, it's definitely more frightening than my production pager at Shopify, that's for sure.
D: Yeah, security@kubernetes.io is the de facto way to report vulnerabilities to us today. The reason I did not include it is that we would like people to use the website, because there is a little bit of further instruction there, so they don't just fire stuff downrange at us; we do have an issue template and whatnot to use. So instructions are on kubernetes.io/security, and hopefully, fingers crossed, we can get the bug bounty program running and start forwarding people over there soon.
E: My name is Piotr, and I'm leading this together with Frederic Branczyk. I will cover what we did recently, what we are working on right now and how you can help, basically. So, what we did recently: we started this year by noticing that plenty of metrics within the Kubernetes system components violate the Kubernetes instrumentation guidelines and the Prometheus instrumentation best practices, so we decided to address this issue. There was also no consistency in metric names and labels exported from the components.
E: You had similar metrics, meaning more or less the same thing, but written in different ways. So we did a lot of work rewriting all those metrics, so that the metrics coming from Kubernetes system components are consistent between the components and follow the best practices and the guidelines; basically, they meet the standards. This work landed in 1.14, and while working on this part, we also realized that there is no good way of keeping track of metrics stability. What does that mean?
E: There was no such mechanism in Kubernetes, so we decided to address this as well: we designed and implemented a framework for versioning metrics, so that there is a mechanism to perform the kind of work that we did in the previous quarter, and also to perform similar work in the future, so that we'll be able to change the metrics. We figured out the migration path, and we are now working on the validation script.
E: We need help on migrating the existing metrics to use this new format. So if you are interested, feel free to join us in SIG Instrumentation; there are plenty of Kubernetes system components, we would love to complete this migration within the 1.16 timeframe, and it's time to start working on this. As for the related KEPs: there is the Kubernetes metrics overhaul KEP, and the work around defining metrics stability is split among three KEPs.
F: If it was a snake, it would have bit me. Awesome. Can you see my screen? Yep, looking good. Perfect, all right. So last cycle we continued our localization efforts; we're up to seven languages right now, the most recent being Spanish, Portuguese, Indonesian and Hindi, with two more in the works, and localization is just exploding. I know Jennifer touched on this last update, but it seems like it's a train in motion, and it's really awesome to see the work that's being done in different areas all over the world. We also released the 1.15 docs.
F: So thanks to Barney, that was a great effort there. We still have some snags in the release process for documentation, but I'm actively working with Barney and the current release lead to resolve some of those friction points; there's no reason why docs need to be a complex part of the release process, so we're trying to make sure it's an easy journey there. We're also continuing to improve mentorship and sustainability, with things like the new contributor ambassadors, issue triage leads, localization leads and shadow chairs, trying to make it a revolving door in a good way. So there's a sustainable model within SIG Docs for how PRs are handled, how issues are handled, how chair mentorship is handled, to really build an open and collaborative community around SIG Docs that can sustain itself.
F: Also in this cycle, we introduced concepts and curated security content onto a single page. This was with help from Zach Arnold, and we created what was not an official working group, more of a subproject, and I think there's a finite end date here, where at some point it will kind of disband itself. But the effort there is to consolidate how you secure a cluster and where all those resources are; in the past they were kind of all over the place.
F: So it's a really big improvement as they're consolidating those, and it sounds like it might play a little bit of a role in tandem with the things that Jonathan was talking about earlier today: what do you secure in your cluster, and then also, how does the announcement and communication piece happen to the community? That's really the focus there. We also finished a content redesign, including a table to represent the various vendor solutions fairly; you can click on that link in the slide or in the documentation here.
F: It was kind of a "run your own product" type of thing; there's a lot of content, and as we saw the vendors scale out, there are different ways to sell your products, and it was not really standardized in any way. So that effort put it down into a table where you can simply see check marks for where vendors fit in certain categories and areas, to find a solution that meets your need without it necessarily being a pitch.
F: We also held our second remote Q2-to-Q3 planning meeting. Previously that was done in person; this is the second one we did through a Zoom session, to work out some of our goals for Q3. We're currently averaging 1.27 million pageviews per week, which I think is pretty impressive, up from 1.2 million a week. So that's awesome to see, those are pretty impressive numbers, and I guess a shout-out to Netlify for helping sustain some of those.
F: The 1.16 release is in progress, as I mentioned earlier. Tunde is coming up to speed, which is awesome, and he's doing a great job at improving some of those friction points we've been having the past few releases. Much like software continues to get better, I think the release process for documentation continues to get better, and I still think we have room to grow.
F: The working group I was talking about earlier with Zach Arnold: Zach can no longer run that group, so we're looking for someone to take over and run with it. There's a SIG Docs security channel on Slack, and we're really looking to see that project through to completion. There was a lot of effort to consolidate security concepts onto a single page; now I think the secondary area is figuring out the communication point from the cluster operator or owner's perspective: how do I know when there's a vulnerability in my cluster? I believe that's the objective they've set out to solve. And just some housekeeping here: Jared is getting back from sabbatical today, so it'll be awesome to have him back, really good to see him. Jennifer still has limited availability from starting her new job, and as a result of all this, a lot of the workload has fallen upon Zach Corleissen. So it's quite busy, and he thanks you in advance for your patience.
F: So if you're interested in contributing to documentation, there's a link here; we're looking for more shadow chairs and mentoring in Q3 and Q4, and there's always a need for technical review to ensure concepts and tasks are accurate and up to date. You can see the current priorities at the link there. There's a lot of issue triaging that gets done in SIG Docs, but there are also some objectives to improve content and documentation, which you'll find in that link. There's a little bit of a balance between putting out fires and improving the existing content.
F: If you see something incorrect, open a PR; as we noted, PRs are going to get more attention than issues, just by the nature of what they are. And where to find us: there are the communication links to the chairs there, as well as our home page, Slack and our mailing list. I think that is it, so if you have any questions, reach out to us on Slack, or you can find me on Slack as well. Thank you.
G: Can everyone see my screen? Okay, all right. So my name is Saad Ali, and I'm one of the co-chairs of SIG Storage. Today I wanted to give you a real brief update on what we've been working on. So first up, I wanted to talk about the features that SIG Storage delivered in 1.15. In many respects 1.15 was a kind of working release, where there's a lot of work that we've been working on but haven't necessarily pushed out or released yet.
G: But the major highlight for the 1.15 release was this migration work that has been going on for multiple quarters now. The effort here is around the Container Storage Interface: CSI was built by SIG Storage to allow vendors to build plugins and drivers out of tree, out of the Kubernetes core.
G: This is in conjunction with an ongoing effort to remove cloud provider code from in-tree, and so we did a lot of work last quarter to enable this, but it hasn't gone to beta yet; it remained in alpha, but a lot of work was done that you don't get to see just yet. In addition to that migration work, we continued to add functionality to CSI, functionality that didn't necessarily exist on the in-tree side.
G: In addition to that, we're continuing to make the CSI layer more robust: we made the existing registration mechanism for the kubelet for CSI drivers more robust, so it's more tolerant of errors. We also allowed CSI drivers to expose volume capacity; this is something that in-tree drivers can do, but CSI wasn't able to do until very recently. And then we also added the ability for you to specify secrets per volume, so per PVC, instead of for an entire storage class.
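To illustrate the per-volume secrets idea: the CSI external-provisioner supports templated secret references in StorageClass parameters, along the lines of the sketch below. The driver name is hypothetical, and the exact parameter keys and template tokens depend on the external-provisioner version in use, so treat this as an assumption-marked example rather than the canonical form.

```yaml
# Illustrative sketch -- driver name and parameter keys are assumptions
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-per-volume-secrets
provisioner: csi.example.com
parameters:
  # templated so each PVC resolves to its own secret,
  # rather than one secret shared by the whole storage class
  csi.storage.k8s.io/provisioner-secret-name: ${pvc.name}
  csi.storage.k8s.io/provisioner-secret-namespace: ${pvc.namespace}
```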
G: So beyond CSI, we are adding two major pieces of functionality: volume cloning and volume snapshots. Volume snapshots has been in alpha for quite some time now, and we added a couple of new features there. One was the snapshot and delete volume finalizer; this prevents users from accidentally deleting objects out of order and getting themselves into a bad state. We also designed a quiesce-and-resume hook for volume snapshots, in conjunction with SIG Apps.
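For reference, the alpha snapshot API being discussed lets you request a snapshot of an existing PVC with an object roughly like the one below. The PVC and snapshot-class names are invented for illustration; the alpha API group and field layout may differ slightly depending on the exact release of the external snapshot controller.

```yaml
# Illustrative sketch of the alpha VolumeSnapshot API
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshot
metadata:
  name: data-snapshot
spec:
  snapshotClassName: csi-snapclass   # assumed VolumeSnapshotClass name
  source:
    kind: PersistentVolumeClaim
    name: data-pvc                   # assumed existing PVC to snapshot
```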
G: The idea here is that, before we take a snapshot, we should have some mechanism by which we can tell the application to pause writes and flush anything that's in buffer, so that you can take a snapshot that is going to be consistent. That design was completed last quarter. And then the big star feature for 1.15 was volume cloning: this gives users the ability to use another persistent volume as a data source at provision time.
G: So instead of first creating a snapshot and then creating a new PVC from that snapshot, you can create a PVC and point directly to an existing PVC; as long as the underlying storage system supports cloning, that will just result in the cloning of that volume.
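The cloning flow just described can be sketched as a PVC whose `dataSource` points at an existing PVC. Names and sizes here are placeholders; the source PVC must be in the same namespace, and the request must use a compatible storage class and a size at least as large as the source.

```yaml
# Clone an existing PVC by using it as a dataSource
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cloned-pvc
spec:
  storageClassName: csi-sc        # must be compatible with the source PVC
  dataSource:
    kind: PersistentVolumeClaim
    name: source-pvc              # existing PVC in the same namespace
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi               # at least the size of the source volume
```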
So that was the 1.15 release. Moving on to what we're working on this quarter for the 1.16 release: CSI migration again is our biggest ongoing effort, and we're hoping to move that to beta this quarter.
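While CSI migration was still alpha, trying it out meant turning on feature gates by hand. A hedged sketch of what that looks like in a kubelet configuration file is below; the per-cloud gate name (here GCE) follows the CSI migration KEP, and the same gates also have to be enabled on kube-controller-manager for the migration to take effect.

```yaml
# Illustrative sketch: opting a node into alpha CSI migration
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  CSIMigration: true       # master switch for the migration shim
  CSIMigrationGCE: true    # per-cloud gate; one exists per in-tree driver
```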
G: A lot of the work was done last quarter, so we're pretty confident in being able to hit beta here. The next item is adding online and offline volume resizing to CSI; that is currently available as alpha, and we want to move it to beta. This gives users the ability to dynamically resize volumes through the Kubernetes API, rather than having to go around it and interact with the storage system directly. And then volume snapshots: we're finally planning to move this to beta as well; it's been in alpha for a while.
It's
been
an
alpha
for
a
while.
G
It's
gotten
good
usage,
good
feedback
and
we're
fairly
confident
in
where
the
API
is,
but
but
we've
also
identified
a
large
amount
of
work
that
we
want
to
do
before
we
move
to
beta.
So
this
will
be
a
little
bit
of
a
stretch
goal
for
us,
we're
hoping
to
hit
beta
this
quarter,
but
that
may
end
up
slipping
and
then
volume
cloning
which
was
introduced
last
quarter
was
simple
enough
and
has
gotten
enough
positive
feedback
that
were
confident
in
moving
that
forward
to
beta
this
quarter
as
well.
G: In addition to these beta graduations, we're also looking into designing a volume snapshot namespace transfer mechanism. This is a mechanism by which you can transfer volumes across namespaces. The ability to snapshot or clone a volume is great, but, as most of you are probably familiar with, PVCs are namespaced, and both cloning and volume snapshots limit restore to the same namespace for security reasons. But users want to be able to transfer volumes across namespaces, and we're looking into how we can enable that.
G: So that's going to be a design this quarter, not an implementation. We're also implementing CSI support for Windows. This is pretty big; we're kind of going all in on Windows support in SIG Storage. It's something that we've ignored for a while, but it's very important, and for this quarter the goal is to get CSI working as an alpha implementation with 1.16. And then finally, the ephemeral CSI volumes that I talked about are going to be moving to beta this quarter as well.
G: So that closes out the features that we're working on for 1.16. If you are interested in getting involved, please go to this website here; we have all the information about how you can get involved. We have a mailing list, and we have a Slack channel if you have any questions. We hold meetings biweekly on Thursdays; one was actually right before this meeting, at 9:00 a.m. Pacific time, and we'd love to get more members. So thank you very much.
A: All right, thanks so much for powering through those updates; you were able to do four and we're still way ahead on time. So let's finish up with the announcements. Just some quick conference updates: the Cloud Native Rejekts conference is happening before KubeCon + CloudNativeCon San Diego. If you're not familiar with this conference, they try to host all the talks that were not able to be accepted for the main conference. It's really good that they're repeating that in San Diego; they're just trying to source a venue, so click on through to the link there.
A: If you're interested in that. Also related: the CNCF has announced the Kubernetes Summits in both Sydney and Seoul. These are two events, and they are going to be in December; I've left those links in the notes as well if you want to click through. Save the date: the contributor summit, which is the day before KubeCon + CloudNativeCon, will be happening this year on November 17th and 18th, so save the date.
A: Paris and crew are going to be putting together a program for both new and current contributors.
A: Generally speaking, these are the larger ones, so she'll be providing more information as the weeks go on; for now, just pencil in the date. We always have lots of good content that's useful for people who attend this call. Next week's updates are going to be SIG Cloud Provider; if you haven't seen the note on kubernetes-dev, all the old cloud provider SIGs are now rolled up under that as subprojects, so they're going to give an update on what's happening there next week. Contributor Experience and Scheduling will also be happening next week.
A: Shoutouts: there is the #shoutouts channel on Slack. If you see someone going above and beyond the call of duty, feel free to give them a thanks in that channel, and then we read them out towards the end of this meeting. From Nikhita this week we only have one: "Would like to shout out to Alison Dowdney for running her first SIG Contributor Experience APAC meeting; it went really smoothly." So with that, we are going to close it up and give everyone over 15 minutes back.