From YouTube: Kubernetes Community Meeting 20180906
Description
The Kubernetes community meeting is intended to provide a holistic overview of community activities, critical release information, and governance updates. It also provides a forum for discussion of project-level concerns that might need a wider audience than a single special interest group (SIG).
https://contributor.kubernetes.io/events/community-meeting/
A: All right, everyone, let's get started. This is your weekly Kubernetes community meeting. It is Thursday, September 6, 2018. I'm your temporary host, Jorge Castro; I'll be hosting temporarily for Jeff, who put together the agenda and everything for this week's meeting and had to step away for a little bit. We've got a demo today, the GSoC 2018 project demo for the etcd proxy controller; some release updates from Tim and company; let's see, Aaron has a bunch of updates about the move from the submit queue to Tide; and then the graph for the week, announcements, and stuff like that. So before we get started, please keep in mind that this meeting is recorded on YouTube and is being live streamed to the Internet, so please be cognizant of what you're doing, and please keep an eye on your microphones and keep them on mute when you are not talking. And with that, let's get started. All right, Marko, go ahead.
B: Aggregated API servers are the most powerful and most feature-complete way of extending the Kubernetes core API. However, Kubernetes aggregated API servers require direct access to etcd, and that direct access is a security risk. As we can see from the following diagram, the kube-apiserver talks to etcd, but the aggregated API servers also talk to the same etcd. There is nothing that controls aggregated API servers' access to etcd, so an aggregated API server can read or write any data, including potentially critical data.
B: Therefore, sharing etcd with direct access is a security risk and not an option. In environments such as Google Cloud or Amazon, we don't even have access to etcd, so we would have to manually deploy and manage dedicated etcd clusters. This is hard and time-consuming: if we look at the following diagram, each aggregated API server needs its own etcd, so beyond the cluster's own etcd you're soon managing two, three, or more etcd clusters, plus all of their certificates.
B: Upgrading and maintaining all of this is hard and takes a lot of time, and this has been a deal-breaker for many users deploying aggregated API servers. Luckily, today we are happy to present a solution to this problem: the etcd proxy controller. So what's the etcd proxy controller? The etcd proxy controller is a solution to this problem: it allows you to share an etcd cluster with any number of aggregated API servers.
B: Our vision is to allow operators to install aggregated API servers with a simple command like "helm install sample-apiserver", without any interaction to configure etcd or anything similar: just run the command and have the API server up and running. Everything is automatic and secure. The etcd proxy controller can even be used by cloud providers such as Google or Amazon to host etcd instances for their users, so users can point aggregated API servers, or anything else, at it in a controlled way.
B: Now the question is how this works. The etcd proxy controller is based on the etcd proxy, which ultimately relies on etcd prefixes. It allows us to separate etcd keyspaces: when you are talking through an etcd proxy and write some key, a prefix is added to it, and that etcd proxy can only access keys that have that specific prefix.
B: The etcd proxy is configured by deploying an EtcdStorage resource. The EtcdStorage resource defines how certificates will be generated, what keyspace in etcd will be used, and many other things. In a minute we'll see what the manifest looks like, but it is important to mention that the etcd proxy controller automatically handles certificates, including renewal and rotation.
B: Now let's take a look at the architecture of the etcd proxy controller. On the left side we have the following: we have an etcd cluster, and we have the etcd proxy controller deployed in the etcd proxy namespace, which we usually call something like kube-apiserver-storage. On the right side we have an example of what an EtcdStorage object looks like. We have a name, and this name is the name of the etcd proxy namespace we are going to use; it is also the key that determines the prefix.
B: So the name is going to be used as the prefix. In the EtcdStorage spec we have the following keys. First, a CA cert config map: this is a config map, stored in the namespace of the sample-apiserver or any other aggregated API server, where the etcd CA certificate is stored; it is used by the API server to verify the identity of the etcd proxy.
B: Then we have a client secret, where the storage client certificate is stored, used by the aggregated API server to authenticate with the etcd proxy. It is also possible to configure the validity of the certificates in the CRD. Now, what happens when we deploy an EtcdStorage object? The controller automatically creates an etcd proxy: it generates certificates, deploys the certificates into the config map and the secret, and connects the etcd proxy to etcd.
B: The etcd proxy is automatically exposed with a service, and at that point it can be used by the aggregated API server. Now we are going to deploy one more EtcdStorage, and the process is similar: we get another etcd proxy with another prefix, because proxies can't share the same prefix. The controller will not allow two prefixes to be the same, and that means that nobody can access your data by creating the same EtcdStorage.
B: So once we have both etcd proxies set up, we can point the aggregated API servers at them. Let's say we have the first aggregated API server and it's writing something through the etcd proxy, say the key /apps. When it's written through the etcd proxy, it is stored in etcd with a prefix, such as /proxy-prefix-one/apps.
B: Now the second aggregated API server writes something such as /cars; when it is stored in etcd, it is stored as something like /proxy-prefix-two/cars. One etcd proxy can only access keys with its own prefix, and that is secure, because nobody can create an EtcdStorage to access your keys; only you can access your own keys. In this demo we are going to use the OpenShift service-serving-cert-signer. This is a project by the OpenShift team, and it works on any Kubernetes cluster.
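The prefixing and isolation just described can be sketched in a few lines of Python; the shared dict stands in for etcd, and the prefixes and keys used here ("one", "two", "/apps", "/cars") are illustrative, not the controller's actual naming scheme.

```python
# Toy model of the etcd proxy's transparent key prefixing.
class PrefixedStore:
    """Simulates one etcd proxy: every key is prefixed on write,
    and reads can only see keys under that prefix."""

    def __init__(self, backend, prefix):
        self.backend = backend   # shared dict standing in for etcd
        self.prefix = prefix

    def put(self, key, value):
        self.backend[f"/{self.prefix}{key}"] = value

    def get(self, key):
        # The proxy can only address keys under its own prefix.
        return self.backend.get(f"/{self.prefix}{key}")

etcd = {}                                # the single shared etcd cluster
proxy_one = PrefixedStore(etcd, "one")   # first aggregated API server's proxy
proxy_two = PrefixedStore(etcd, "two")   # second aggregated API server's proxy

proxy_one.put("/apps", "app-data")
proxy_two.put("/cars", "car-data")

# In the shared etcd, the keys land under distinct prefixes:
assert sorted(etcd) == ["/one/apps", "/two/cars"]
# Each proxy sees only its own data; the other tenant's key is invisible:
assert proxy_one.get("/apps") == "app-data"
assert proxy_one.get("/cars") is None
```

The point of the design choice is that isolation is enforced by the proxy, not by each API server behaving well.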
B: It doesn't have OpenShift dependencies, and its purpose is to create serving certificates for aggregated API servers, so we have truly static and portable manifests. Note that this project will be renamed soon; as far as I know, the next name will be service-ca. So if you see that project, yes, that's the same one.
B: We deploy the RBAC roles that are needed for the aggregated API server, and needed by the etcd proxy controller to update config maps and secrets. Now we are ready to deploy the aggregated API server, but first, let's take a look at the deployment manifest. The first thing we need to deploy is the EtcdStorage resource, which, as I explained, will produce the certificates and secrets. Once it is deployed, we can deploy the aggregated API server. The most important point is that we need to provide the address of the etcd proxy server.
B: The address is in the format: name of the EtcdStorage resource, dot, namespace of the controller, in our case kube-apiserver-storage, dot svc. Then we need to mount the certificates for the etcd proxy that were generated by the proxy controller and stored in the secret and the config map. We also provide the serving certificate generated by the OpenShift signer. (You have about a minute left.) Okay, and now we are ready to deploy the aggregated API server, and we can check that it is deployed.
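The address format described above can be illustrated with a small helper; the EtcdStorage name and namespace used here ("etcd-sample", "kube-apiserver-storage") are hypothetical examples, not names from the demo manifests.

```python
# Illustration of the etcd address format:
# <EtcdStorage name>.<controller namespace>.svc
def etcd_proxy_address(etcdstorage_name: str, controller_namespace: str) -> str:
    """Build the in-cluster DNS name of the etcd proxy service."""
    return f"{etcdstorage_name}.{controller_namespace}.svc"

addr = etcd_proxy_address("etcd-sample", "kube-apiserver-storage")
assert addr == "etcd-sample.kube-apiserver-storage.svc"
```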
B: It takes some time, and it is going to fail the first time, because it takes some time for the certificate to be generated; now it is up and running. The Flunder resource, which is a sample resource for the sample-apiserver, can now be created, and if creation is successful, that means the API server is working as intended: it is using the etcd proxy and it works. This shows the etcd proxy works and our API server is working as intended at last.
B: We hope this is going to help aggregated API server operators to easily install their API servers with tools such as Helm. You can learn more about the project on GitHub in the xmudrii/etcdproxy-controller repository. I'm also going to publish a blog post with a deep dive into the technical details of the etcd proxy controller, and you can follow me on Twitter for some more details. Thank you all for listening.
E: Crap, I was kind of hoping that Tim would set me up for this, but that's totally fine. So hi everybody, I'm Aaron Crickenberger, SIG Testing and steering committee. Let's talk about the submit queue and Tide. So, to spoil Tim's thing a little bit: you may have noticed in the email that he sent out to kubernetes-dev, there was this little thing here that said, we hope to shift to a new and shiny Tide implementation in the next week, and that kind of raised my eyebrows, so I figured...
E: So today, Tide is used on the majority of our repos. I've had a tracking issue out there for a little while to make sure that all of the 120-something repos we have on the project, if they use some form of merge automation, are now using Tide. The only exception to this is kubernetes/kubernetes. So if you're sick of me talking and just want to understand how Tide is going to help you merge your PR, instead of the submit queue merging your PR, go check this doc out.
E: It's going to show you exactly what you're looking for. I'll walk through how to use this live on a real PR in a moment, and then it just tells you some questions you can ask and have answered via Tide. First off, if you don't know what labels are necessary for your pull request to get merged, here's a quick plug for a refinement of the release docs that Tim Pepper did...
E: That doc talks about the different phases of the release and the different milestones and labels that you need on your pull requests, as represented by the Prow commands that would add those labels or milestones, or what have you. We are in code freeze, so right now you need your PRs to have a milestone, a sig label, a kind label, and a priority label such as critical-urgent, and they need to be LGTM'd and approved.
E: If you find this is all super awesome and want your repo added to Tide, here's a page that describes sort of how we configure Tide, along with an example of how we can configure a query for Tide, and some context options to make sure that things other than tests run by Prow are required for merge. So, the ways that Tide differs from the submit queue. First off, just real quick: how many people here actually know what the submit queue is, or have used this UI before? Cool.
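As a rough sketch, a Tide query amounts to a set of repos plus required and forbidden labels. The field names below mirror Prow's Tide configuration (repos, labels, missingLabels), but the concrete repos and labels are only examples.

```python
# A Tide query expressed as plain data.
tide_query = {
    "repos": ["kubernetes/test-infra", "kubernetes/community"],
    "labels": ["lgtm", "approved"],          # labels that must be present
    "missingLabels": ["do-not-merge/hold"],  # labels that must be absent
}

def pr_matches(pr_labels, query):
    """A PR is in the Tide pool if it carries every required label and
    none of the forbidden ones (test status is checked separately)."""
    labels = set(pr_labels)
    return (labels.issuperset(query["labels"])
            and labels.isdisjoint(query["missingLabels"]))

assert pr_matches(["lgtm", "approved", "size/S"], tide_query)
assert not pr_matches(["lgtm", "approved", "do-not-merge/hold"], tide_query)
```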
E: Hopefully this will not be much of a change for you, or not noticeable in any way, shape, or form. The submit queue provided us a single, serialized, ordered queue of PRs. The way that Tide differs from this is that it instead uses GitHub queries to select pools of PRs, or batches of PRs, into Tide pools, and then we select as many of those as we can to run them in a batch. That's the tide coming in. And if they all merge, sorry, if they all pass, we merge them all.
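The batch behavior, tide in and tide out, can be modeled as a toy function, with a stand-in callable for the real CI run; the PR numbers are made up.

```python
# Toy model of Tide's batch semantics: a batch is tested together,
# and every PR in it merges only if the combined batch passes.
def process_batch(batch, passes):
    """Return the PRs merged from this batch: all of them if the batch
    test passes, none otherwise (the pool is retried later)."""
    return list(batch) if passes(batch) else []

# A passing batch merges every PR in it at once.
merged = process_batch([101, 102, 103], passes=lambda prs: True)
assert merged == [101, 102, 103]
# If the combined test fails, nothing merges in this round.
assert process_batch([104, 105], passes=lambda prs: False) == []
```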
E: That's the tide going out. The neat thing about this is that Tide does this in parallel, so there is no concept of a queue anymore. So what do I mean by "Tide runs GitHub queries"? Well, if I click on this thing, it kind of blows up into the massive GitHub queries that we use, but you as a human can actually click on the GitHub search link yourself.
E: You can pretend as though you were Tide and go see what PRs are out there that satisfy those queries. It boils down to a pretty standard query that looks across a bunch of different repos, makes sure that pull requests don't have these labels, makes sure that pull requests do have those labels, and that all of their tests are passing. So, as a result, because there is no queue: if you are used to applying a queue-fix label or anything else like that, that will no longer have any effect.
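The kind of search string Tide effectively runs could be assembled like this; the repo and label names are illustrative, and the real queries include more criteria.

```python
# Build a GitHub search string of the shape Tide's queries boil down to.
def github_search(repos, required_labels, forbidden_labels):
    parts = ["is:pr", "state:open"]
    parts += [f"repo:{r}" for r in repos]
    parts += [f'label:"{l}"' for l in required_labels]   # must be present
    parts += [f'-label:"{l}"' for l in forbidden_labels] # must be absent
    return " ".join(parts)

q = github_search(["kubernetes/kubernetes"],
                  ["lgtm", "approved"],
                  ["do-not-merge/hold"])
assert q == ('is:pr state:open repo:kubernetes/kubernetes '
             'label:"lgtm" label:"approved" -label:"do-not-merge/hold"')
```

Pasting a string like this into GitHub's search box is exactly the "pretend you are Tide" exercise described above.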
E: Put another way, if you're used to going to this page that the submit queue gave you and trying to figure out which one of these eight things you can do to your pull request to get it merged sooner, you don't get to do that anymore. The reason we got rid of the concept of ordering is that it means we can do more batches without the queue changing out from underneath us, and it has actually significantly improved our merge velocity across other repos. Tide will also currently always rerun tests prior to merge.
E: We don't support the concept of not rerunning tests just because it's a docs fix or something; we believe in this strongly, to generate more test signal and to always run tests before merging. If you feel strongly about this, or it turns out we find that this really slows down our merge velocity, here's an issue where we've been brainstorming different ways to allow Tide to not run tests prior to merge.
E: So if you're used to the retest-not-required label, or the docs-only retest-not-required label, those will no longer have any effect with Tide. If you used this submit queue UI, perhaps you were used to using the submit queue's PR dashboard to see the status of all the PRs out there. Or maybe, not to pick on you, Justin, I just saw your avatar, maybe you're Justin Santa Barbara and you want to know what the status is of all your PRs.
E: It's actually going to look at all my PRs, so I apologize, I'm airing dirty laundry here. But you can see, for example, here's a PR of mine against kubernetes/org master, and all of the tests passed. So that's cool, but my PR can't merge yet because it doesn't meet the label requirements; specifically, it doesn't have the required approved label. If I click on this little question mark next to "does not meet label requirements"...
E: It tells me how to meet the merge requirements and points me to the contributor guide and the command help page. So if I do the same thing I did earlier, where I stalked Justin Santa Barbara, I could type in a GitHub query, which I could just go type in on GitHub as well, and it's a little slow, but hopefully this will show his PRs... there we go, cool. And so you can see this tells me what we need to resolve.
E: It looks like it's just a bunch of labels, or unknown merge requirements, because this is still kind of a new UI in flux. Okay, moving on: if you are used to obsessively taking a look at this page to see what is in the submit queue's queue, you would instead now go take a look at the Tide page that I showed you earlier.
E: Right now there is actually nothing that's ready for merging, so Tide is being pretty quiet, but if there were stuff ready to merge, you'd see it down here: it would explain what repo it is running tests for and what PRs it's running tests for in a batch prior to merge. If you're used to using the history view to understand, historically, why the submit queue has done stuff, we don't support that in Tide yet; basically, you would look at the merge commits for your repo of choice to see what pull requests have merged.
E: If you are used to looking at this wonderful page on the submit queue that shows, I don't know, a bunch of stuff, plus graphs that were supposed to represent queue health and that have been failing for like over three months now, I would instead redirect you to the Velodrome monitoring page, where we're going to see some graphs that maybe don't necessarily apply anymore, because we're not going to have a queue anymore, so we're not necessarily going to show how many pull requests are queued up.
E: What you will be able to look at instead is how many pull requests are in different Tide pools, and again, I know this looks like a lot of randomness, but it's sort of illustrative of how Tide is able to merge PRs very quickly: you don't tend to have too many outstanding PRs, as a result of our merge philosophy with Tide.
E: A huge shout-out to Cole for putting in a lot of effort to actually make this happen; I'm just here to talk about it. Cole's really been putting in a lot of the hard work to get this rollout ready. So what's the rollout plan? First, we wanted to make sure we took a look at all the different things that mungegithub did, and then do those in different ways; this has been going on since about July of last year.
E: We're then going to create a tracking issue so that folks can follow along at home if they want to know what's up. Down here is the latest status: I discussed this in SIG Contributor Experience yesterday, I'm talking to you about it today, and we're going to open up a tracking PR. We tried doing this at 1.11, but couldn't quite get there for a couple of reasons; this is sort of what our effort looked like last time, so expect to see another PR like it afterwards.
E: Finally, I'm going to take this sort of verbal brain dump here, sorry. Next, we took this and we proposed it to the release team. Cole is actually the test-infra person for the release team, so he's been showing up to the burndown meetings and will continue to show up to them on a daily basis. We proposed this to SIG Contributor Experience yesterday, and I highly value their opinions, since they're mostly about the contributor experience of the project, and I'm proposing it to you, the community, here now, to see if this raises any wild red flags.
E: Next steps would be for me to take this brain dump, put it in written form, and send it out to kubernetes-dev to let people know what the plan is; for us to put together a pull request that has all the queries to run against kubernetes/kubernetes master; and then, on Monday, to turn down mungegithub, turn up Tide, and just sort of see what happens. We'll also be at the daily burndown meetings for the release team.
E: That continues next week, and so that gives us a high-touch point to be in touch in case anything goes wrong. If it turns out that this doesn't go very well, we can always roll back by standing the mungegithub instances back up, reverting the Tide query, and reverting any PRs that have accidentally merged. That's all I have; any questions? I apologize if that was super fast, but I was just trying to make sure I didn't blow our schedule here.
D: One small thing that I would add is around "why now, aren't we in code freeze?", and it's precisely that: I feel like we have a slight ability to do an A/B test between what we have today with the submit queue and this, and to do it in a context where, because of code freeze, we have a low expected velocity on merges. So if something goes wrong, there are fewer things that need to be rolled back or adjusted or changed, so we have a little bit of a safety net.
D: Thank you, and apologies for being late. A couple of things to mention on the release update for 1.12: we have, as was kind of mentioned, already entered code freeze; there's a link in the minutes, and if you're not quite sure what you need to do to get code in during code freeze, what you need to do to merge is linked there and in the notes in the agenda.
D: Our goal with CI signal is to understand that when something merges into Kubernetes, it is good; and when we're having underlying hosting instability, that really blurs the signal. It's hard to parse where the problem is: is this a hosting issue, provisioning, or just code that's merging? So, especially during code freeze, we're looking for solid stability and discernability on merges. We do have some help coming online from Google there, but this is a major concern.
D
We
have
two
and
a
half
weeks
to
sort
this
out,
but
we
really
need
additional
eyes
on
this
and
and
with
urgency
as
well.
A
lot
of
these
issues
have
been
around
for
a
few
weeks.
There
are
some
other
things
that
are
risks,
but
our
more
normal
risk
and
less
less
unusual
and
new
versus
this
current
state
with
gke
and
GT,
so
I
think
that's
my
general
update
tonight
I'd
be
curious
here.
F: Okay, so the devstats team, led by LC and with all of the real coding and stuff being done by Lukasz, has been working to clean up and reorganize devstats to make it easier to find the data that you care about, and I wanted to actually show you some of that. If you want to scroll down here: so this is the Kubernetes dashboard; you want to scroll down a little bit.
F: We're going to go ahead, and if you want to click "GitHub stats by repository": one of the things that we did to consolidate is that we took a bunch of related charts and consolidated them into one chart. In this case, we had a bunch of separate charts that tracked different GitHub stats, and instead we now have a single configurable chart for GitHub stats.
F
So,
but
if
you
actually
want
to
scroll
all
the
way
down
to
the
bottom
of
this,
something
else
wanted
to
share
okay.
So
one
of
the
other
things
that
Lucas
has
been
adding
here
is
explanations
for,
what's
in
the
chart
in
case,
you've,
looked
at
some
of
these
and
said
what
data
is
this
actually
showing
us?
Most
of
the
charts
now
have
this
detailed
information
that
includes
sequel,
queries
and
explanations
and
links
to
explanations
for
what
the
various
controls
are.
F
So
I
wanted
to
actually
check
up
on
a
couple
of
things
to
you.
Some
stats
like,
for
example,
we've
had
a
couple
times
in
the
community
meeting,
say
UI
coming
in
here
and
really
asking
for
help
with
dashboard
and
saying
that
they
don't
have
enough
people
contributing
to
it,
and
if
you
actually
look
at
merges
over
the
past
two
years,
you
can
see
that
that
is
indeed
the
case.
F
The
fourth
link,
okay
and
that's
avoid
like,
and
so
the
other
thing
I
actually
wanted
to
see
is
because
I
have
some
involvement
in
storage
and
with
all
of
the
new
storage
drivers
and
activity
and
storage,
and
that
sort
of
thing
that
was
actually
kind
of
curious
as
to
that
meant
for
activity
and
storage
itself,
particularly
I've
been
seeing
a
lot
of
things
in
terms
of
issues
filed,
and
so
this
is
again.
F
This
is
looking
at
issues
opened
and
there's
actually
interesting,
because
we
broken
out
so
the
previous
one
we're
looking
at
was
by
repository
and
so
that
specifically
github
repository,
and
that
was
the
dashboard
repository
and
so
we're
looking
at
stuff.
Just
in
that
repository,
which
you
could
get
off
of
github
itself,
oh,
not
necessarily
that
breakdown
of
it.
F
This
is
repository
groups,
which
is
where
we
try
to
take
the
directories
around
a
specific
Sigler
sub
project
and
combine
them
with
any
repositories,
also
home
and
signature
project,
to
basically
show
all
activity
around
that
area
of
kubernetes.
So
in
this
case
we're
showing
two
one
of
which
is
storage
and
the
other
one
of
which
is
the
related
CSI,
and
what
I
found
interesting
is
that
activity
in
terms
of
new
issues
filed.
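The repository-group idea can be sketched as a simple aggregation; the group membership and issue counts below are made up for illustration, not real devstats data.

```python
# Toy version of devstats "repository groups": counts are summed across
# every repo (or repo subdirectory) belonging to a group, instead of
# being reported per GitHub repository.
repo_groups = {
    "sig-storage": [
        "kubernetes/kubernetes:pkg/volume",      # a directory within a repo
        "kubernetes-csi/external-provisioner",   # a whole related repo
    ],
}

issues_opened = {
    "kubernetes/kubernetes:pkg/volume": 40,
    "kubernetes-csi/external-provisioner": 25,
}

def group_total(group):
    """Sum a per-repo metric over every member of a repository group."""
    return sum(issues_opened.get(repo, 0) for repo in repo_groups[group])

assert group_total("sig-storage") == 65
```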
F: This was actually also true for new PRs when I looked at it: the sort of core SIG Storage stuff has remained more or less constant, but they've gotten a lot of added activity around CSI, you know, since CSI was introduced in December, and then it's just grown from there in terms of new issues filed. It makes a lot of sense, since CSI is now where most of the development is. We haven't actually seen...
G: The SIG charter is still in progress. I thought that was near complete, but I'm still getting comments dribbling in and just got a new one last week, so we're keeping on that. We've noted the new guidance on conducting the SIG's Zoom meetings; our next meeting isn't until Thursday of next week, and we should have the new tightened standards for Zoom administration in place for that meeting. Just a note that these are the schedule for our SIG meetings. So, turning it over to Loc for the cloud provider.
H: This is the vSphere cloud provider, and our goals right now are to have an alpha release aligned with Kubernetes 1.12, a beta release in Q4, and a stable release in 2019. We're currently adding zone support; the code has been implemented, and we're adding tests to it right now. Our future plan is to support vSphere on AWS, with load balancer support, and we're currently adding vcsim testing and integrating vcsim into the CI to fully test the entire provider. We can go to the next page.
H: Another thing that we're doing is the effort to move the vSphere cloud provider out of tree, which is currently underway, and right here we have the URL. We have a weekly public meeting every Wednesday at 9:00 a.m. Pacific Standard Time, and here are the links for the agenda notes and the video playlist. Next page, please. Another effort that we're working on is the vSphere cluster API provider.
G: Thanks, Loc. If anybody has any questions on the cloud provider or the cluster API, jump in now; otherwise, I'll move on to coverage of the CSI plugin for vSphere storage. Okay, so we're aligned with the design doc that was done by SIG Storage; this effort is underway, with an alpha expected sometime in Q4.
G: So, you know, this is a little behind where it looked like it would be back in June, but we're anticipating getting something on the air in the Q3 to Q4 time frame. For general SIG activity on any of these subjects, like the cloud provider, the cluster API, or the CSI driver, we'd love to have your participation, so these are some links where you can come join us. That's it for me; I'll unshare and turn it back over.
I: We're working on scoping a bug bounty. Obviously, this is still just a thing that is not real yet, but we're hoping to make it real. So if you could take a look at this pull request, that would be great: we're kind of trying to make sure that, when bugs come in for the bounty program, we have an easy way, an easy pile to point to, that says this is in scope and this is out of scope.
J: Hi everyone, it's Paris. The election officials this year are Jorge and myself. The update this week is that we have eight days left until the next important deadline: that would be September 14th, for people who are being nominated for the steering committee, as well as folks who would like to vote but are not on the voters.md list. That means you.
J: That is your deadline. If you're not on the voters.md list, for which there's a link in the agenda, you're not eligible to vote; but, again, in the agenda there is a form for you to fill out to get on that list. There's also a bunch of resources in there: the steering committee charter, for instance, who the steering committee is, the election process, as well as the voters' guide.
J: Please look at this. Next week we will be sending our last email announcement before the poll gets sent out, and that announcement will actually be sent to the emails that we have on file for all of the eligible voters. So if we get a kickback or something, that means that we just don't have the correct email address for you. But that's it.
A: Okay, moving on, I've got a public service announcement here real quick. Brandon Philips pointed out a post from the etcd-dev mailing list: they have minimum versions that they would like to recommend for people using Kubernetes, and if you are using a version of etcd below those recommended versions, there are data corruption issues. So please help us get the word out to those of you using those versions of etcd in production.
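Checking a running etcd against a recommended minimum is a simple version comparison; the minimum versions used in this sketch are placeholders, and the authoritative ones are in the etcd-dev post mentioned above.

```python
# Compare a semantic version against a per-minor-release minimum.
def parse(version):
    """Turn "3.2.9" into the comparable tuple (3, 2, 9)."""
    return tuple(int(p) for p in version.split("."))

# Hypothetical per-minor-release minimums (placeholders, not the
# official advisory values).
RECOMMENDED_MIN = {(3, 1): "3.1.11", (3, 2): "3.2.10"}

def needs_upgrade(version):
    """True if this etcd is on a minor release with a known minimum
    and is below that minimum."""
    v = parse(version)
    minimum = RECOMMENDED_MIN.get(v[:2])
    return minimum is not None and v < parse(minimum)

assert needs_upgrade("3.2.9")       # below the placeholder minimum
assert not needs_upgrade("3.2.10")  # at the minimum
```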
A: It's probably a good idea to upgrade, and he's got a lot of tips there, as well as the support cycle for etcd, which runs out at the end of this year, so please check it out. The next item is the contributor role board, which I covered a little bit last week; I just wanted to show it now that we actually have jobs from people being submitted, just real quick here. What this is, basically, is a role board for volunteers and for SIGs to connect with each other.
A
So
if
you
are
a
cig-
and
you
have
roles
that
you
would
like
to
see
filled
as
an
example,
the
release
team
here,
I
just
posted
what
they're
looking
for
for
volunteers.
As
far
as
becoming
part
of
the
release
team
on
this
cycle
and
say
contributor
experience,
we
also
have
a
bunch
of
interesting
roles,
we're
looking
for
Stack,
Overflow,
Shepherd
or
team,
we're
looking
for
a
for
a
fo
hunter
to
go
through
the
kubernetes
commune
rebo
and
help
us
fix
a
bunch
of
stuff.
A
So
the
intent
of
this
is
for
SIG's
to
come
up
with
roles
and
things
for
people
to
do
and
for
contributors
to
also
post
kind
of
their
little
entries.
For
example,
this
is
Tarun
if
you're
looking
for
another
golang
developer
to
help
you
with
your
sig
and
you
could
just
go
through
these
volunteer
available
tags
and
see
people
who
are
available
so
things,
please
think
about
roles
that
and
mentorship
programs
that
you
have
into
your
sink
shadow
roles
are
a
great
way
to
do
this,
where
you
can
on-ramp
somebody.
A
Moving
on
kubernetes
the
one
that
thirteen
release
team
is
forming
related
to
that
post,
so
check
out
the
pull,
requests
there
or
I'm.
Sorry,
the
kid
have
issue
that
is
linked.
That's
issue
number
280.
If
you're
interested
in
joining
the
release
team,
we
have
an
outreach
eCall
for
mentors
and
projects.
This
is
a
post
that
Paris
sent
to
kubernetes.
Does
please
check
it
out
to
make
sure
that
you've
got
all
the
complete
information
for
the
outreach
program
and
shout
outs
US
week?
What
this
is
is
half
shout
outs
on
the
kubernetes
slack.
A: If you see someone going above and beyond the call of duty and you want to give them a nice shout-out, so they get recognized during the community meeting, just go ahead and whack it into the shout-outs Slack channel, and then every week we'll roll it all up. So, Chris Hein would like to thank Jordan Liggitt, Stefan Schimanski (sttts), and DirectXMan12, that's Solly Ross, for helping with support for HPA in the metrics server in EKS.
A: Zac would like to give a shout-out to June Yi, I hope I get that right, and Claudia J. King, for taking him in like family over a long layover in Seoul: "You're amazing; your Korean translations are only matched by your generous hospitality. I love this community even more." That's pretty awesome. Jaice DuMars would like to give a shout-out to Paris and the Meet Our Contributors group mentoring: yet another way Kubernetes is setting the standard for excellence in open source community. Ben Elder would also like to thank Paris; Meet Our Contributors was awesome.
A
This
week,
I
recommend
you
all
check
out
the
YouTube.
We
have
people
asking
questions
for
the
steering
committee
and
had
a
lot
of
great
discussions
there
and
I
like
stream.
Math
thanks
a
lot
Ben
Jim
angel
would
like
to
shout
out
to
just
Augustus
for
helping
us
out
with
wrangling
docs
for
1.12
before
the
freeze
very
awesome.
Work.
Jeff
would
like
to
shout
out
to
me
you're,
taking
over
hosting
duties
for
this
meeting
and
I
assumed.
A
I
would
like
to
do
a
huge
shout
out
to
Tim
pepper
for
being
an
incredible
and
patient
release,
lead,
striving
nonstop
to
heard
issues
pilot
and
generate
a
new
branch
manager,
playbook
and
keeping
all
the
documentation
and
I
like
to
add
just
a
huge
thumbs
up
to
release
him
in
general
Tim
&
Co,
as
you
guys,
wrap
up
y'all,
wrap
up
the
cycle,
great
job
and,
lastly,
there's
a
call
for
demos
for
this
call.
If
you
want
to
do
a
demo
in
the
first
10
minutes
like
we
do.