From YouTube: Kubernetes Community Meeting 20180412
Description
See this page for more information! https://github.com/kubernetes/community/blob/master/events/community-meeting.md
A
Okay, you're all set, go ahead.

Okay, thank you. Good morning, or afternoon, or evening, or middle of the night, depending on your time zone. Welcome to the April 12th Kubernetes community meeting. As a reminder, this meeting is being recorded, so any questions you ask, etc., will be recorded as part of the meeting archive.
A
I want to thank Clint for volunteering to take notes for this, so that we have them for posterity, and we'll be thanking other people at the end of the meeting. As usual, we are starting out with a demo. Today's demo is Antonio Murdaca demonstrating CRI-O, the runtime for Kubernetes, so he will go ahead and start that demonstration. Antonio, you want to take it away?
B
So, okay, I'm gonna talk about CRI-O. I don't know how many people have already heard about it. For those who haven't: it's basically a new container runtime for Kube. It's actually a Kubernetes incubator project, and it's stable, or at least we say it's stable. We're actually supporting Kube 1.10 and 1.9, and we are actively working on tracking Kubernetes master to track all the changes in the CRI. We use runc under the hood, even if we can actually run containers with other OCI runtimes.
B
Okay, so I'm going to show you, as I said. This Kube cluster is running with CRI-O, and you can see, after I set up the cluster with local-up-cluster, I actually have kube-dns, and I deployed the dashboard as well. All of this is actually running with CRI-O and runc. You can see I'm not running any other container runtime.
B
I already showed you that every container is actually running under runc. For talking to CRI-O without Kubernetes, we also have support for a tool called crictl, which is another Kube project as well, and this is actually working fine with CRI-O. You can see that every container from every pod I have on this node and cluster is actually running.
B
Pods are showing up as well, and likewise images. And yeah, as I said, we're supporting Kubernetes 1.10 and 1.9, and we're actively tracking Kubernetes master as well. We do support deploying CRI-O as a container runtime for Kube with kubeadm, and we also have support for Minikube.
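Antonio's crictl demo maps onto a few commands. This is a rough sketch, assuming CRI-O's default socket path; the flag names follow crictl's documented interface rather than anything shown on screen in the recording.

```shell
# Point crictl at the CRI-O socket (path is an assumption; it is
# configurable and may differ on your distribution).
export CONTAINER_RUNTIME_ENDPOINT=unix:///var/run/crio/crio.sock

crictl pods    # list pod sandboxes known to CRI-O
crictl ps -a   # list containers from every pod on this node
crictl images  # list pulled images
```

The same commands work against any CRI implementation, which is the point of crictl: it speaks the CRI directly, without going through the kubelet.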
G
I guess one last question might be: is SIG Node planning to have a default for the 1.11 release? Usually they curate and handpick a given container runtime that gets all the tests, and that's the one that is recommended for a given release cycle. Are the SIG Node folks going to first-class this in any reasonable time?
H
Take a look at the link that I dropped in chat. My understanding is SIG Node is working on a CRI conformance thing to address the multiple-CRI-implementations issue, and CRI-O is just one CRI implementation. So I think you would need to make sure that whatever implementation you choose as the default is going to pass those tests before we move on to blocking the entire Kubernetes release on a default CRI implementation. Does that make sense? And I'm totally guessing here.
H
So what I'm trying to understand is: what test data do we have on CRI-O right now? I heard you mention that all the end-to-end tests are run with CRI-O today. I'm trying to go look for those results on testgrid. Do you have a link handy that I can look at, to see how many of them are passing versus failing over time? Okay, so chat looks like it's stuck at loading.
B
So, okay, we are running every end-to-end test in a CI of our own right now. We actually plan to publish the results of each of these runs to the Kubernetes testgrid upstream, so that everyone can take a look at that and see, all right, this is stable, if you will, or not. I'll actually provide you with a link to results where we're actually running those end-to-end tests and node e2e tests.
F
Yeah, absolutely, and we were kind of blocked on one Kube dashboard issue which was preventing us from publishing those results, so we were working on that. But the current status is: for every PR that goes into CRI-O, we run the entire cluster end-to-end, and only if it passes do we merge the PR. And we'll be putting that onto testgrid as soon as possible.
A
We need to move on to the rest of the schedule for the meeting. I recommend anybody wanting to follow up on CRI-O and its status within Kubernetes follow up in SIG Node or on the CRI-O GitHub project, one way or the other. So thank you very much, Antonio. If you can add links to any additional resources to the notes, that would be appreciated.
A
Okay, so next up is me. I am changing hats to 1.11 release lead to talk about the 1.11 release. We are currently in week 2 of 12 for the 1.11 release cycle. This means that we are currently mostly in feature-collection mode. Plus, the rest of the team is hard at work trying to improve automation and procedures for later in the release cycle, to make things go more smoothly, which I'll talk about in a minute. But first, you wanted to talk about feature requests?
D
Just a brief update and brief requests from me and from the secondary lead for features in this release. We're currently in the process of collecting features. So if you're developing something new, or updating an existing feature targeted at 1.11, please ensure that your feature is tracked in the features repo. Also, please don't forget about the feature tracking spreadsheet, where we are collecting all the features that are targeted at this release.
A
Thank you. Now, we discussed one other thing that would actually be a change to the schedule. A couple of people have pointed out that, as we've gone on in Kubernetes releases, code freeze has gotten longer. The reason for that is that it has gotten, over the various Kubernetes releases, harder and harder to get a clear signal with all tests passing and everything stable, and so our response has been to lengthen code freeze to a reasonable period to make that happen.
A
Then we will delay code slush and code freeze in order to make code freeze a week shorter and give people more time to work on development work.
A
I will send out an announcement about this with the schedule, and update the release document, tomorrow on kubernetes-dev. The idea is to say: hey, the whole reason why code freeze has been getting longer is because we're using the code freeze period to make Kubernetes stable and make the tests pass. If the tests are passing all the time, then code freeze could become as short as, like, seven working days.
A
That's the goal we're aiming towards in the future, and this is really conditional: if that works out (and this will be up to the 1.12 release lead, whoever that is), then 1.12 will actually schedule code freeze to be shorter, which would be awesome. So look for an announcement on kubernetes-dev and an update to the official schedule regarding that, and if you have any questions or suggestions regarding that, please take them to SIG Release, either the mailing list or the Slack channel.
H
So, Josh, can I ask: which tests specifically are you talking about? If I were to go to testgrid and look at a dashboard, you see right now the column for "begin code freeze" says we're going to be looking at release-master-blocking, master-upgrade, and I think 1.11-blocking. Do we need to make sure all three of those dashboards are passing, or is it just 1.11-blocking, or master-blocking? Which is it?
J
Can you hear me now? Maybe? Yes? Oh, cool. So I started working on the release earlier today. There are some issues with some tests, like the scalability issue, which I love, SIG Scalability. So there are some last-minute issues. It seems we already have, in fact, fixed all of them; I'm just waiting for some more green runs, so probably I'll release it later today. Failing that, it may slip to tomorrow, but I hope not.
H
I'm sorry, I was typing something. Okay, hi, I'm Aaron Crickenberger, with this week's graph of the week. These links are gonna be in the meeting notes, which I will share now so you can see them. So the graph of the week I want to talk about this week is PRs by label by repository group. Technically, what this means is: how many pull requests have a given label applied over time?
H
What we're looking at right now is the kubernetes repository group, which is just kubernetes/kubernetes, and pull requests that have the needs-rebase label applied. Your pull request is going to get this applied if you need to rebase your pull request to resolve a merge conflict. So generally, the more of these pull requests that are going up over time, the more that authors are being unresponsive and have potentially abandoned their pull requests. This is something that we, as reviewers and approvers and the community, can't really do anything about.
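From the author's side, clearing that label boils down to a rebase. This is a minimal sketch; the remote name `upstream` and the branch name `my-feature` are assumptions for illustration, not anything named in the meeting.

```shell
# Rebase your PR branch onto the current upstream default branch
# to resolve the merge conflict that triggered needs-rebase.
git fetch upstream                      # upstream = kubernetes/kubernetes
git rebase upstream/master              # replay your commits; fix any conflicts
git push --force-with-lease origin my-feature
# Once the PR no longer conflicts, the bot removes the needs-rebase label.
```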
H
This is something you as an author need to be responsible for. So that's what this graph is: the selected label for the selected repository group. The next graph shows this across all of the repository groups. So this is just what kubernetes looks like, and if I get rid of the all-repos line and the kubernetes line, this is what the rest of those repository groups look like with the needs-rebase label applied.
H
Historically, you can see needs-rebase was only applied to, like, a couple of repo groups, and we've since moved that from a mungegithub plugin to a Prow plugin. With Prow it's possible for us to enable things for all repos across the board, or multiple repos, much more easily. So that's why you're starting to see this pop up in other places. Because although GitHub's UI will show you, like, a little merge-conflict thing for pull requests that have this, which I guess I can show you...
H
It doesn't alert you. So, for those of you who rely on email alerts to know when something has happened to your pull request and you need to do something about it, the application of the label will give you that. That's why we apply the label. The label can also be searched for, so pull requests that have merge conflicts can be easily searched for. This next graph here shows, I believe, all of the given labels for the repository. So you can see here, if I get rid of the all-labels-combined line...
H
So, just looking at individual labels, if I get rid of the needs-rebase label, here's sort of what the rest of these labels are. What do all of these labels mean? Well, I don't know how readable this is, but you can sort of see a lot of them are prefixed with the words "do-not-merge". There's also "cla: no" and "cncf-cla: no". Basically, these are all things that the pull request author has to do something about to make the pull request mergeable.
H
So this is all the steps beyond just reviewing and approving the PR. For an exact definition of what these mean, go to k8s.io/github-labels and read about them there. We're gonna try and get better about sort of linking across to these things, but right now you can see do-not-merge/blocked-paths: this tells you what that means, and which plugin applies it (the blockade plugin is responsible for this), and clicking...
H
This should take you to the source code for that. Okay, back to the dev stats thing. Let's see, so one other thing I will show is this wonderful needs-ok-to-test URL. I think this was put together by Jace, but I call this the Dims URL, because Dims is a superhero and generally uses this query to go figure out...
H
What pull requests need somebody from the kubernetes GitHub organization to go by and drop an ok-to-test. So, for example, this ingress-gce pull request has it applied; I'm going to take a look at it. Usually these pull requests are coming from first-time contributors. People who aren't members of the organization will always get this applied, because we just want to make sure that they're not trying to pull-request in some sort of Bitcoin miner or something crazy like that. So you can usually do a very quick, cursory glance. In this case...
H
I can see it's obviously only touching markdown files, so I'll comment "/ok-to-test". The bots will remove that label (label's gone), and so now CI can do its thing, and maybe a reviewer or approver is more likely to take a look at this. So, anybody who's a member: if you want to go help out new contributors, you can go use this needs-ok-to-test query.
H
You can do the GitHub query yourself. This graph initially just showed needs-rebase, but I can use it to go say how many pull requests have needs-ok-to-test, and we can sort of see whether or not we're making a difference on this. So if I go, like, over the last six months, you can see we have roughly seventy pull requests that any of you could go help push through the process, if you'd like to. I think that's basically all I have to talk about today. Any questions?
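The query Aaron describes can be reconstructed as a GitHub search expression. The label name comes from the talk; the other qualifiers are assumptions about GitHub search syntax rather than a copy of the dashboard's canonical URL.

```shell
# Build a GitHub search query for open PRs still waiting on an
# ok-to-test from an org member (qualifier syntax is an assumption).
ORG="kubernetes"
LABEL="needs-ok-to-test"
QUERY="org:${ORG} is:pr is:open label:${LABEL}"
echo "${QUERY}"
```

Paste the resulting string into GitHub's search box (or the `q=` parameter of a search URL) to get the same list of PRs the graph counts.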
H
If we felt we needed to do that same optimization down the line, of using something like Bazel or regular expressions to identify pull requests that only touch files that don't update any code, we could streamline the process for those. But we'd like to try and get back to a world where every pull request goes through the exact same workflow.
H
Okay, so the way you don't get needs-ok-to-test applied to your pull request is you ask to become a member of the kubernetes org. I can go show you the steps to do that real quick. Some people think trying to become a member of the organization is this big scary thing. Basically, it just kind of requires a little bit of trust and a little bit of human vetting.
H
So, if I go to the community repo and I click on community membership: new contributors can just sort of do a lot of things, like apply labels or comment on issues or open pull requests, and have some of these safeguards put in place. But if you know you're gonna be working on the project for a while, you can make sure that you've got two-factor auth enabled on your account and send an email to this mailing list.
H
Saying you'd like to join the kubernetes org, and, you know, show some of the work you've done: demonstrate intent, demonstrate dedication to the project, and we'll totally help you out. There used to be something in here about how you have to have contributed at least five pull requests, or something like that. We're not really looking for that; we're just looking for a little bit of trust, a little bit of dedication. So I super welcome anybody who's interested in becoming a member to go through this process.
K
Okay, so this is the status report. This SIG just got started; in fact, we're going to have the second meeting in the hour after this meeting, but we did have our first one. I think we've got this largely in place: we've got the charter defined, and it has a presence on GitHub. The first Zoom meeting was two weeks ago, and it's repeating on two-week cycles. We had 11 people show up for that first meeting, but I'm hoping it grows beyond that.
K
The meeting agenda and notes are implemented as a Google Doc shared to the Google Group associated with the SIG, which I think seems to be the convention in the other SIGs I've been a member of. The Google Group membership is up to 31 members. We've only had three messages exchanged so far, but it's new, so I'm expecting that to grow too.
K
The sig-vmware channel on the Kubernetes Slack has 50 members following it right now, with about 20 messages. The intent of this SIG is in the charter, but as a recap: it's to support Kubernetes users and prospective users who are trying to deploy in an enterprise environment at scale on VMware or VMware-related platforms, and it's also intended to be a group to focus project architecture, integration, and development work related to the VMware cloud provider.
L
Okay, excellent. So, SIG Windows: ever since we released beta with 1.9, we have been super busy in terms of getting a lot more people to start using our technology and deploy Windows Server containers on Kubernetes. We've gotten a lot of bugs as well, which we're working hard to fix. But a few things I want to point out from the most recent work that we were able to deliver: we added support for container CPU resources, so you can put resource controls on your containers on Windows.
L
Additionally, we have engaged with the larger team, and we started writing end-to-end CI/CD automation and tests for SIG Windows. That will allow us to prevent regressions and catch issues early on that come up as part of testing, and we aim to kind of start finalizing some of that infrastructure within the next month or so. Overall, we're making good progress, and we're hoping to go to GA at or near when the next Windows Server release ships. There are some additional features in Windows that we're waiting for, and then we'll be able to get to GA.
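The container CPU resource control Michael mentions is expressed in the pod spec. This is a hypothetical example: the pod name, image tag, and node-selector label key are illustrative assumptions (the OS label key has changed across Kubernetes versions), not details from the talk.

```shell
# Apply a pod spec that caps a Windows container at one CPU.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: win-cpu-demo
spec:
  nodeSelector:
    beta.kubernetes.io/os: windows      # label key varies by version
  containers:
  - name: servercore
    image: mcr.microsoft.com/windows/servercore:ltsc2019
    resources:
      limits:
        cpu: "1"                        # enforce the CPU cap on Windows
EOF
```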
A
Okay, well, that was speedy. So thank you very much, Michael. Thank you! So the next one up on the schedule is on-prem, and nobody from that working group could be here. They just wanted me to make the announcement for them that WG On-Prem, which was originally SIG On-Prem, has demoted itself to a working group. They're mainly now just an area forum for discussion for people with on-prem deployments, and not a formal SIG.
H
I can speak to that part real briefly. First off, usually you'd be able to go look at the steering committee meeting recording right now to go see what we talked about. You can also take a look at the meeting agenda, but the recording is not up yet because the person in charge of recording is on vacation. So take a look at our agenda to see specifically what we discussed, but loosely, we're trying to clarify...
H
What a working group is, when you want one, what the rules for them are, etc., etc. We expect to have more concrete answers about that after we meet next in two weeks. Also look for more clarification about what a sub-project is versus a working group, and we'll do a tour of what those are and why you might be more interested in talking about sub-projects.
A
First of all, I want to start the announcements with the usual shout-outs. As a reminder, if you have anybody who you want to shout out for doing good things for the Kubernetes community, please drop their name in the shout-outs Slack channel or email Jorge. So, first, a shout-out to the mentoring class for graduating, including Robin Percy and the other graduates, and to Paris for organizing the whole group mentoring program that allows us to train people and bring them up in the community.
A
A shout-out to Patel for providing clarity around the node autoscaler. Not sure what that means; I have a feeling there was a documentation effort involved. So that's it for shout-outs. Next announcement: the cherry-pick auto-approve munger. That looks like an Aaron item, even though there's no name on it. Yeah?
H
How do you know? That's totally me. So I just sent an email to kubernetes-dev about deactivating this. If the words "cherry-pick auto-approve" sound really scary or confusing to you, you're not alone. This is a sentiment expressed by most patch release and branch release managers that I have talked to; we've all collectively forgotten that this munger exists. It is something that would magically apply the cherrypick-approved label to cherry-pick pull requests if certain labels were applied to the pull request that was cherry-picked from.
H
If that's all confusing, don't worry. We've collectively evolved to a process where, for pull requests to merge into release branches, the release branch manager must manually apply a label called cherrypick-approved. So we're going to just keep using that process and turn off the bot that occasionally, magically applied that label in a confusing or surprising way.
If you have any wild objections about this, please respond to the email that I sent out to kubernetes-dev. I have cc'd all of the release and patch release managers.
H
They're cc'd on the pull request in question, and I notified SIG Release and SIG Contributor Experience. We are also looking to improve both the documentation and the experience of the cherry-pick process as part of the 1.11 process, but that's down the line. For now we're just trying to call it what it is. Any questions?
E
I just wanted to quickly tell people what we were up to. In addition to the workgroup stuff that Aaron mentioned, we're trying to help SIGs create their charters. So there are a couple of aspects to that, but one is explaining to SIGs this new, or at least newly explicit, concept of sub-projects. Many SIGs have a number of sub-projects underway, which sometimes have overlapping groups of contributors and sometimes not. An example would be SIG Cluster Lifecycle, which has many tools like kubeadm, kops, kube-aws, Kubespray, and others.
E
So each of those is kind of a distinct sub-project, and most SIGs have multiple sub-projects. So that's something we're trying to formalize. We've split the SIGs up amongst the different steering committee members, and those members will be working with each SIG on their charters. Right now there are about a half dozen charters in flight, so we're gonna focus on those first, as we flesh out the details of what a charter should look like, and then we'll take it to the other SIGs. That's it.
A
Thank you. So, other things: we had an Ask Me Anything on Reddit this last week. A lot of people participated, a lot of questions, a lot of user feedback. Some of the questions that were asked there actually have relevance for how we organize the project, so take a look at the questions and the answers there. And thanks so much to the team of folks, including Timothy and Chris and several others, for fielding questions there. Whoever put this on the agenda said we'll likely do more of these in the future.
M
Sorry, can you see that screen? Yes? Can you hear me? Yep? Great, okay. So, hi, I'm Dan from CNCF, and we built a tool that we think folks may find helpful, called the Cloud Native Interactive Landscape. Now, folks may have seen the static version of this in the past. It's all available on GitHub, and I will just show it right here on the screen.
M
This is a link, and you can see that their latest commit was this week, which is just an incredible level of devotion and commitment. You can sort open source projects by stars, or see offerings from China, and so it's really designed as a tool that's helpful for investigating the space. But it's also meant as a crowdsourcing platform, and so if you see bugs in it, please open a pull request, and if there are projects that are missing, we have details here on the top on how to add them in.