From YouTube: Kubernetes Community Meeting 20200618
Description
The Kubernetes community meeting is intended to provide a holistic overview of community activities, critical release information, and governance updates. It also provides a forum for discussion of project-level concerns that might need a wider audience than a single special interest group (SIG).
See this page for more information! https://github.com/kubernetes/community/blob/master/events/community-meeting.md
Like what you see here? Continue the conversation on https://discuss.kubernetes.io
A: A next step that I'm really excited about is leaderboards. I would like to have leaderboards that reward contributors for positive behaviors, for instance how fast their code review times are: basically, "here are my 10 fastest code reviewers." But it has to be balanced appropriately with, say, the 10 most comprehensive reviewers, because I don't want to reward people just for saying "yes" the fastest; I also want thoughtful code reviews. Similarly, programmable top-10 lists are kind of the next place.
A
So,
hopefully,
to
get
started,
it
should
only
take
who
maybe
15
minutes
of
your
time.
There
is
an
example
configuration
for
example,
configuration
for
kubernetes
projects
and
an
example
configuration
for
other
random,
open
source
projects.
Kubernetes
has
its
own
prioritization
label
system,
so
I.
That's
why
there's
like
a
specific
one
for
kubernetes,
so
there's
already
example:
llamo
files,
it's
easy
to
get
up
and
running
locally.
Our
documentation
is
pretty
clear.
As
long
as
you
have
docker
or
go,
you
should
be
able
to
get
up
up
and
running
low
Glee.
A: How you actually deploy it to production could get a little bit more complicated, based on how you want to handle data persistence. In case your job gets restarted, as might happen in a Kubernetes cluster or on Cloud Run, you want to be able to persist the GitHub cache somewhere, so we support a lot of options: SQL databases, disk, or just plain memory.
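The persistence options mentioned here (memory, disk, or a SQL database) amount to a pluggable cache interface. Below is a minimal illustrative sketch of that idea; the class and method names are invented for illustration and are not Triage Party's actual API:

```python
# Hypothetical sketch of a pluggable GitHub-cache persistence layer,
# illustrating the memory / disk / SQL options described above.
# Names are invented; this is not Triage Party's real code.
import json
import os
import sqlite3
import tempfile


class MemoryCache:
    """Plain in-memory cache: fast, but lost when the process restarts."""
    def __init__(self):
        self._data = {}

    def set(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)


class DiskCache:
    """Disk-backed cache: survives a restart if the volume persists."""
    def __init__(self, path):
        self._path = path

    def set(self, key, value):
        with open(os.path.join(self._path, key), "w") as f:
            json.dump(value, f)

    def get(self, key):
        try:
            with open(os.path.join(self._path, key)) as f:
                return json.load(f)
        except FileNotFoundError:
            return None


class SQLCache:
    """SQL-backed cache: survives restarts and can be shared."""
    def __init__(self, dsn=":memory:"):
        self._db = sqlite3.connect(dsn)
        self._db.execute(
            "CREATE TABLE IF NOT EXISTS cache (k TEXT PRIMARY KEY, v TEXT)")

    def set(self, key, value):
        self._db.execute("INSERT OR REPLACE INTO cache VALUES (?, ?)",
                         (key, json.dumps(value)))

    def get(self, key):
        row = self._db.execute(
            "SELECT v FROM cache WHERE k = ?", (key,)).fetchone()
        return json.loads(row[0]) if row else None
```

The design choice being described is exactly this kind of swap: memory is simplest but loses the cache on every pod restart, while disk or SQL keeps the GitHub cache warm across restarts.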
A: So I would say probably the biggest effect has been that we do a better job of keeping an open line of communication with our users. When people open issues, that's the opportunity for developers to learn from their users: here are the rough edges, here are the use cases I didn't anticipate. We were basically ignoring a lot of people for a long time; we just didn't see these issues. So I think we definitely have much better integration with the community.
A: Since we've adopted this, for the people who are doing triage it's much less of a painful process. They know they can just go to a web page, see their work for the morning, and do their thing. It has also enabled us to get the community involved with the weekly triage process, and we've seen more of an uptake on that; before, we didn't even bother hosting a triage meeting, because it was so painful to just sit and triage everything weekly.
B: Alright, thank you so much. If you have other questions, we'll have a short question-and-answer at the end of the meeting, but I think it's time for us to move on to our SIG updates. So thank you so much, Thomas; this is really great, and personally this is a very exciting project for me, because I'm a big process nerd. Alright, so first up we have Stephen here.
E: So we are working on actually getting that instance of Triage Party up and running for Kubernetes. We've been talking to the working group, WG K8s Infra, to get the appropriate resources, and I think the last mile, at least to get the thing standing, is to start a new Triage Party user account for GitHub and attach a token to a Kubernetes cluster. So that is coming soon. And come on, Sam, why are we doing this? Okay, all right, it's ridiculous! Okay!
E: So what have we been working on in the last few cycles? It has been a while since we had a chance to meet with you all, but we have been doing a ton of work within SIG Release and the associated subprojects. First off, as you know, we are chartered to do releases: we're responsible for the Kubernetes releases across both the dev cycle and minor releases. So we recently did the 1.16.11, 1.17.7, and 1.18 patch releases.
E: If you are on those, the patch releases include packages that address a CNI plugin CVE, so if you were on previous versions, please work on getting upgraded so you can take advantage of those new packages. As Taylor mentioned, we are aware of some issues with trying to install older packages, as a result of some seeming misconfigurations for debs and RPMs, so the team is working on that. We want to make sure that the fix we bring to you is a comprehensive fix that actually re-enables older versions, and is not just the next in a series of fixes. So stay tuned for more information on that; we will be sending something more publicly as we get more details, so thank you for your patience. In the meantime, the 1.19 release cycle is underway as well; Taylor gave that update a little earlier. Very quickly, I want to address some emails from the 1.19 release cycle the first time around.
E: The tentative release date for 1.19.0 will be August 25th, which, coincidentally, is pretty close: this recent edit was pretty close to the original date that we had planned for the schedule.
E: There are emails out for that, if you want to see specifics and the timeline shifts for that schedule. For the patch releases, we have also laid down a monthly cadence. We have been pretty consistently on a monthly cadence for about a year now, but we are going to start forecasting the release dates even further out, so right now we have about an additional six months noted on the schedule. You should be aware of that schedule if you are not already; that is github.com/kubernetes/sig-release, the patch-releases.md file, and that's linked in the slides.
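As a toy illustration of what forecasting a monthly cadence about six months out might look like (the dates generated here are invented examples, not the real SIG Release schedule; see patch-releases.md for that):

```python
# Toy illustration of forecasting a monthly patch-release cadence
# roughly six months out. Dates are invented, not the real schedule.
from datetime import date


def forecast_patch_dates(start, months=6, day_of_month=15):
    """Return one candidate release date per month, `months` ahead."""
    dates = []
    year, month = start.year, start.month
    for _ in range(months):
        month += 1
        if month > 12:          # roll over into the next year
            month = 1
            year += 1
        dates.append(date(year, month, day_of_month))
    return dates


schedule = forecast_patch_dates(date(2020, 6, 17))
for d in schedule:
    print(d.isoformat())        # 2020-07-15 through 2020-12-15
```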
E: We've also been working more tightly with the Product Security Committee on CVE handling. There are some process updates that both teams are working on, and documentation that's already starting to roll out for that, so you should probably see more announcements around that coming soon. So, hooray for teamwork and sustainability. Overall, as I mentioned, we did the extension of the 1.19 release cycle. We are always excited to see the number of people who flow through the release team; it's one of my favorite subprojects.
E: It's a huge opportunity for new contributors, when the shadow applications are open, to get involved in the community in a really deep way. We've seen over 60 community members, some new, some current, cycle through the release team over the last few cycles. We are working on activating subprojects on a more official level for CI signal, as well as triage, as we had mentioned before, to ensure continuity across time for these project teams.
E: I think that we build a body of knowledge over time with these newer contributors and these continuing contributors on the release team, and we need to ensure that the knowledge we build is sustainable, so it can be used across the release team as well as across the project overall. CI signal and bug triage are the two areas that we've identified. On the CI signal side, I will give you some personnel updates towards the end of this, but for the bug triage side:
E: Coming soon, we have established a contributor ladder for branch managers, which actually was recently rebranded. The branch manager and patch release teams that you may be familiar with have the same personnel, but they are now called release managers. So we have the release managers group, which is composed of the release managers and the release manager associates, which, if you have been around the release team previously, were the branch manager shadows.
E: What we're essentially trying to do is establish this contributor ladder for people who are interested in release engineering and specifically have had time on the release team, so they are familiar with our processes, bringing them a little closer to how we build the tools and how we execute on the releases themselves.
E
We've
been
working
on
iteratively
transitioning
from
the
Bosch
fire
that
we
all
know
and
love
an
algo
and
the
GCD
manager
and
the
surrounding
tools,
so
tools
that
were
difficult
to
maintain,
because
because
we
had
less
active
contributors
and
authors
around
that
and
a
lot
less
people
willing
to
dive
into
at
my
last
count,
probably
around
five
six
thousand
lines
of
Bosch
that
handles
the
kubernetes
releases.
So
we've
been
working
on
transitioning
that
tooling
over
to
go
and
we've
had
a
lot
of
success.
E: In the last few cycles, we started off with the branch fast-forward tool. We rewrote gcbmgr, which is essentially a wrapper for GCB substitutions and build submissions, in Go. We are starting to break out the bash libraries that we have been using for the last few years in the project into discrete Go packages for reuse in anago v2.
E: We'll send an email out later with more in-depth SIG Release news and the plans for the upcoming cycles. We need continued community discussion on the 1.20 and 2021 timelines, so you'll see emails from us come out around that. We're going to be working on charter updates and leadership refreshes for each of our team's subprojects and co-chairs.
E: We're continuing the work that we've been doing on refactoring and sustainability for our tooling, as well as our processes. So check out the KEP around annual support, which we've sent a few emails about previously. We have activated annual support from release 1.19 forward, so we want to talk about the retroactive enablement of annual support for 1.16, 1.17, and 1.18, and how to do that.
G: Thanks, Larry. Give me a second to share my screen. Okay, can everybody see the slide deck? I'll take that as an affirmative.
G: Okay, so the major things that we're working on in SIG Apps: in collaboration with SIG Storage, we've been looking at what we can do to make the PVC resizing feature, which has been implemented in SIG Storage in the PVC and volume controllers and the kubelet, work better with StatefulSet, because as it stands right now the volume claim template for a StatefulSet is immutable, meaning there's no way to do a kind of graceful PVC update. That work is in flight; I'm kind of shepherding it, and a member of SIG Storage is working on updating the KEP and pushing the implementation forward.
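The immutability constraint described here is what blocks a graceful resize: the API server rejects any change to `spec.volumeClaimTemplates` on update. A toy sketch of that validation, with plain dicts standing in for the real API objects (this loosely mirrors, but is not, the apiserver's code):

```python
# Toy illustration of why StatefulSet PVC resizing is awkward:
# spec.volumeClaimTemplates is immutable, so an update that changes
# the template's storage request is rejected. Simplified model only.
import copy


def validate_statefulset_update(old, new):
    """Reject updates to immutable StatefulSet fields (simplified)."""
    if old["spec"]["volumeClaimTemplates"] != new["spec"]["volumeClaimTemplates"]:
        raise ValueError(
            "updates to statefulset spec for fields other than 'replicas', "
            "'template', and 'updateStrategy' are forbidden")


sts = {
    "spec": {
        "replicas": 3,
        "volumeClaimTemplates": [
            {"metadata": {"name": "data"},
             "spec": {"resources": {"requests": {"storage": "10Gi"}}}}
        ],
    }
}

# Attempting to grow the template from 10Gi to 20Gi in place fails.
resized = copy.deepcopy(sts)
resized["spec"]["volumeClaimTemplates"][0]["spec"]["resources"]["requests"]["storage"] = "20Gi"

try:
    validate_statefulset_update(sts, resized)
except ValueError as e:
    print("rejected:", e)
```

Scaling `replicas` alone passes this check, which is why day-to-day StatefulSet operations work fine but a template-level resize needs the KEP work being described.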
G: Last week, another member of SIG Storage wanted to discuss the StatefulSet controller and PVC deletion. We purposely designed StatefulSet originally to not delete PVCs: basically, if you delete a StatefulSet, there are many use cases for structured storage workloads where you'd want to leave the block storage backing it (an EBS volume, Google persistent disk, or Azure disk) available even post cluster deletion, and that was the more common use case.
G: If you're familiar with how the Deployment controller works, you can actually surge the number of concurrent pod creations, and in order to accelerate the rollout of things like administrative workloads (think CNI plugins or CSI plugins) and make them able to update large clusters in a timely manner, we wanted to consider adding the maxSurge capability there.
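The surge idea being described can be sketched as a toy simulation: the controller may run up to (desired + maxSurge) pods at once, so replacements come up before old pods are torn down and availability never dips. This is a simplified model of the idea, not the actual Kubernetes controller logic:

```python
# Simplified simulation of a maxSurge-style rolling update: new pods
# are created before old ones are deleted, bounded by desired+max_surge.
# A toy model of the concept, not real controller code.
def rolling_update(desired, max_surge):
    """Yield (old_pods, new_pods) states while replacing old with new."""
    old, new = desired, 0
    states = [(old, new)]
    while old > 0:
        # Surge: create new pods up to the allowed ceiling.
        creatable = min(desired + max_surge - (old + new), desired - new)
        new += creatable
        # Once a new pod is ready, an old one can be deleted.
        deletable = min(creatable, old) if creatable else old
        old -= deletable
        states.append((old, new))
    return states


# With max_surge=2 the total never drops below the desired count of 5.
history = rolling_update(desired=5, max_surge=2)
assert all(o + n >= 5 for o, n in history)
print(history)  # [(5, 0), (3, 2), (1, 4), (0, 5)]
```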
G: For PDB we're still making progress onward to GA, and we hope to hit the deadlines that we are under. One of the prerequisites that we decided on for moving CronJob up to GA was integrating shared informers into CronJob and Job. The batch workload controllers in general work quite differently from the other workload controllers, so there's kind of a very large moat to jump over to be able to contribute there successfully.
G: The maintainer of that API is currently working with someone to try to get us there, but there is significant risk that we may not be able to get CronJob to GA and still be in compliance with the API promotion guidelines, which would mean that we would have to promote the existing API to v1beta2, something we'd really like to avoid, because making people upgrade to something that isn't more stable and doesn't provide more is not the best outcome for our users.
G: It was merged, work began on it, and as of last release we had an alpha ready to go out, but as we were trying to merge, SIG Node declined to take it at that point, saying that they weren't confident of the maintainership of the contribution at that time. Let's go to how this affects you. This doesn't affect core contributors as much, but in particular the Istio project, which is not CNCF, and the Linkerd project, which is CNCF, were depending on sidecars to make their sidecar injection work at a high level.
G: Both of these projects provide a service mesh, and both of them use mutating webhooks to inject a sidecar containing a proxy into all pods that are labeled to be injected, at pod creation time.
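The injection mechanism described here can be sketched roughly: the mutating webhook receives an AdmissionReview for the pod and responds with a base64-encoded JSONPatch that appends the proxy container. The label, container name, and image below are placeholders for illustration, not Istio's or Linkerd's actual injection logic:

```python
# Rough sketch of the JSONPatch a sidecar-injecting mutating webhook
# might return, appending a proxy container to a labeled pod.
# Label/name/image are placeholders, not any real mesh's behavior.
import base64
import json


def build_injection_patch(pod):
    """Return a base64-encoded JSONPatch adding a sidecar, or None."""
    labels = pod.get("metadata", {}).get("labels", {})
    if labels.get("sidecar-injection") != "enabled":  # hypothetical label
        return None
    patch = [{
        "op": "add",
        "path": "/spec/containers/-",  # append to the containers array
        "value": {"name": "proxy-sidecar",
                  "image": "example.com/proxy:latest"},
    }]
    # AdmissionReview responses carry the patch base64-encoded.
    return base64.b64encode(json.dumps(patch).encode()).decode()


pod = {"metadata": {"labels": {"sidecar-injection": "enabled"}},
       "spec": {"containers": [{"name": "app",
                                "image": "example.com/app:1.0"}]}}
encoded = build_injection_patch(pod)
print(json.loads(base64.b64decode(encoded)))
```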
There are a couple of problems there. One, during pod lifecycle termination, the kubelet will occasionally basically rip the network out from under the pod, which is not particularly graceful and leads to disruption. And two, with any workload that doesn't run forever: basically Job, CronJob, anything that runs to completion.
G: That would be great, and we need to just get in touch with SIG Node to figure out what the path forward is for sidecars. Here's the contact info: Janet, myself, Matt, and Adnan are all the chairs. We meet on Mondays at 9:00 a.m. Pacific time. We do a bug scrub and issue scrub, generally speaking, every meeting, and we review open KEPs and open issues and talk about the direction of the workloads API. So, thanks.
I: The project recently changed things up, and we enjoy issuing CVEs for this little project; that's Common Vulnerabilities and Exposures, or enumerations, I can't quite remember. Okay, and then Craig and I are both going to be stepping up as full-time core team members, bringing us back up to seven members, which is where we have set the goal. Also, someone you might recognize from SIG Auth is going to be joining us as a new associate member. Now, more about what we do.
I: There are two ways that we ingest vulnerabilities. The first is through email to security@kubernetes.io; that's historically the way we've gotten vulnerability reports.
I: Back in January, we launched a bug bounty program, which is now where we're getting the majority of our reports from, and kind of where we're steering new users to report vulnerabilities. We have an on-call rotation, one week on call, from that core team that I just mentioned, and when on call, what we tend to key in on is vulnerability reports.
I: Actually, if it's coming in through HackerOne, HackerOne has an initial triage team that helps us filter out a lot of the noise and the kind of low-quality or irrelevant reports. By the time it gets to our team, it's a pretty high-quality report that we do some initial triage on to see if it seems valid. Sometimes there are things that are kind of security-relevant but working as intended, or sometimes it's a user education issue where we're like, okay, we need to improve our documentation, but it's not really a vulnerability.
I: We then try to understand the impact, to make sure we know how we really need to handle the vulnerability. This is where we start to pull in some of the trusted domain experts from the broader community, then set the severity based on that, and that helps us decide how we want to proceed with actually addressing the vulnerability. We now have better internal issue tracking; we had a few things kind of slip through the cracks when we were managing everything by email, and so we've made some big improvements there.
I: So we try to coordinate with them. For HackerOne reports we issue a bounty, going all the way up to, I think, $10,000 for a critical vulnerability in core Kubernetes, and then we kick off the incident response process, which I'm not going to go into here, but we do have all of this documented in the community security repo; you'll find a link at the bottom there. So, I mentioned that the bug bounty launched in January, and we've been really happy with it so far.
I: A lot of these submissions are closed as duplicates, or invalid, or out of scope; only a few end up valid. You can't quite tell from this graph, but the valid submissions are actually holding fairly constant, so we're getting a much better signal-to-noise ratio. And here is just another view of some of that data.
I: We're getting approximately three valid reports a month. I'm actually not sure if this includes a bunch of issues that are in the pipeline, so I'm not sure exactly how this is counted, but yeah, we have maybe approximately three reports a month that we have to deal with, and they keep us busy to varying degrees.
I: So we have a distributors announce list that gets pre-release vulnerability information. This is basically for companies and projects that run Kubernetes on behalf of other people, and we give those distributors of Kubernetes kind of a heads-up so they can patch, or be ready to patch, when the vulnerability goes public. Historically, we've done a fairly good job with critical and high vulnerabilities, but not as good with the low and medium severity vulnerabilities.
I: A certain amount of what we do needs to be private by necessity, but we're really trying to get much better community involvement around the things that don't need to be private: improvements to our process, how we communicate, trying to get more feedback on what we can do to improve our communication and process publicly, and also sometimes the follow-ups to vulnerabilities, which happen in the public.
I: So if this is something that you'd be interested in, we have a bunch of issues tagged with Help Wanted, and there's the first issue in the security repo; that's the top link. We're definitely looking for feedback on the whole process, and there's the kubernetes-security-discuss group that we monitor. If you're interested in actually bug hunting: even if you're a regular contributor to Kubernetes, you can still be rewarded for reporting a vulnerability, unless you're on the Product Security Committee. For everyone else, hackerone.com/kubernetes is where you can find the bug bounty.
H: So, a little bit about what we've been up to in SIG Architecture. There are actually a number of subprojects of SIG Architecture, and each one is relatively active; we have a lot of things going on. We have five now: we added a new subproject recently. With the retirement of SIG PM, we took on its enhancements subproject, and that subproject is to manage the KEP process and make improvements to that process, and we're adding some tooling and automation around that.
H: We also introduced this production readiness review process, and I'm sure a number of people have brushed up against it. Essentially, the idea is that when you are bringing a feature to alpha, to beta, or to GA, there are different levels of production readiness we would expect out of that feature, and we want to make sure that all the bases are covered.
H: So you can put images in there with your design, and the metadata that was at the top of the markdown file is now a separate YAML file, which is going to help us with our tooling and automation. As for two other subprojects, we have code organization and API reviews; those are ongoing processes of managing dependencies and making sure that new APIs all adhere to our conventions.
H: Lastly, what we've done recently: along with SIG CLI and SIG API Machinery, there's a new working group that is working on additional documentation and improvements to how things are documented. That's going on in 1.19, and the features there, I believe, are all implemented now, to improve clarity.
H: As I mentioned earlier, continuing on long-running processes in conformance: we've been working for quite some time now on reworking the way the conformance tests work, so that we can separate out how Kubernetes should behave, that is, what defines what Kubernetes is and the behaviors a component exhibits, from the tests that actually evaluate a given cluster. This new process also lays the groundwork for something else we're working on, so check out those files.
H: Right now, the only things that are subject to conformance are required features, features that have to be there in any cluster that's running. That means things like RBAC, which is actually not required, are not subject to conformance. However, we really want RBAC to function the same way across different vendors' instances of Kubernetes: in order for workloads to be portable, and for manifests to be portable, we need that to work across different vendors in the same way. So, the idea of profiles:
H: A profile is a way to capture a set of conforming behaviors that are maybe optional in a given cluster, but if you implement them in that cluster, they should behave in this way. And finally, the production readiness subproject, like I mentioned earlier: that's a new process where you fill those things in in the KEP. In 1.19 we introduced that as strongly recommended, but not absolutely required, and if that all goes well, then in 1.20 we expect it for things graduating, e.g., to or within beta, so that we would want this as another type of review.
H: The other thing is, for conformance, since we're changing a little bit the way it works, before long we'll be reaching out to SIGs to help review the behaviors that we've defined and to define new behaviors that we, not being subject matter experts in every SIG, don't know how to define. So we'll be reaching out.
B: All right, thank you, and I'm not sure we'll have time for too many questions, but what I can do right now is thank all of our presenters. You did great; really interesting stuff that we're doing. In terms of announcements, we've got a new Twitter feed; please follow it, it's @K8sContributors. An official announcement will be on the kubernetes-dev mailing list.
B: Next month we have a host who's a new contributor, Sushmi; she is a product manager and she's been getting involved in some of the projects for the enhancements group. We are always looking for new contributors to host this meeting. I think we have a few lined up for the next couple of months, but please ping us in SIG ContribEx.