From YouTube: Kubernetes Community Meeting 20200820
Description
The Kubernetes community meeting is intended to provide a holistic overview of community activities, critical release information, and governance updates. It also provides a forum for discussion of project-level concerns that might need a wider audience than a single special interest group (SIG).
See this page for more information! https://github.com/kubernetes/community/blob/master/events/community-meeting.md
Like what you see here? Continue the conversation on https://discuss.kubernetes.io
A: Hello, welcome to the August community meeting of Kubernetes. Nice that you all made it. Please remember that this is live-streamed and it will be posted publicly on YouTube, so please be mindful of what you say; it's being recorded. And also remember: we have a code of conduct, so please be excellent to each other.
A: I'm Jenny, I'm from Berlin. I've been head of engineering for a while now, and I'm part of the Contributor Experience SIG. I'm really into coaching, leadership, organizing, and that kind of stuff. I just joined recently, and I'm really glad to see how you're all collaborating and working with each other. Please mute when you're not speaking, and we would need a note taker for today.
A: Could I have a volunteer? I want you. Thank you, perfect. Then: today we have, as always, the release updates, and the SIGs today are Scalability, Autoscaling and Scheduling, and for the first time we will also have the Naming working group presenting. So yeah, I think let's just start right away with Jeremy, for the release updates.
B: Hello everyone, my name is Jeremy Rickard. I am currently one of the shadows for the 1.19 release team; I'm serving as a release team lead shadow, and I will be the release team lead for 1.20. So we thought I could start picking up the updates today. We don't have a lot to report; this week and last week we've been on break for KubeCon.
B
Things
are
still
happening
behind
the
scenes,
but
we
haven't
been
meeting.
So
there
haven't
been
a
lot
of
updates
going
forward,
we're
still
working
towards
the
targeted
release
date
of
august
25th.
It's
aspirational.
Anything
could
happen
between
now
and
then
and
obviously
we'll
communicate
that
out
should
it
become
necessary,
but
still
working
towards
that
that
aspirational
date.
One
thing
to
mention
taylor
had
sent
out
an
update.
We
are
currently
holding
kota
off
until
after
119
has
been
released.
There's
been
a
lot
of
things
happening
in
this
release
cycle.
B: There are folks from the CI Signal sub-team doing awesome things to help get some of those tests fixed, in conjunction with SIG Testing, specifically Ben and Aaron. It's just been amazing to see the colors change from lots of red, to yellow, to pretty green now. Rob said the other day that we're swimming in a sea of green, and it's pretty awesome to see.
B: Another update: since we are getting pretty close to the end of the 1.19 release cycle, we are going to start kicking off the 1.20 release cycle. We opened a PR to start assembling the team, and you can follow along with that; I'll add it to the agenda after this. We opened it yesterday, so it's kind of late breaking. And then stay tuned for the shadow application for the 1.20 release.
C
Just
an
additional
note
regarding
communication:
we
realized
that
a
lot
of
josh
burkus
brought
this
up
in
one
of
the
sig
release
meetings.
We
realized
that
a
lot
of
the
the
communications
that
we
were
giving
to
the
community
were
at
this
meeting,
so
you
know
we're
taking
that
as
feedback
and
going
to
provide
more,
maybe
bi-weekly
or
something
sig
release
updates
kind
of
across
all
of
our
sub-projects
to
the
community.
So
stay
tuned
for
that.
B: Yeah, that was going to be the last thing I mentioned: with the release happening, hopefully, on the 25th, the last milestone for 1.19 will be the retro, and that will be on Thursday, August 27th.
A: Nice. So, any other questions?
C: We won't see something as long for code freeze, but we want to make sure that stability is always a part of the release and not just relegated to a specific release. So I don't plan for 1.20 to be a quote-unquote stability release.
E: All right, welcome everyone. Shall I just present the slides, or will anyone else present?
E: All right, can you see the presentation? Yes? Yes, let me just present that a bit better. All right, so welcome everyone. My name is Matt and I'm the secretary of SIG Scalability; a short update on what we did last cycle in the area of scalability and performance. On the scalability and performance testing side we're doing a bunch of things. I think the major one is the continuous scale testing at Golang tip: basically, starting from Golang 1.15, we have this large-scale Kubernetes test running.
E: ...what caused it. With this continuous test, that shouldn't be the case anymore, because we will be able to detect a regression almost immediately after it is committed to the Golang code base. Other than that, a few minor things, like enabling more large-scale network tests, for example for load balancer and internal load balancer, and also things like enabling containerd in our scalability tests.
E
When
it
comes
to
regressions,
it
has
been
actually
a
really
good
cycle.
We
didn't
have
anything
major
inside
kubernetes,
comparing
to
all
cycles.
We
used
to
have
a
few
regressions
every
every
every
cycle.
So
that's
actually
really
good
news.
The
only
two
I
have
listed
here
and
one
is
balance
regression.
So
it's
not
in
current
is
called
the
base.
I
already
mentioned
it
and
go
on
114
and
the
other
one
is
a
kind
of
ongoing
thing.
E: We realized that if a cluster is highly loaded with enough pods or secrets, it may not survive a restart of a master, and this is something we are working on. We have some mitigations, but there is no full solution yet. When it comes to performance improvements, there have been a bunch.
E: Immutable Secrets is one: basically, a new API for Secrets and ConfigMaps which is much more scalable than the regular one. Then things like periodic watch bookmarks: basically, we built on the watch bookmark API that was added a few cycles ago and improved it. And dynamic watch cache sizes: we eliminated the problems we had with static watch cache sizes, one of which was a reliability issue, since it was actually very easy to forget to bump a watch cache size.
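For reference, here is a minimal sketch of the immutable Secrets API mentioned above, using client-go; the kubeconfig lookup, namespace, and names are illustrative assumptions, not part of the talk.

```go
// Sketch: creating an immutable Secret with client-go.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	immutable := true
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-credentials"},
		StringData: map[string]string{"token": "s3cr3t"},
		// Marking the Secret immutable rejects further updates to its data,
		// so kubelets no longer need to watch it for changes; that is where
		// the scalability win described above comes from.
		Immutable: &immutable,
	}
	created, err := client.CoreV1().Secrets("default").Create(
		context.TODO(), secret, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("created immutable secret %s\n", created.Name)
}
```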
E: For example, we had this regression where we forgot to do that for EndpointSlices, and that resulted in a scalability regression; now that's not the case anymore. Also, we've had some memory savings with that change, and some others, like node authorizer improvements or watch cache indexes, to name a few. When it comes to plans for upcoming cycles: as always, we would like to work on hardening and extending the definition of scalability, and there's always work to do in this area.
E: We would like to add more SLIs and SLOs and also improve the existing ones, and the scalability envelope is also something that we haven't updated for a long time, so it's definitely something we should do in the next few cycles. When it comes to scalability and performance tests, the team is working on deflaking our tests.
E: We are aware that our tests are flaky; we already started doing something about that, but we hope to do even more in the next few weeks. Other than that, we would like to invest in a better release pipeline for our performance test tools, for example ClusterLoader2, because what we have right now is not the best thing.
E
It's
actually
really
painful
when
it
comes
to
backwarding
the
changes
to
all
the
releases
and
other
things
are
extending
and
improving
our
tests,
and
so,
for
example,
either
adding
upgrade
tests
or
spinning
averages
out
or
covering
more
vibrancy
concepts.
E
And
last
but
not
least,
so
the
bottleneck,
detection
and
performance
improvements
and
the
plans
in
this
area
are
things
like
efficient
watch
resumption.
That's
the
kept
that
voidtech
opened
a
few
weeks
ago.
Basically,
it's
like
one
of
the
it's
part
of
the
solutions
for
for
not
working
upgrades
in
large
clusters,
and
also
there
are
other
efforts
that
we
are
not
driving
directly,
but
we
are
very
interested
in
them.
An
example
is
priority
and
fairness.
So,
basically
it
will.
It's
very
it'll,
be
a
big
game
changer
for
for
scalability
in
kubernetes.
E
We
would
like
to
experiment
with
that
more
because
it
will
basically
open
a
new
possibilities
for
us
when
it
comes
to
configuring,
cubelet
and,
like
other
other
parts
of
the
system,
and
also
things
like
consistent
risk
from
cache,
would
like
to
dig
more
into
ideas
like
that
and
see
whether
whether
we
should
invest
more
time
and
things
like
that,
all
right
and
as
always,
if
you
would
like
to
contribute,
we
have
these
two
have
wanted
lists
in
perthes
and
bernardi's
repositories.
E
You
can
bring
us
a
stack
for
anything
and
a
short
slide
on
where
to
find
us
links
to
our
home.
Page
slack
channel
are
made
in
this
and
we
have
public
meetings.
Actually
one
is
starting
in
18
minutes
all
right.
Thank
you.
A
Okay,
that
sounds
like
no
cool.
Thank
you,
matt,
the
next
one
on
the
list
is
marcin
and
he
will
give
us
an
update
for.
E
F: ...that's what SIG Autoscaling is all about: we have three main products that we are taking care of, the Cluster Autoscaler and the Horizontal and Vertical Pod Autoscalers. During the last cycle, in Cluster Autoscaler we got support for two new cloud providers: one is Huawei Cloud and the second one is the long-awaited Cluster API cloud provider support.
F: We also improved the scalability of Cluster Autoscaler in a couple of areas. In Vertical Pod Autoscaler, we added an API to specify which resources are controlled by VPA: is it CPU, memory, or both. We added controls for pod updating frequency, and VPA is more aware of its surroundings: if the VPA admission webhook pod is not running, pods will not be updated. We also added support for out-of-memory errors in pods that have multiple containers; previously this was a bit of a problem. And we are now supporting sidecars that are injected during the admission phase.
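As a reference for the resource-control API mentioned above, here is a hedged sketch of a VPA resource policy that limits VPA to managing memory only. The Go types and import path are assumed to match the VPA v1 API in the kubernetes/autoscaler repo, and the container name is illustrative.

```go
// Sketch: a VPA container policy that controls memory but not CPU.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	vpav1 "k8s.io/autoscaler/vertical-pod-autoscaler/pkg/apis/autoscaling.k8s.io/v1"
)

func main() {
	// Only memory recommendations are applied; CPU requests stay untouched.
	controlled := []corev1.ResourceName{corev1.ResourceMemory}
	policy := vpav1.PodResourcePolicy{
		ContainerPolicies: []vpav1.ContainerResourcePolicy{{
			ContainerName:       "app", // hypothetical container name
			ControlledResources: &controlled,
		}},
	}
	fmt.Printf("%+v\n", policy)
}
```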
F: In Horizontal Pod Autoscaler, the main topic is work on adding container-level metrics; currently we have only pod-level metrics.
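Since container-level metrics were still being designed at the time of this meeting, the following is only a hedged sketch of the direction, using the ContainerResource metric source shape that the autoscaling/v2beta2 Go types later adopted; treat the exact field names as assumptions.

```go
// Sketch: an HPA metric that scales on one container's CPU, not the pod's.
package main

import (
	"fmt"

	autoscalingv2beta2 "k8s.io/api/autoscaling/v2beta2"
	corev1 "k8s.io/api/core/v1"
)

func main() {
	target := int32(60)
	metric := autoscalingv2beta2.MetricSpec{
		Type: autoscalingv2beta2.ContainerResourceMetricSourceType,
		ContainerResource: &autoscalingv2beta2.ContainerResourceMetricSource{
			Name:      corev1.ResourceCPU,
			Container: "app", // scale on this container's CPU, ignoring sidecars
			Target: autoscalingv2beta2.MetricTarget{
				Type:               autoscalingv2beta2.UtilizationMetricType,
				AverageUtilization: &target,
			},
		},
	}
	fmt.Printf("%+v\n", metric)
}
```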
F: In the upcoming cycles, we obviously plan to have more cloud providers in Cluster Autoscaler.
F
Cluster
api
support
is
very
high.
A
current
topic
and
a
lot
of
people
are
working
on
it.
We
are
seeing
great
progress,
so
hopefully,
this
support
will
be
really
great
soon
and
we
are
also
working
on
scalability
improvements
in
vertical
pad
autoscaler.
We
want
to
add
more
controls
over
process
recommendations.
H: All right, hi everyone. I'm Wei Huang and I work at IBM right now; I co-chair SIG Scheduling. Today I'm going to walk you through the major updates in the last several releases. First things first: the scheduling framework. We've been actively working on the scheduling framework since, what, 1.15. In 1.18 we reached milestone 2, ensuring that predicates and priority functions run as native plugins, which also implies the legacy hard-coded execution path has been totally eliminated.
H: So those are basically the two parts. However, the default in-tree plugins cannot always satisfy users' requirements, so we initiated a sub-repo called scheduler-plugins, in which we want to accommodate different kinds of requirements, so that different vendors can contribute their plugin designs, implementation proposals, and ideas there, to resolve their particular kinds of requirements.
H: So basically, we call those plugins out-of-tree plugins, but they are also first-class citizens in the scheduling world. It's pretty much like the core Kubernetes API resources versus CRDs; basically the same concept.
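To illustrate the out-of-tree plugin pattern just described, here is a hedged sketch following the approach the scheduler-plugins repo uses: implement a framework interface and compile it into your own scheduler binary. The import paths and signatures are assumed to match the 1.19-era packages, and the plugin itself is hypothetical.

```go
// Sketch: a hypothetical out-of-tree Filter plugin compiled into a
// custom scheduler binary, enabled via component config like any
// in-tree plugin.
package main

import (
	"context"
	"os"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/kubernetes/cmd/kube-scheduler/app"
	framework "k8s.io/kubernetes/pkg/scheduler/framework/v1alpha1"
)

// QuarantineFilter rejects nodes carrying a (hypothetical) quarantine label.
type QuarantineFilter struct{}

func (pl *QuarantineFilter) Name() string { return "QuarantineFilter" }

func (pl *QuarantineFilter) Filter(ctx context.Context, state *framework.CycleState,
	pod *v1.Pod, nodeInfo *framework.NodeInfo) *framework.Status {
	if nodeInfo.Node().Labels["example.com/quarantine"] == "true" {
		return framework.NewStatus(framework.Unschedulable, "node is quarantined")
	}
	return nil // node passes the hard requirement
}

// New is the plugin factory the scheduler calls at startup.
func New(_ runtime.Object, _ framework.FrameworkHandle) (framework.Plugin, error) {
	return &QuarantineFilter{}, nil
}

func main() {
	// Register the plugin and run the scheduler command.
	cmd := app.NewSchedulerCommand(app.WithPlugin("QuarantineFilter", New))
	if err := cmd.Execute(); err != nil {
		os.Exit(1)
	}
}
```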
H: Okay, due to the changes made in the scheduling framework, the corresponding configuration had to be changed as well. So we upgraded the scheduler component config from v1alpha1, which had actually stayed there for a while, for several releases, until we introduced the framework; that config has been refactored a lot. In 1.18 the version went to v1alpha2, and in 1.19 the version bumped to v1beta1. So don't be surprised by some significant configuration changes in these releases.
H: Along with that change, we also enhanced some of the user experience of configuring the scheduler. For example, before, it was very difficult to customize scheduling policies: you had to know the full list of the predicates and priorities so that you could do some tailoring, like replacing one with another. That's not a good user experience, but right now, with the new component config, it's much easier to do that.
H
That,
and,
and
also
some
parameters
has
to
be
deleted
because
they
are
not
necessary
anymore
anymore
and
also
some
fields
has
been
moved
from
one
layer
to
another
hierarchy.
So
that
is
all
I
think,
good
news,
good
news,
so
make
users
more
easy
to
use
the
scheduling
framework,
and
also
one
feature
I
want
to
highlight
here-
is
the
scheduled
profile.
H
So
right
now
the
scheduler
supports
multiple
profiles
configurations.
So
what
does
that
mean?
So
that
is
the
background,
is
that
users
are
very
likely
to
run
variously
various
kinds
of
workloads
in
one
kubernetes
cluster,
for
example,
running
both
the
long
running
services,
as
well
as
the
batch
workloads,
so
in
the
before,
how
can
they
do
that?
H: A profile is a combination of different plugins: you package one set of plugins into one profile and another set of plugins into another profile, but they run in the same binary. And at runtime, let's say you want a pod to be placed more compactly: you can set the pod's spec schedulerName to correspond to that kind of profile name, et cetera. So this gives you more flexibility to support multiple kinds of workloads.
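To make the profiles idea concrete, here is a hedged sketch using the v1beta1 component-config Go types mentioned above; the import path and field shapes are assumed, and the profile names are illustrative.

```go
// Sketch: two scheduler profiles running in the same binary.
package main

import (
	"fmt"

	config "k8s.io/kube-scheduler/config/v1beta1"
)

func main() {
	defaultProfile := "default-scheduler"
	binPacking := "bin-packing"
	cfg := config.KubeSchedulerConfiguration{
		Profiles: []config.KubeSchedulerProfile{
			// Profile 1: default plugins, untouched.
			{SchedulerName: &defaultProfile},
			// Profile 2: a batch-oriented profile would enable or disable
			// Score plugins here to pack pods more tightly.
			{
				SchedulerName: &binPacking,
				Plugins:       &config.Plugins{},
			},
		},
	}
	fmt.Printf("%d profiles in one binary\n", len(cfg.Profiles))
	// A pod opts into the second profile by setting
	// spec.schedulerName: bin-packing in its pod spec.
}
```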
H: So that is one feature I want to highlight. And as always, we continually improve the performance; because of the big refactoring of the code base, that gets much easier, and the code base is much cleaner right now. We have also improved the performance of pod affinity, because that was recognized as a slow feature, but right now it's much, much faster. And we also looked into the code to try to skip unnecessary scheduling attempts and to reduce the number of calls to the API server.
H: In terms of feature graduations, we graduated topology spread to GA in the 1.19 release, and we GA'd taint-based eviction in the 1.18 release. And at the ongoing KubeCon there is a scheduling intro and deep dive session, so check it out if you are interested.
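As a concrete reference for the topology spread feature that just went GA, a minimal constraint might look like the following sketch; the labels and values are illustrative.

```go
// Sketch: spread "app=web" pods evenly across zones.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "web-1",
			Labels: map[string]string{"app": "web"},
		},
		Spec: corev1.PodSpec{
			TopologySpreadConstraints: []corev1.TopologySpreadConstraint{{
				MaxSkew:     1, // zones may differ by at most one matching pod
				TopologyKey: "topology.kubernetes.io/zone",
				// DoNotSchedule makes this a hard (Filter) requirement
				// rather than a scoring preference.
				WhenUnsatisfiable: corev1.DoNotSchedule,
				LabelSelector: &metav1.LabelSelector{
					MatchLabels: map[string]string{"app": "web"},
				},
			}},
		},
	}
	fmt.Printf("%+v\n", pod.Spec.TopologySpreadConstraints[0])
}
```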
H: So, what about the next releases? Maybe some refactoring, like giving users more fine-grained control over choosing whether to enqueue, and how to enqueue, their pods in some particular cases. And also, I think there's one last piece left in our code base, the assume logic, which hasn't been migrated to the plugin mechanism yet.
H
Yet
so
we'll
do
that
later
and
also
we
want
to
promote
the
scheduling,
plugins,
also
kind
of
blocking
marketplace,
to
engage
more
companies
to
contribute
so
right
now
we
have
several
companies
actively
contribute
to
that
on
some
common
requirements
like
gas
scaling
like
elastic
quarter,
et
cetera.
So,
but
we
want
to
engage
more
companies
and
in
terms
of
in
terms
of
scheduling
config,
we
will
definitely
graduate
that
so
that
we
can
duplicate
the
legacy
policy
api
and
also
because
more
and
more
more
external
users
are
using
scandinavian
framework.
H: So we should refactor our code base to make it less dependent on Kubernetes itself, maybe move some logic to the staging level so that it can easily be used by external users. We also worked with SIG Scalability to improve our benchmark testing, as well as our internal benchmark testing framework. Okay, and in terms of performance, another focus of the SIG is shifting from the regular path to the preemption path.
H: That is one area we didn't look into much, so it will be one focus of the next couple of releases. Okay. And then Wei Huang, which is me, has been nominated as a new co-chair, as a former co-chair has stepped down; thanks to him for his efforts in the past couple of years. And here are the chairs list and the agenda links that you can find, and feel free to contact us in Slack. All right, that's pretty much it for today.
Hey
wait
just
a
quick
question.
This
gotham,
I
just
want
to
know
how
the
pre-score
actually
differs
from
post
filter.
H: Filter ensures that the hard requirements of your pod have been satisfied, so that is a yes-or-no question, right? And Score tries to resolve the soft requirements, so that is best-effort, to solve the nice-to-have problems. Now, PostFilter happens after Filter, and it can only happen when there's no single node that can satisfy your hard requirements. So what can we do? A usual solution is to try preemption.
H: So that is one implementation of PostFilter, right? But the new PostFilter extension point doesn't limit the individual implementation to preemption. You can try other solutions: you can spin up a new machine, like using Cluster Autoscaler to provision a new machine to make this pod schedulable, right? And also you can, say, use VPA to reduce the requests of the pod to make it schedulable. So it doesn't necessarily have to be achieved by preemption.
H: And for PreScore: you have to consider requirements across nodes, not on a single node, for example pod affinity, pod anti-affinity, and topology spread. With these advanced scheduling features, you are not looking at a single node to decide whether it's a fit or not; you have to look across the nodes. With that said, to make it efficient you have to build a pre-computed cache in the PreScore phase so that the cache can be used in Score. So that is the difference between PreScore and PostFilter.
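To make Wei's distinction concrete, here is a hedged sketch of both extension points as a plugin would implement them, using the 1.19-era framework interfaces; the import path and signatures are assumed, and the plugin itself is hypothetical.

```go
// Sketch: PostFilter runs only when every node failed Filter; PreScore
// builds per-cycle state that Score can then read cheaply per node.
package main

import (
	"context"

	v1 "k8s.io/api/core/v1"
	framework "k8s.io/kubernetes/pkg/scheduler/framework/v1alpha1"
)

type DemoPlugin struct{}

func (pl *DemoPlugin) Name() string { return "DemoPlugin" }

// PostFilter: preemption is one possible remedy here; triggering the
// cluster autoscaler or shrinking requests via VPA are others.
func (pl *DemoPlugin) PostFilter(ctx context.Context, state *framework.CycleState,
	pod *v1.Pod, m framework.NodeToStatusMap) (*framework.PostFilterResult, *framework.Status) {
	// A real implementation would pick victims and nominate a node.
	return nil, framework.NewStatus(framework.Unschedulable, "no remedy found")
}

// PreScore: runs once per scheduling cycle, so cross-node data
// (affinity terms, topology spread state) is computed once and cached.
func (pl *DemoPlugin) PreScore(ctx context.Context, state *framework.CycleState,
	pod *v1.Pod, nodes []*v1.Node) *framework.Status {
	state.Write("demo/precomputed", &precomputed{})
	return nil
}

// precomputed is the hypothetical cached data; a Score implementation
// would read it back with state.Read("demo/precomputed").
type precomputed struct{}

func (p *precomputed) Clone() framework.StateData { return p }
```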
A: Good. Thanks, Wei, for the update. The next one is Celeste, and she will give us an update about the Naming working group.
G: Hello everyone, in case you don't know me, I'm Celeste. I'm a senior technical writer with the CNCF and I'm one of the leads of the Naming working group. We had our first meeting this last quarter, and this is really just an update on that, because I know a lot of people in the community are quite interested. What we did last quarter was really kick off the project in full.
G: At the moment, we have a strong focus on language in the Kubernetes project that has racist or offensive connotations; those are the things we are trying to eliminate. But we anticipate that the working group will shift in the future to making recommendations around unclear language in the project in general and how we can improve that.
G: So that's one of the goals. The other goal, and I think I'm covering this later so we're going to move it aside, is to utilize our existing tooling, and potentially develop a bit of new tooling, to help projects implement naming changes in the most efficient way possible and monitor their code bases in that way. So that's what we're doing. Come on. So, what are we doing in upcoming cycles?
G: The working group naming mailing list is kind of where all the action is happening, because we only meet monthly. At the moment we have discussions around a few terms, which are listed here, and we're hoping to resolve those and get to a place where we can start making recommendations. As a working group, we do not own code; we can only make recommendations for plans of action, and in this case for specific terms that we think need to change in the project.
G: It's very likely that those recommendations are going to end up as a set of architectural decision records of some sort, but one of my to-dos for today is to put out a proposal to the group as to what that looks like.
So that's the main thing that's happening: there's a lot of discussion on the mailing list and, as I mentioned before, we're starting to think about how to develop process and tooling, and how to make what we're doing with the working group sustainable long-term for the project, without having to spin up a working group.
G: That's effectively the second goal. How do these plans affect you in, like, the near six-month future? Probably not much, and this kind of goes back to 1.20 potentially being a quote-unquote stability release. I wouldn't expect any major change in any of your lives right away, but eventually there will be change, and there will be work done towards these goals.
G: Again, as a working group we don't own code; we make recommendations, and I believe the steering committee is the one who ratifies those recommendations, so it needs to go through that process first. In the future, though, we will be looking for projects and subprojects in the Kubernetes organization that want to get a jump on this, to help us iron out the process and what actually needs to happen.
G: For example, if we need to change the word "master", that has API changes that could be breaking for customers. With any luck, the bulk of the recommendations and language evaluation work will be done in the next few months. I skipped over the "how you can help" slide, because how you can help is really about where to find us, since we are still in the active discussion phase. So these are the chairs; the homepage is in k/community, and I apologize for not changing that.
G: The real bulk of where you can find us and how you can help: please come to the Slack channel, raise any issues you see in your day-to-day work which you think should be flagged for the naming working group, and join us on the mailing list to help out with the discussion. Thank you all very much for your time.
C: One additional note: speaking of the existing mechanisms that we have today, we know that we can search issues and PRs using labels and things like that. We do have a working group naming label for PRs, and we have a project board where we're aggregating the issues and PRs that are labeled as working group naming. So if you want to get our attention, that's one way to do it; we'll do a frequent triage of that board.
C: Also, we have GitHub teams you can reach out to. If you're interested in joining as a quote-unquote member of working group naming, you can add yourself to the working group naming GitHub team. There's also a working group naming leads GitHub team that is specifically for the leads and chairs.
C
If
you
want
to
reach
out
to
us
that
way,
so
yeah
keep
using
the
usual
communication
mechanisms,
github
slack
mailing
list.
I
think
the
mailing
list
is
primarily
we're
trying
to
take
sig
architecture's
lead
where,
because
we
have
monthly
meetings,
we
want
to
make
sure
that
the
discussion
that
happens
during
the
meeting
is
essentially
like
closing
the
books
on
a
discussion
that
happened
on
the
mailing
list
right.
A
Okay
thanks
celeste
next
point:
we
have
a
couple
of
announcements,
so
the
kubecon
and
the
cloud
native
corner
is
wrapping
up
thanks
everyone
for
participating.
A
Also,
if
you
want
to
host
this
meeting
ping
custodio
and
stack
if
you
would
like
to
volunteer,
there's
the
free
slot
for
september
and
then
the
next
one
for
november,
I
guess,
and
then
we
still
have
three
spots
and
it's
really
easy
to
do.
Documentation
is
great,
so
be
brave
and
just
volunteer
for
that.
Please
also
follow
us
on
twitter
kubernatescontributors
and,
as
always,
we
also
added
to
the
agenda.
C: Just an additional plus-one to all the positive work that has been happening across CI Signal and the CI improvements with SIG Release and SIG Testing. A lot of people have stepped up to do that work, and we've seen really, really positive improvement in the pre-submits and the overall release-blocking and informing tests. So thank you again to everyone who's been working on that.
A: Oh great. Then, if there's nothing else, I will give you 20 minutes or so to update the agenda, and then I will send it out. Thanks to all the presenters and the people who asked questions and responded to them, and thanks everybody for watching. Wish you a great day. Thanks, Jenny. Thank you. Bye.