From YouTube: Kubernetes Community Meeting 20200518
Description
The Kubernetes community meeting is intended to provide a holistic overview of community activities, critical release information, and governance updates. It also provides a forum for discussion of project-level concerns that might need a wider audience than a single special interest group (SIG).
See this page for more information! https://github.com/kubernetes/community/blob/master/events/community-meeting.md
Like what you see here? Continue the conversation on https://discuss.kubernetes.io
A
Hello, hello! This is the Kubernetes community meeting, May 21st edition, 2020. This is a community meeting that is live streamed. It will be publicly posted on YouTube, so be mindful that what you say is being recorded. I'll remind you that we have a code of conduct, so be excellent to each other. I'm Marko Mudrinić and I will be your host today. I work for Loodse, and you may know me from my contributions to SIG Release and SIG Cluster Lifecycle. I am a Release Manager Associate, and I also maintain the Cluster API provider for DigitalOcean; it's one of the Cluster API provider projects. I am really happy to be here today. So, let's get started. I would like to remind you that you should mute your mic when you're not speaking, and [unclear]. Do we have a note-taker? It seems like we don't, so if you want to take notes, please leave your name in the document.
A
C
Absolutely. So I don't have any slides to share, but I would kind of love to walk you through everything going on with 1.19. As of Tuesday, which I believe is two days ago (it's hard to keep track these days), we had the 1.19.0-beta.0 release come out, which was fantastic. We also had the enhancements freeze as of end of day Pacific time on Tuesday. So far we've gotten a few exceptions and are working through those right now. I'm going to be dropping the link into the meeting agenda, but I did want to bring up the 1.19 retrospective as well: as we work through the 1.19 release, we're always happy to get feedback, good or bad, things we can improve, and we'll be dropping that there. It's always better to put things in as they come up rather than try to remember what had happened at the end of the release; that's just a general note.
C
A
A
E
Thanks, Marko. I'm setting my stopwatch so I stick to real-world time instead of Aaron time for this update. I'm going to share my screen, but I'm going to be lazy and not go fullscreen, so I may spoil some of my surprises here. Hi everybody, I'm Aaron, Aaron Crickenberger, and I'm one of the chairs of SIG Testing. Today's cat t-shirt is my favorite cat t-shirt of all time: it's the 1.14 Kubernetes one.
E
The first big bit of news I have about SIG Testing is that Erick Fejta, who has been a SIG Testing chair for a number of years, has decided to step down and has nominated Ben Elder in his place. I'm super excited to see the spirit of rotation; it's the sign of a healthy community, and I think Ben has done a lot of wonderful things for this community, so I can't wait to work further with him.
E
Just a brief tour of some of the things we've done since our last update in October 2019. This first bit isn't really under a specific sub-project; it's more like general test health stuff. We decided collectively as a group, back in December, to stop automatically retrying e2e tests if there was a flake in there. We did this because it was hiding some real-world bugs, which we soon discovered once we turned this off; for example, we found a race condition in runc, one of our upstream dependencies, and Kubernetes has become more robust for it.
E
One of the things we did to help us with this was to update our triage dashboard to support inclusion and exclusion by regex. So, a quick demo here: go.k8s.io/triage is like my favorite thing ever for finding flakes and failures. It shows you a graph of all of the test failures and job failures for a given time.
E
We can see that the top flake that has occurred most recently has the error text "timed out waiting for the condition", my favorite error text of all time. I'd like to find tests that don't have that, because that one is really difficult for me to troubleshoot and diagnose. So if I exclude tests that have that as their failure text, I can see that there's what looks to be a persistent volume failure happening in some of the CSI-related jobs, so that might be something I could go do to be helpful.
E
We have continued to make progress on improving the e2e coverage of kind. It now covers about 85% of what our regular e2e tests do, the ones that spin up an entire Kubernetes cluster in the cloud. We've also been using kind to make sure that when an enhancement graduates to stable or generally available, we keep it honest and make sure it's only using stable and generally available APIs, which has caught a couple of things already.
E
E
And then we're planning on a new kubetest that's a little bit more modular, less monolithic, and will be a little bit more pluggable for alternative cluster provisioning mechanisms. SIG Testing has a number of sub-projects. One of them is Boskos. Boskos is responsible for managing pools of resources; we generally use it to manage GCP projects, so we can create a cluster in there and then blow it away at the end. It's migrating out of the test-infra repo and moving into its own repo.
E
Since October 2019, kind has had three releases. As of kind 0.6, we started using kind to block merges and releases of the Kubernetes project. Kind 0.7 added support for persistent volumes, thanks to Rancher's local path provisioner. And one of the most asked-for features, in kind 0.8, is the fact that clusters now survive reboots.
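If you have not tried kind yet, the basic workflow is only a couple of commands; here is a minimal sketch (the cluster name is just an illustrative placeholder):

    # Create a local Kubernetes cluster whose nodes run as Docker containers.
    kind create cluster --name demo

    # kind adds a kubeconfig context named "kind-<cluster-name>".
    kubectl cluster-info --context kind-demo

    # Tear everything down when you are done.
    kind delete cluster --name demo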
Prow is the thing that does all of the CI for the Kubernetes project. We've improved our security stance with Prow by using workload identity
E
instead of having a bunch of secret files passed around everywhere. We've also updated Prow's reporting capability to dump a number of artifacts into GCS related to the execution of the job itself. I don't have a great demo showing how it can give hints about why your job may have failed, but now, if you scroll down to the bottom of the page, you'll see this job pod info section, where I can see more information about the actual pod that ran the job, including events about when it got scheduled.
E
If I really love YAML that much, I can look at the entire YAML-based manifest for the pod to help me with troubleshooting. We have also started to move toward support for remote Spyglass lenses. A Spyglass lens is a piece of code that describes how to display a given artifact. In this page that I'm loading up, for example, there's a JUnit lens, there's a coverage lens, and there's a build log lens.
E
E
We've got better support for build clusters. The idea with Prow is that you sort of have the Prow control plane, and then you have the build cluster where you actually run everything. We're now using that in the community, so that we can run Prow jobs not just in the google.com-owned build cluster, but also in the community-owned Kubernetes one.
E
Testing Commons is the sub-project that's all about best practices for writing tests and making them better. They're currently working to try and migrate the e2e test framework into the staging repo, so it could be consumable outside of the main Kubernetes project and you wouldn't have to vendor the entire Kubernetes repo, which is a bad and painful idea. We've also started leaning into using import-boss to make sure that dependencies aren't used where they're not supposed to be, especially for tests, to aid
E
in this decoupling. There's also been tons and tons of refactoring work to just try and clean up in general; it's a great place to get started, since it's relatively mechanical work, if you want to dip your toes into the community. One of the working groups SIG Testing is involved in is the K8s Infra working group. This is where we're taking all of the project infrastructure and making sure it runs in a place that's community owned and managed. So as of today, we now run all of the DNS,
E
most of the artifact hosting, and the automated groups reconciliation over there. The publishing bot, which is responsible for pushing the staging repos out to their own repos, is there. Some of the latest additions are the performance dashboard and the Slack infrastructure that helps make it possible to moderate our ridiculously large Slack instance. And, as I alluded to earlier, coming soon we're going to be moving all of the images used in Kubernetes over to k8s.gcr.io.
E
D
E
KEPs that we're related to; I'm almost there. Some of the KEPs that are on our plate: [unclear]; we're helping out with the yearly support period for Kubernetes; and, like I said, moving the e2e framework to staging. How you can help: I just want to give a shout-out to SIG Node for putting a group of people together to actively evaluate their tests. What are they, how do they work, do they need them?
E
B
Sweet. Before I start, say hello to Midna; she's my co-presenter. All right, SIG UI. It's been a while since we've presented. The big thing that we have done recently is that we released 2.0 of the dashboard. We have talked about what 2.0 was in a couple of our previous updates, because it's been kind of a Herculean undertaking. We essentially scrapped the entirety of our front end and redid it from scratch, because we were several versions of Angular out of date, and now we are not, so hurray.
B
B
What else have we done? We have had a lot of translation work that's occurred in the last few cycles, both general improvements as well as completely new translations. On top of that, all of the translation work we have wound up delegating to specific and trusted people that have come to help out, and much, much love to them. In terms of community, we actually promoted a new sub-project owner, Shu Muto. He has been helping us out for over a year, and his help has been invaluable.
B
B
Our plans for the upcoming cycles: 2.0 was kind of a big thing, so we really have some time now, a lot of it just for settling and finding new bugs and fixing them. One of the big things that we've been wanting to do, and we should be able to start now that 2.0 is out, is having better support natively within the dashboard. A bit of history:
B
we now want to go back and redo a lot of the backend work, because we can make a lot of headway in both performance and reaching feature parity with other projects. And the last thing is more integrations. One of the ideas that we've been kicking around for at least a year is: wouldn't it be nice if, through the dashboard, you could actually list all of your different Helm charts and maybe be able to install Helm charts, things like that; just integrations with other front-end projects in the community.
B
So that's what we're going to start looking into next. If you have any weird cross-section of JavaScript and Go experience and knowledge, we would love to hear from you. This slide is out of date, because the Angular migration is done, but we also just want to hear from anyone and everyone that wants to use Kubernetes from a UI perspective.
B
A
B
A
F
Perfect, thank you. So, hello, very happy to be here, and thank you for giving us the time. What did we do in the last cycle? I'm calling it the last cycle since our last presentation, which was done by one of my co-chairs, David, in October 2019. I think the most important things to highlight are the ones that I have listed in this slide. Number one, Priority and Fairness. This is a really important feature; it's a step forward to guarantee the stability of the API server.
F
As probably all of you know, everything in Kubernetes talks through the API server, so everybody is trying to get the API server's attention, and one of the situations that has arisen in the past is, for example, a controller goes crazy and hijacks all the bandwidth, so in practice the API server cannot perform basic tasks like health checks or garbage collection. So this is a feature that is really important for stability and for making the API server more reliable in the future.
F
It's alpha in 1.18, but don't be fooled by the alpha qualification; I think it's in a really stable state. We still need to do some scalability testing and some more testing in general, but the feature is looking really solid, and this is really a community-driven effort: many people from different companies, Mike Spreitzer, Daniel Smith (lavalamp), David Eads, and many, many others, so I'm really happy about this one.
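For readers who have not looked at it yet, API Priority and Fairness is configured through two new objects, PriorityLevelConfigurations and FlowSchemas. The sketch below is my own illustration against the 1.18 alpha API; the names, numbers, and the sandbox service account are made up, and it assumes an API server started with the APIPriorityAndFairness feature gate (and the flowcontrol.apiserver.k8s.io/v1alpha1 API group) enabled.

    kubectl apply -f - <<'EOF'
    # A priority level that caps how much of the API server's concurrency
    # matching traffic may consume, queuing the excess instead of letting
    # it starve everything else.
    apiVersion: flowcontrol.apiserver.k8s.io/v1alpha1
    kind: PriorityLevelConfiguration
    metadata:
      name: sandbox-controllers
    spec:
      type: Limited
      limited:
        assuredConcurrencyShares: 10
        limitResponse:
          type: Queue
          queuing:
            queues: 16
            handSize: 4
            queueLengthLimit: 50
    ---
    # A flow schema that routes requests from one noisy controller's
    # service account into the priority level defined above.
    apiVersion: flowcontrol.apiserver.k8s.io/v1alpha1
    kind: FlowSchema
    metadata:
      name: sandbox-controllers
    spec:
      priorityLevelConfiguration:
        name: sandbox-controllers
      matchingPrecedence: 1000
      distinguisherMethod:
        type: ByUser
      rules:
      - subjects:
        - kind: ServiceAccount
          serviceAccount:
            name: noisy-controller
            namespace: sandbox
        resourceRules:
        - verbs: ["*"]
          apiGroups: ["*"]
          resources: ["*"]
          namespaces: ["*"]
    EOF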
F
F
There were some performance questions, and we were able to overcome all of them. We are going to talk about the next steps in the next slide, but server-side apply in 1.18 is now running through all the requests. kubectl dry-run and kubectl diff, which are popular tools that we've been developing in the past, are GA in 1.18. Related to that, the working group Apply, the group that historically has been dealing with server-side apply, has now expanded, and we now call it Working Group API Expression.
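As a quick refresher (my own example, with a placeholder manifest file), those client-side entry points look like this:

    # Server-side apply: the API server performs the merge and records
    # field ownership, instead of the client computing a patch.
    kubectl apply --server-side -f deployment.yaml

    # Server-side dry run: the request goes through admission and
    # validation on the server but is never persisted.
    kubectl apply -f deployment.yaml --dry-run=server

    # Diff the live object against what applying the file would produce.
    kubectl diff -f deployment.yaml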
F
F
One important note there is that if you were part of the Working Group Apply meetings, or you went to those bi-weekly meetings from time to time and you want to still be involved, there is a new mailing list to subscribe to so you get the new invites. I sent an email a couple of days ago, but you can reach out to me if you don't know what to do. And then the other thing that we have promoted is the KEP that David has been working on, which,
F
in a nutshell, doesn't allow features or APIs to stay in beta forever, in this, you know, beta state; either they get demoted back or they are promoted to GA. So I think this is a great step forward for the Kubernetes ecosystem and the KEPs related to it. Three more things that are not strictly API Machinery, but very close to us: you have probably heard of the SSH tunnels that are part of our code.
F
They have been deprecated for a long time, and we are working on this new sub-project that is called network proxy in order to replace them. The network proxy is beta in 1.18, and we are doing large scalability testing to make sure that it's a reliable mechanism, so we can finally get rid of the SSH tunnels.
F
We've been working very closely with SIG Instrumentation to promote the stability of the metrics that we emit as part of our telemetry. Basically, the KEP that I have linked in the slides says, in a very short summary, that metrics should be treated with almost the same importance as an API, so you cannot just change them, or deprecate them and create new ones, in ways that break all of our systems, all of our components. I think it's a very important step forward.
F
Finally, you may have noticed that Kubernetes at head, in API Machinery, is now running etcd 3.4.7. That brings a lot of stability improvements. Many of the people that work in API Machinery are familiar with etcd, and they also collaborate in the open-source etcd project. etcd 3.4 is especially important because of four things, which I'm going to go through very quickly. Now we have concurrent reads, so when you are not writing transactions, the reads can be concurrent.
F
There is a new feature that is called non-voting members, so you can add members to the cluster that don't have voting power but allow the cluster to scale horizontally. There are improvements in the Raft consensus, and there is better load-balancing logic in the etcd client. So that is, in a nutshell, what we have done. What are we planning to work on in the next cycles? For server-side apply,
F
the next step is to make it the default on the client side, so we are working on that. Probably not in 1.19; there are a lot of things that need to happen, so we are looking more at 1.20, and we will see if that happens. For Priority and Fairness, we had a large discussion to define the beta criteria, so when can we call this feature beta? That has arrived at a consensus point, so now we know what needs to happen; an important step forward.
F
There are a number of KEPs that we are also trying to move forward. Resource version semantics made consistent: basically, when you list resources, there are a bunch of inconsistencies that you can experience, so this is a way of getting rid of that. David and many other people have been working on the next one, which is standardizing a particular field:
F
F
most of our APIs have a way of expressing this field, but it's not consistent across the board, so we are trying to make that unified. A number of the things that follow are not new, but they take some time: the definition of immutable fields, being able to declare immutable fields in your APIs and in your CRDs.
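As a loose illustration of that direction (my own example, not necessarily the exact KEP being referenced), 1.18 already added an alpha immutable marker to ConfigMaps and Secrets, behind the ImmutableEphemeralVolumes feature gate:

    # Once created with immutable: true, the API server rejects further
    # changes to the data; to change it you delete and recreate the
    # ConfigMap (the same field exists on Secrets).
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: app-config
    data:
      LOG_LEVEL: info
    immutable: true
    EOF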
F
We promoted CRDs to GA in 1.16, but there is a lot of backlog work post-GA that we are exploring and trying to also make progress on. Binary transport is one of the most important ones; that requires some work and attention, but it is in our backlog. And what else? The last three are related to things that I mentioned in the previous slide: we are making progress on the deprecation of the SSH tunnels, so we're doing large scalability testing on the network proxy.
F
The warning mechanism for deprecated APIs: I recommend watching the recording of the SIG API Machinery meeting where Jordan presented it. It is really important to be able to detect when you are using APIs that are deprecated, to avoid surprises when you face an upgrade in the future and something disappears or changes.
A
F
F
We do bug triage, issue and PR triage, so if there is something that you are submitting, or you are interested in participating, just join our open meetings. There are two working groups, API Expression, which I explained, and the one building an SDK, and they meet every two weeks. Who are we? David Eads, Daniel Smith, and me. And some useful links for you: our home page and the recordings of our meetings. That's it.
G
A quick membership update: we have had one chair move to emeritus, so thank her for her work with the SIG; we still have three active chairs. The main thing that we've done in the last number of months is that we finally polished and released our initial user survey. We have a medium-size survey that we have circulating right now, thanks to the CNCF for their help with that. Once this survey is finally closed (I don't recall the date for that), we're going to be going through all the responses.
G
So if you are able to circulate the survey internally, or especially if you have customers or users running Kubernetes, that would be quite helpful, to get an idea of the kinds of experiences people are having and some of their priorities and struggling points. In particular, I really want to thank Gabby Morrow from the SIG for driving this; she's been instrumental in making sure that the survey actually happened.
G
On the note of the survey, once we actually have some data put together and once we've gone over the responses, we will probably be reaching out to a number of SIGs if we identify key problem areas, or places to share feedback as well. We'd like to understand, given that SIG Usability's audience comes from very different backgrounds, more of the technical specifics that people bring up. Also, with this survey, we're hoping to codify some of our practices around things like data collection and good question design.
G
We also did a little bit of exploratory work on trying to introduce safer defaults to the project; that has stalled due to lack of bandwidth and some complexities in the affected areas. In particular, given that we are now a 1.x project, there is a major uphill battle to change any kind of default expected behavior. So if anyone is interested in helping to further this work, especially coming from stakeholder SIGs, that would be vastly appreciated.
G
G
Once we have a definition for that, we're going to be talking with SIG ContribEx and then hopefully trying to go out and actually make sure those labels get applied to new issues. And lastly, on how you can contribute: we are exceedingly bandwidth constrained. There are only a small number of people working with SIG Usability, and for most of them it's a very distant concern, either because they're volunteering completely on their own or because it's a tertiary or lower job responsibility. So this is a potential opportunity for big impact.
G
If you or your organization has the bandwidth to work more on usability, design, and working through some of the specific technical problems, we would love to have you. I didn't actually put our meeting time here, but it's every third Tuesday at 9 a.m., and we also have all of that on the SIG page in GitHub. Right, thank you. That's all we have.
A
B
B
It made a lot of sense, because then the staff can turn over and start working on Boston, and it lets us focus and make sure that our next contributor summit is just a much better experience, rather than trying to plan a contributor summit that then turned into a virtual summit, and then suddenly having a contributor summit that is less than four months away from the next one. So it just made a whole lot of sense to cancel that and then start looking at Boston. I linked the email that was sent out.
A
Okay, thank you for that update. Next, I would like to announce who next month's host will be, and that's going to be Lauri Apple. I would just like to say that if you want to be a host, we are looking for contributors who are going to take over, and you can ping us in the SIG ContribEx Slack channel. And then, do we have any other announcements, or anything else that is worth mentioning? Oh.
B
I can talk on that a little bit. That was my bad for not also mentioning that the goal is for the new contributor workshop to happen sometime during the summer, and we're going to start doing the new contributor workshops kind of in between KubeCons, with the hope that doing them online means we have more reach, and also it means that the current contributors that were normally diverted to working on the new contributor workshop during summits can actually partake in the summit.
B
A
Okay, thank you once again for the update. The last call for announcements: do we have something else to mention? Okay, so the next thing on the agenda is shoutouts. We have a section around community recognition, which is called shoutouts, and it is used if you want to mention something that has been done by someone in the past month.
A
We have a whole page of mentions about the awesome fixes that folks were doing. I'm not going to go through each entry right now, but you have the list in the meeting notes. It will also be posted on the kubernetes-dev mailing list and in the discussion thread on discuss.kubernetes.io. And as far as I can see, we are coming up to the end of this agenda and to the end of this meeting.