From YouTube: State of crates.io — Sean Griffin
Description
Crates.io is a critical piece of Rust's infrastructure. If it goes down, there's a good chance that your builds will stop working. In this talk we'll learn about what's happened in the last year to improve the resiliency and performance of the site. We'll look at what the plans are for the future, and you'll learn how you can get involved.
Love this picture so much. So, the crates.io team is very new, so first I want to talk about what the team is and what our responsibilities are. We have two main responsibilities. The first is that we manage the day-to-day operations of crates.io: we respond to incidents, we keep everything running. The second responsibility we have is to set the development priorities for the project, and we're usually doing this based on the challenges that we're currently facing on the operations side. The team started this year, in April.
We started with just three people, but we've grown quite a bit since then; we now have nine. Everyone on the team is here for different reasons. Some of us work on operations. Some of us work on code. Some people are here just to increase the communication between the crates.io team and other teams. One thing we all agree on, though, is how important crates.io is to the Rust ecosystem.
Now you might be wondering: why does crates.io need an official team? Well, there are a couple of reasons for that. In the past, it's been unclear what the role of crates.io is in relation to the Rust project. The RFCs repo for Rust lays out what needs an RFC, but how much of that applies to crates.io? Does a new feature for crates.io require an RFC the way it does in the compiler? We didn't have separate policies. And if somebody opened an RFC about crates.io, which team would actually make the final decision on it? Establishing an official team that's only responsible for crates.io provides clear answers to these questions, or at least an avenue to get them. It's also an effort to recruit more people to work on the project.
The team was started this year when it became clear that we needed more people working on it. A bug had made it into production, and nobody with the ability to deploy the site had the bandwidth to deal with it. So Ashley Williams and I formed the team in order to deal with that bug and some of the other operational problems that we had at the time.
We're going to talk about some of those problems, but first I want to talk about some of the results we've had, because it's been a really exciting year for crates.io. The first one is that the site is faster. Did anybody try using crates.io around March or early April? It was really bad, so we've spent a lot of time and effort improving that throughout the year. These are some of our stats; I just took this screenshot today.
So our 95th percentile response times are consistently staying under 100 milliseconds, and our 99th percentile times are consistently staying under 150. That means that 99% of requests coming into crates.io get their response in less than 150 milliseconds, and this is huge for us, because at the start of the year both of these numbers were frequently multiple seconds.
One of the other big things that we've done, and probably the most important thing we've done, is that we have an on-call rotation now. If the site goes down at 3:00 in the morning, somebody's actually getting woken up to deal with it. We set up some basic monitoring. We're not monitoring everything that we want to yet, but we cover some of the big ones, like whether the site is returning an error for everybody, and our response times.
Now, of course, there's sort of a golden rule in operations: when you start monitoring something that didn't previously have monitoring, you're inevitably going to learn it's broken. So when we first set up PagerDuty, I was the person who was on call for that rotation, so I got woken up in the middle of the night. A lot. April 13th is when we first introduced monitoring to crates.io, and every single night for three nights I got woken up after midnight.
We were randomly, sometimes, taking more than 30 seconds to respond on one endpoint, which normally responds in under 8 milliseconds. Now, we're hosted on Heroku, which means we have a hard upper limit on our response time: if we don't send a response to the user, or at least start sending a response to the user, within 30 seconds, the request is killed. We have no control over that, we cannot increase it any further, and we really don't want to. The endpoint is really simple: we grab a database connection and run a single query.
So the thing that was timing out had to be either grabbing the database connection or the actual query that we were running. The problem was that both of these actions had a timeout of greater than 30 seconds, so we really didn't have a way to figure out which of them it was: if either one failed, it would fail after the request had already been killed.
At the time, the only way to change those timeouts was to change the code and deploy it. We're trying to improve our deploy times, but at best a release build takes about five minutes, and at worst it can take upwards of thirty minutes, which, if the site's down, is not really the feedback cycle you want to have. So one of the things that we've been doing to improve our ability to respond to incidents like this is making more things configurable by environment variables. Anything we might want to change on the fly, we configure with an environment variable rather than a magic number in the code.
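As a rough illustration of that pattern (this is a sketch, not the actual crates.io code, and the variable names here are made up), a timeout might be read like this:

```rust
use std::env;
use std::time::Duration;

/// Read a duration in seconds from an environment variable, falling back to
/// the old hard-coded value when the variable is unset or unparseable.
fn env_duration_secs(var: &str, default_secs: u64) -> Duration {
    let secs = env::var(var)
        .ok()
        .and_then(|value| value.parse().ok())
        .unwrap_or(default_secs);
    Duration::from_secs(secs)
}

fn main() {
    // Hypothetical variable names: on Heroku these can be changed with
    // `heroku config:set DB_CONNECTION_TIMEOUT=10` and picked up on restart,
    // instead of waiting out a five-to-thirty-minute release build.
    let connection_timeout = env_duration_secs("DB_CONNECTION_TIMEOUT", 30);
    let statement_timeout = env_duration_secs("DB_STATEMENT_TIMEOUT", 30);
    println!("connect: {:?}, statement: {:?}", connection_timeout, statement_timeout);
}
```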
So once we did this, we set the timeouts for both to 10 seconds, and we established that, yep, the database connection is coming in fine, so the problem is that the query itself is timing out. So we needed to figure out why. Why is this query, which we expect to finish almost immediately, sometimes taking upwards of 30 seconds? We're hosted on Heroku, and we use Heroku Postgres for our database, and they have some really, really great dashboards for this.
Sometimes the query was taking one millisecond on average, sometimes it was taking 150 milliseconds on average. The average never went up to 30 seconds, because it was just a few outliers, not every request. Because it was so variable, I suspected that it was a lock contention issue. Basically, some other database connection somewhere was trying to update the same row, locking it to do that, and then doing a bunch of other work, which apparently takes more than 30 seconds, before letting the lock go so that our download could continue.
As I was digging through the logs to try to find more information on this, I actually found out that Heroku just logs for you any time a query is waiting on a lock for more than a second, and we had a lot of those logs. So we had a lock contention issue. Because we knew that something else was locking the row, that meant there was somewhere else in our code updating the same table, so we needed to figure out where that was.
I didn't have any fancy graphs this time; it was actually a really blunt instrument: I grepped the code base for the table name and looked at the five or so other places that were touching it. The table is called version_downloads; it records how many times this version of this crate has been downloaded on this date, and just not a lot of things need to talk to that table. So we narrowed it down pretty quickly to the update_downloads binary that we have running constantly in the background.
What this binary does is, every few minutes, it goes and looks at the version_downloads table, which is our most granular view of how often a crate is downloaded, and it updates a bunch of other caches. We could calculate the number of downloads that have happened across all crates, across all time, just from this table, but that would be obnoxiously slow, so we don't do that.
We just have a separate place where we store that, along with how many times this crate (instead of just this version) was downloaded per day, and then version all-time, crate all-time, etc., etc.; lots of different things. So what update_downloads does is go through the version_downloads rows, grabbing them in batches of a thousand, and then process each one. The first thing it does is update the row to say, yeah, we've counted the downloads from here.
The problem was that each batch of a thousand was being processed in a single database transaction, so any locks it held were going to be held until that transaction committed. The first row that we processed in a batch would remain locked while we processed the remaining 999 rows in that batch. This binary also logged every time it started a new batch, so we could see how long each batch was taking. Looking at the logs, each batch took just over 30 seconds. Bingo.
This is the actual query that we execute in the download endpoint (you don't actually need to read it), and this is the actual graph from when we deployed the fix. The fix was actually really easy: we just changed it to hold a transaction for one row instead of every thousand rows, so the lock gets released immediately. As soon as we deployed that, the lock contention issue went away, and what sometimes used to take upwards of 30 seconds went back to finishing almost immediately.
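The shape of that change looks roughly like this. This is a minimal sketch, not the real update_downloads code: `VersionDownload` and `process_row` are stand-ins, and it assumes the Diesel 1.x-style `transaction` API.

```rust
use diesel::prelude::*;
use diesel::PgConnection;
use diesel::QueryResult;

// Stand-in for a row from the version_downloads table.
struct VersionDownload;

// Stand-in for the per-row bookkeeping: update the row, then bump the
// per-version, per-crate, and global download counters.
fn process_row(_conn: &PgConnection, _row: &VersionDownload) -> QueryResult<()> {
    Ok(())
}

// Before: one transaction wrapped the whole batch, so the row lock taken for
// the first row was held until all 1,000 rows were done (just over 30s).
fn process_batch_before(conn: &PgConnection, batch: &[VersionDownload]) -> QueryResult<()> {
    conn.transaction(|| {
        for row in batch {
            process_row(conn, row)?;
        }
        Ok(())
    })
}

// After: one short transaction per row, so each lock is released as soon as
// that row's bookkeeping commits, and the download endpoint stops waiting.
fn process_batch_after(conn: &PgConnection, batch: &[VersionDownload]) -> QueryResult<()> {
    for row in batch {
        conn.transaction(|| process_row(conn, row))?;
    }
    Ok(())
}
```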
Now, the other question was: why was the update_downloads script taking 30 seconds for every batch? We didn't use to have this issue; it had only recently started. The problem here was that we were just missing some database indexes. Indexes are basically caches on your database that make it easier to do things like sort by individual columns, and our table had grown to a critical mass of size where the index it had been using could no longer be used.
So this meant that every query on this table had to scan the entire table, row by row, and do all of its operations in memory, and a query that used to take a handful of milliseconds was now taking 14 seconds on average. This one was luckily also fairly easy to fix.
We did some tuning and added some indexes, and we went from 14 seconds to 2 milliseconds, a little bit of a substantial change. At this point the issue was fixed, which was good, because it was 3 a.m. and I wanted to go back to sleep.
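For illustration, adding such an index from a one-off task might look something like the sketch below. The talk doesn't say which columns were indexed, so the index definition here is a guess (including the `processed` column), and it assumes Diesel's `sql_query`; `CONCURRENTLY` keeps the build from blocking writes on a live table.

```rust
use diesel::prelude::*;
use diesel::PgConnection;

// Hypothetical index: the real migration isn't shown in the talk. The idea is
// to give Postgres something to seek on so it stops scanning the whole
// version_downloads table for every background-worker query.
fn add_version_downloads_index(conn: &PgConnection) -> diesel::QueryResult<usize> {
    diesel::sql_query(
        "CREATE INDEX CONCURRENTLY IF NOT EXISTS version_downloads_unprocessed_idx \
         ON version_downloads (date) \
         WHERE NOT processed",
    )
    .execute(conn)
}
```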
Now, at this point we didn't have requests timing out anymore, but we still had performance problems. Everything was not fine.
We were getting reports that sometimes crates.io was taking as long as 6 seconds to load. This wasn't an isolated problem; everybody could reproduce it, but it was intermittent. We initially thought it was isolated because we tried to verify the report a few hours later and it was, I won't say fine, but faster.
But eventually everybody was able to reproduce it, and we all realized, oh, there's something funky going on here. One of the metrics that we get paged on is when response times get too high, so inevitably, I got paged while, well, this is Ruby when she's like thirty seconds old. It's great. I love how newborns just look like little old men. So, the home page is one of the slow endpoints.
The other one was our crate search page, which is the endpoint that you hit both when you're actually searching and when you just click the "view all crates" button; it's still the same endpoint. Because we knew the problem was intermittent, that gave us some specific times to start looking at in the logs, and so we started looking for traffic patterns around the times that things slowed down. It turned out that the problem was bots.
Thanks, yeah. What was happening was we had more and more crawlers coming and just trying to get all of our information on all of our crates, and that's fine, you can come do that, but you've got to be well-behaved, and these were not being well-behaved. They were doing things like sending us five concurrent requests, as quickly as we could possibly respond, to the slowest endpoint on the app, in a loop.
You know that feeling when you're just sitting there trying to eat your chicken nuggets, and these creepy fuzzy giants just keep coming up and asking for a hug, and they won't stop coming, and you just can't handle all this right now? So you start throwing your chicken nuggets on the floor. Yeah, that was us with the bots. Also, I'm sorry, your requests are not chicken nuggets. But they kind of are.
Now, we actually could have solved this one really, really easily. The simplest solution would have been just to upgrade our database server. Right now we run on the cheapest production-tier database that Heroku has to offer, so there's a lot of room for us to scale vertically, and we could have handled this increased traffic if we just had a slightly beefier database server.
But we have avoided doing that for as long as possible, because it forces us to deal with a lot of issues that we would eventually have to deal with anyway, a little bit sooner. The one specifically that we wanted this to force us to deal with was writing a crawler policy, which we didn't have at the time, and, well, we still don't, but we will soon, I promise. We need to.
A
We
also
didn't
have
anything
in
place
to
actually
block
the
misbehaving
bots
which,
if
the,
if
these
bots
weren't
just
misbehaving,
but
we're
actively
malicious,
was
something
that
we
really
really
would
have
needed.
So
that
was
our
main
solution
here
was
to
first
just
give
us
a
mechanism
to
start
blocking
them,
both
by
IP
address
and
by
user
agent,
and
we
saw
our
database
load
go
from
well
double
what
our
database
can
handle
back
down
to
near
zero,
which
is
sort
of
what
we
expected,
but
it
never
really
gets
above.
A
Like
ten
percent
these
days
and
the
site
started
feeling
more
responsive,
not
not
great,
but
not
six
seconds,
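A minimal sketch of that kind of mechanism (not the actual crates.io middleware, and the environment variable names are made up): keep comma-separated blocklists in the environment so they can be changed on Heroku without a deploy, and reject matching requests before they do any real work.

```rust
use std::env;

/// Return true if `value` appears in the comma-separated list stored in `var`.
fn listed_in(var: &str, value: &str) -> bool {
    env::var(var)
        .unwrap_or_default()
        .split(',')
        .map(str::trim)
        .filter(|entry| !entry.is_empty())
        .any(|entry| entry == value)
}

/// Check early in request handling whether to reject this client outright.
fn is_blocked(client_ip: &str, user_agent: &str) -> bool {
    listed_in("BLOCKED_IPS", client_ip) || listed_in("BLOCKED_USER_AGENTS", user_agent)
}

fn main() {
    // e.g. BLOCKED_IPS="192.0.2.10,192.0.2.11" BLOCKED_USER_AGENTS="bad-crawler/1.0"
    println!("{}", is_blocked("192.0.2.10", "curl/7.54"));
}
```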
So that was the big, big problem, but there was still the question of why these bots were giving us so much load just from hitting these endpoints in the first place. They still felt a lot slower than they should be; just listing all of the crates and paginating them shouldn't take that long. Well, it turned out the problem was calculating the recent downloads number, the number of downloads that a crate has had in the last 90 days. We were doing this the simplest way you could: you join to this other table, you group it down, and you sum up the results. And it was fine; the query took about 500 milliseconds, which is not unreasonable for what it was doing, but it was making the site feel sluggish.
So for this we decided to create a materialized view, which is basically just another form of cache that you can have in the database, and you can create indexes on it. Basically, it's just a way for us to occasionally pre-calculate the recent downloads for every crate, and then we can read that information from the cache table much more cheaply.
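As a sketch of what that looks like (the view name, tables, and columns here are assumptions rather than the real crates.io schema, and it assumes Diesel's `sql_query`): the view packages the same join-group-sum query, gets an index, and is refreshed periodically from a background job while reads hit the precomputed rows.

```rust
use diesel::prelude::*;
use diesel::PgConnection;

// One-time setup: precompute 90-day download counts per crate and index them.
fn create_recent_downloads_view(conn: &PgConnection) -> diesel::QueryResult<()> {
    diesel::sql_query(
        "CREATE MATERIALIZED VIEW recent_crate_downloads AS \
         SELECT versions.crate_id, SUM(version_downloads.downloads) AS downloads \
         FROM version_downloads \
         JOIN versions ON versions.id = version_downloads.version_id \
         WHERE version_downloads.date > CURRENT_DATE - INTERVAL '90 days' \
         GROUP BY versions.crate_id",
    )
    .execute(conn)?;
    diesel::sql_query(
        "CREATE UNIQUE INDEX recent_crate_downloads_crate_id \
         ON recent_crate_downloads (crate_id)",
    )
    .execute(conn)?;
    Ok(())
}

// Run from a scheduled job; CONCURRENTLY requires the unique index above and
// lets readers keep using the old contents while the refresh recomputes them.
fn refresh_recent_downloads(conn: &PgConnection) -> diesel::QueryResult<usize> {
    diesel::sql_query("REFRESH MATERIALIZED VIEW CONCURRENTLY recent_crate_downloads")
        .execute(conn)
}
```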
So, looking forward: the first thing I want us to do is fix crate uploads. Now, I know some of you might be thinking, but wait, crate uploads aren't broken, I just uploaded a crate. Trust me, it kind of is broken. The way this works is: you upload your tar file to our server, we then upload that tar file to S3, and then we update the database.
That's fine, but we have that hard 30-second limit on how long we can take, and that means there's a hard upper limit on how big your crate can be, or how slow your network connection can be, before you just can't upload your crate. There was a person, or at least the same IP address, trying to upload a crate, and it was timing out, and for a week they tried every day and their network was just too slow for them to be able to do it.
The next thing I want us to do is load testing. I'm going to talk a little bit later about some of the scale that I think we can reach with our current setup, but this is all based on some very, very preliminary, ad hoc load testing. I want us to use a proper service for this, and I also want to know: if we don't change a line of code, but we get the beefiest database server we can possibly get on Heroku and buy as many web dynos as we feel like we need (not actually buy them for a month, just have them up for 10 minutes for testing), what is the max scale we can reach before we actually have to start changing code?
I also want us to be monitoring more things. Right now we're getting paged on the most important, critical items, but there are a lot of things that we just don't get informed about if they break.
There have been a couple of times I've broken crate uploads in one way or another, and if it's specifically crate uploads that I break, people aren't uploading crates frequently enough for that to trigger any alarms. On average we get about one crate upload every five minutes, and we don't really alert unless our error rate is above one percent across all requests for five minutes.
One of the problems with growing our operations team is that the credentials that you need to manage the site also give you access to a lot of things you don't necessarily need access to. This isn't a question of trust; it's just generally not good policy to give people keys to things they don't need, because they don't need them.
So one of the ways that we're going to fix this is by creating some bots. The bots are able to manage the site, and we can give people the ability to give commands to the bots. This will give us much more granular control over what permissions each individual contributor has. This is also really amusing, because this year bots were our problem, and next year bots are the solution.
We're also going to be looking at redesigning the site. The main rust-lang.org site is going to be getting a redesign as part of Rust 2018, and we're looking at whether we want crates.io to be part of that. We're also discussing, while we're redesigning the site, whether it makes sense for us to switch off of our single-page web app and serve static, server-rendered HTML instead.
crates.io has been a very different kind of project for me. I've been working on open source for a long time, but this project is just an entirely different experience. It's the first open-source project I've worked on where my primary contribution is operations and not code. It's also the first Rust web application I've worked on, and almost every problem I described today was solved by tweaking the database and not our code.
This is an application that is really, truly database-bound. I've built a lot of web apps over the years, and this is the first time I've ever really been able to say that the amount of time we're spending in our web servers is virtually zero. There's no garbage collector to tune, there's no unreasonable memory growth or hard-to-debug memory leaks, and the amount of performance we've been able to achieve with virtually zero tuning is remarkable.
If there's one thing that being on the crates.io team has done for me, it's making me really excited about the future of Rust on the web. People aren't kidding when they say Rust gives you superpowers. No, seriously: we spend more money on log storage than on all of our servers combined. We can process so many requests that we have to spend more to store the logs from those requests than it costs us to actually process them. It's insane.
There are only a handful of us working on it, and each of us only has a little bit of time. Because of this, we've been trying to keep our stack as simple as possible for as long as possible. If you want to get involved with crates.io right now, you have to learn our web server and you have to learn Postgres, and that's it. I'd like to keep it that way for as long as we possibly can; it limits the number of technologies you have to learn.
This is good advice for any startup, and crates.io is very analogous to an early, post-launch startup at this point, so we're basically prioritizing things in terms of keeping our stack simple. That means we're doing a lot of things wrong, for whatever value of "wrong" you want to use. I'm sure when I explained how we count downloads, some of you were horrified, because it's not going to scale, and it's true: download counting is for sure going to be our largest bottleneck going forward. We know that. But we're also pretty sure that we can grow about an order of magnitude more traffic before we even have to worry about just upgrading our database server, and I think we've got about two orders of magnitude, maybe three, after that before we get to the point where we actually have to start changing our approach and can't just throw a bigger server at the problem.
Things are a little different when you're building an open-source service instead of open-source software. Your priorities have to change. There are a lot of people who want to crawl crates.io, and they're building all sorts of cool things with the data that they're getting from their crawlers, and I really want to be able to just let them hit us as fast as they can, give them all the information, and see the cool things they build.
The problem is, if we do that, then we have to upgrade our servers and buy more servers, and that costs actual money. That's just not a thing you worry about when I'm building Diesel, for example; doing what our users want there just costs more time, it's not actually going to cost more money from somebody. And a lot of the things that we have to deal with require actual lawyers, too, and as far as I'm aware, nobody on the team has passed the bar.
When people come to an open source project, most people are expecting to contribute some code, or write some docs, or open issues. But when what you need to grow is things like, hey, can you come join our on-call rotation, it's a little bit harder to make that work. This morning, Niko and Ashley talked a lot about how open source by serendipity doesn't always work; people don't always just pop out of the blue with the pull request you need.
Unfortunately, we got to learn that firsthand earlier this week. On Monday afternoon, while I was working on this talk (actually, I didn't get to work on my talk that afternoon), somebody decided to create a bot, register a user named after crates.io, and register as many crates as they could, as fast as they could, each with an empty readme just saying: if you want this crate, please open an issue on our issue tracker, linking to the official crates.io issue tracker.
We can't read minds, but we think that this person was doing this to make a point. There's been an escalating discussion around our name-squatting policy, and the thread has gotten very intense. It takes a lot of energy to spend time even just reading it, much less responding.
There's been a lot of discussion, and it's a little frustrating to talk about, because I know that most likely nobody in this room is the kind of person I'm talking about. I know that it's a handful of very hostile or angry people, but they end up taking up an obnoxiously large percentage of your time and energy, and from a maintainer's point of view, it very quickly feels like that.
It's so frustrating, just as a maintainer, to have people get angry when you don't respond, and then, when you do respond, they think that you're talking on behalf of the team. I happen to disagree with some of the opinions that these people have, but they assume I'm speaking on behalf of everyone on the team, so they think I'm shutting the conversation down, and now they're mad because I did respond.
Now, we do actually communicate in a lot of channels. One of the things that I'm trying to be careful about is that we're now trying to say much more publicly, hey, the crates.io team is a thing now, we've got a lot of people, and there are a lot of ways we want to communicate. We have to be careful with the messaging that we're using, because when I say things like "an RFC needs to be opened," it's very easy for me to sound like I'm accusing the people who didn't open an RFC of not doing that, when in all of these cases I think it's very, very reasonable for the people involved to just not have known that was even an option. And especially for all of these channels, which I don't think we've ever talked about publicly, there's really not a lot of reason you would know about them. But we're going to start publicizing them a lot, because we want people to get more involved.
So if you want to know what's going on with crates.io: sometimes we'll tweet features from the crates.io status account, and we will always tweet from there when we have an incident. You can follow it, and you'll get a tweet if we are down, and when we're back up. We also have a status page that you can check out, where a lot of that information goes as well.
A lot of folks emailed the incident that happened on Monday to the Rust moderation team, which is fine, that's a perfectly fine thing to do, but we are wondering if we also need to help folks know that we have our own email address: help@crates.io will get you in touch with the team. We also have a Discord channel on the, is it official or is it unofficial?, Rust Discord.
Okay, the official unofficial Rust Discord. We have a crates.io channel there. Also, if you want a response from us, you can open an RFC and we'll respond, and you should come get involved. Anybody is free to come join our weekly team meetings as an observer. They take place every Thursday at 4:00 p.m. Eastern. Normally they happen by text in our Discord channel, and anybody can come to that. Once a month we do them by video, and you'll need an invite if you want to come to that.
But if you just reach out to us, we're happy to invite you. That's a great way to get started if you're interested in joining the team. One thing that we really, really want to grow, and that's super low effort: if you are awake during hours that not a lot of members of the team are, we want more people to know about the email address that just pages an on-call person, so that if somebody reports something in our Discord channel that we're not already monitoring, a human being can make the call, is this worth waking somebody up, yes or no, and then wake us up. So if that's something you're interested in doing, come get in touch with us.
I want to just thank a couple of people before I go. Steve and Ashley are the two people who did the incident response with me this week, and, like I said, it's been a very rough week for me personally, and how professional and talented they are has really helped get me through it.
I also want to thank my company, both for letting me take the time to come here, and because part of the reason we were able to get the retro and the shepherding on the incident done so quickly is that I'm privileged enough to work for a company where I could tell my boss, hey, crates.io is under attack, I think this is going to take most of my time, so I need to take the rest of the week off, and they were like, sure, okay. So thank you, Shopify, for that.
If you want to come ask me questions, or just talk to me, please come find me afterwards out in the hallway. I've got these stickers, this is Ruby's official sticker, and I would love for you to have one. If you want to get in touch with me personally, this is where you can do so. That's all I've got. Thank you very much.