Description
Oh No! The Robots Have Taken Over! - Christopher Wilcox, Google
Speakers: Christopher Wilcox
As part of his work, Chris and his team at Google are responsible for managing libraries for 150 APIs across 7 languages. This results in a crushing amount of toil, making it hard at times to both make forward progress and maintain what has been created.
When you own over 70 node repositories you have to get creative. So the team decided to make an army of sorts. An army of screaming, free roaming robots.
Bots can have a freeing effect on your engineering team. Come to hear how embracing automation has let a team of engineers do what they do best and let the toil fall to the machines.
All right, I think we can get started. Thanks everyone for showing up for the last talk of the day. I expected it to be half empty in here and everyone to be tired, so thanks for still coming. If your career in software development resembles mine in any way at all, you've probably found one thing to be constant over everything else.
There's always more work to do. No matter how much you try, no matter how long you work, there's always something else you can do today, tomorrow, next week. And for me, like I said, this has definitely been the case. I work on the Google Cloud client libraries team, and we maintain hundreds of different packages. To make this problem worse, they aren't even all Node packages; we maintain eight different languages now.
Luckily, no one on the team is expected to know all eight of these languages, but almost all of us know more than one, and we still have to maintain all of these things. So that only helps the problem a little bit. What we started to realize is that we needed help, and that's when the bots came to save us.
And I say the bots came to save us, but I'm obviously speaking poetically. They didn't come to save us, and our bots aren't all that smart, so I wouldn't expect them to be our saviors. I mean, they're helpful, but they tend to be good at doing just a single thing very well. Before we get further into this talk, I wanted to discuss: what is a bot?
What does that mean in terms of software development? When I think about robots, I immediately think of an automotive assembly line and these big arms that replaced the jobs of humans from decades before. But we really don't mean this when we say bots in technology; we're not usually talking about a physical robot of any kind. And we're not even talking about robots.txt files, which are crawled by web crawlers and are fairly similar to the bots we're referring to, but a little different as well.
Bots aren't good at all things, of course. As I said, they're usually good at repetitive work, things that are scoped to one single thing, but they're very good at that one thing. In particular, they're good at things that don't require intuition or any sort of debugging, the sort of action that can be blindly followed with a strict process. And this isn't necessarily a bad thing; it turns out humans are really bad at being robots.
Multiple studies show that when humans encounter repeated processes, we fatigue, we make mistakes, we miss steps. So it's good if we can use something that's good at following rote rules instead of doing it ourselves. I think it should be our goal to eliminate as many of those sorts of tasks as possible.
So to frame this, I want to talk a bit about levels of automation. The SAE, the Society of Automotive Engineers, has a standard for self-driving cars where they separate out levels of automation, the amount of work the system does, so we can have an understanding of the advancement and the sort of risk involved. This helps all of us sort of understand things, and it will allow us to start with simpler bots and get to more complex ones as I talk.
So this is the chart they use. I'm not going to try to describe it exactly, but at level zero it's your typical car, the ones that have been around forever: the human being does everything. Eventually you get to level five, where the machine does everything. In theory you don't need a human being at all; it is entirely unscoped, meaning it doesn't just have a list of tasks it can do. It can do infinite tasks, anything at all. This is the fully self-driving car.
So if we take that and apply it to bots, we get this chart. At level one, we have things that are automated a little bit, sort of at the level of a script or a tool, and eventually, again, we get to level five, where the machine does everything. You can see at each level of this chart, one more thing is taken over by the system; that's the bolded text. I'm not going to try to describe this much further here; I think it'll be easier as we go into examples. So, at level one, put simply:
Our goal is to automate portions of our workflow. Not necessarily make a bot do all the work, but take away the parts where it's easy to make small little mistakes as a human being. You're going to discover the work, you're going to kick off a task, hit a button, run a script, but the work itself will be automated. So let's put up an example. We have a package and our goal is to release it, but releasing takes multiple actions: it might involve tagging a branch, updating a release number, publishing to npm, maybe deploying docs, etc.
So how do we fix that? Well, we can write a script. The script can do all of those things and we can click a button. This might not sound entirely like a bot yet; we don't tend to think of bots as something where we just run a script locally. But this is the most basic bot: the deployment environment is your machine, and it does a task for you. The only thing that's a bit odd is that at level one we're still triggering an assisting system.
A representative task for that would be that the script we authored previously could be forgotten and never run, and it would be good to know if that happens. If we have some release ready to go, let's say we've updated the release number within package.json and we haven't published yet, it would be cool if we had some sort of monitoring to let us know about that, so we don't just let it sit stale forever.
And when we get into level 3, this is where I think you start to really see bots as very useful. At this point we let the bots start doing work for us and we have to supervise them a little bit, probably check in on them, but for the most part they're starting to fully self-monitor and we're not going to have to do a lot of intervention ourselves.
An example of how that might manifest: we have issues in a repository that go stale. We all have repositories we have to work in, and issues get assigned to the developers on the team, but occasionally people on the team become overloaded, or that individual maybe isn't the subject matter expert, so the issues stall out. So we can implement something that juggles these issues around; it could assign them to a different team member to see if that would help us get traction.
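A minimal sketch of what that juggling might look like as a pure function. The rotation policy and the team usernames here are hypothetical, just to show the shape; the real bot's policy may differ:

```javascript
// Given a stale issue and a list of team usernames, hand the issue
// to the next person in the rotation after the current assignee.
function reassignStaleIssue(issue, team) {
  const idx = team.indexOf(issue.assignee);
  // Wrap around the end of the team list; an unknown assignee
  // (idx === -1) simply lands on the first team member.
  const nextAssignee = team[(idx + 1) % team.length];
  return { ...issue, assignee: nextAssignee };
}
```

A real bot would call this when an issue has had no activity for some threshold, then update the assignee through the GitHub API.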
Another example you might see here is something like a CLA bot, which can notice that a contributor hasn't signed the CLA and sort of walk them through that. It's going to require very limited monitoring; the monitoring at that point is mostly in the fact that nothing's going to get merged without a human, but the bot can still go through that entire interaction with a new contributor. And then we get to level 4, and things get a bit more advanced: we don't really have to supervise anymore at all.
So what does an example like this start to look like? Sometimes it turns out that in our repositories we have branches that get created for PRs, and contributors forget to delete them, and this starts to make things get a bit bloated and hard to see what's going on. So maybe we could write a bot that deletes them. And I feel like this is a point to mention that, as you go through the levels, risk starts to increase.
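A sketch of the selection step such a cleanup bot might use. The branch objects here are a made-up shape for illustration; a real bot would build this list from the GitHub API:

```javascript
// Pick branches that are safe to delete: branches whose PR has been
// merged, excluding protected branches like master.
// Each branch is assumed to look like { name, merged, protected }.
function branchesToDelete(branches) {
  return branches
    .filter((branch) => branch.merged && !branch.protected)
    .map((branch) => branch.name);
}
```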
This is a rather risky thing to do. What if those branches are needed and you made a mistake in this bot? This is where, when I say it's not really supervised anymore, it starts to take actions that would be hard to recover from. Other bots in this category are things like merge-on-green to master, where maybe you reviewed and approved a PR earlier, then someone merged something else, the CI passed, and it gets merged in, and that could be rather risky.
There are a lot of bots in this category, and the example we'll use later falls into this level four. And finally we get to level five. At level five, the easiest way to describe it is that the bot starts to become its own boss, because, unlike the previous ones, it's now unscoped. It no longer has a question and a solution; it just responds to all questions with all solutions. And I think you'll find that science fiction has taught us that unscoped bots could be a rather dangerous thing.
So while we can't use a bot to do this for us, we can leverage a series of frameworks, and our team found one that we liked a lot called Probot. This is good because most of us don't want to spend our time authoring bots; the bots are a means to an end, they're not the solution itself. Being able to leverage other open-source products means we can get back to our product and not just be bot authors. Probot integrates really well with GitHub. It was authored by a GitHub engineer, and it allows us to trigger small Node apps, based on a GitHub context, from many different GitHub events. The nice thing, too, is they have a variety of samples we can use to sort of inspire ourselves and understand what to do.
So a simple question that we might ask ourselves is: could we have PRs from the renovate bot automatically run CI for us, and not wait for an engineer on the team to go tell the CI system to run? The reason this matters: systems like Travis, Circle, and the internal CI we use restrict which contributors can kick off builds, and this is important. Most build systems have secrets, and if any random person on the internet can run a build, they can modify those files and expose secrets.
So we don't allow that to just happen; it needs to be a trusted contributor of the repository, of the project. But renovate isn't really a contributor. It's a thing we use, a thing we trust, but it's not part of the GitHub org. And so we could probably write a bot to do this, and that seems like a small enough size and something direct: when renovate creates a PR, and we detect that it's the author, we run CI.
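As a sketch, that detection can be as small as an allowlist check. The account names below are illustrative, not necessarily the logins renovate actually uses:

```javascript
// Trusted automation accounts whose PRs may kick off CI without a
// human in the loop. Illustrative names, not an authoritative list.
const TRUSTED_BOT_AUTHORS = new Set(['renovate[bot]', 'renovate-bot']);

// Given the GitHub login of a PR's author, decide whether the bot
// should tell the CI system to run.
function shouldRunCI(prAuthor) {
  return TRUSTED_BOT_AUTHORS.has(prAuthor);
}
```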
The next thing we need to figure out is: what sort of events do we need to trigger this on? We could try to trigger on all possible events, but that's probably going to mean it runs too much. So we might want to trigger on the initial PR, maybe on updates to the PR, maybe on the creation of an issue; that's probably not relevant in this exact example, but it's a common one. There are dozens of different events you can trigger on.
But for me, these four are most often the ones you end up using. And the next decision you get to make is: is our bot going to alert, or change your system? This again goes back to risk. If a system only ever alerts you to a problem, it's generally not that risky. In the case of something like a CLA bot, or maybe a linting bot, it's likely to just leave a comment on the PR. It's not going to merge your code, not going to run your build system; relatively safe.
On the other hand, if we make changes that run the build system, merge, or publish, they become more risky. These are the sort of things that cause us incidents, and so you need to decide how much risk you're willing to take on. In this instance, for the case of this bot, we're likely going to add a label to our repository that says it's safe to run CI. So this looks a little more like a change, and that's a little bit more risk, but we can't get the value without that.
Probot comes with a quickstart we can run via npx, and it's a pretty reasonable place to start if you've never written a bot before. It's going to populate a Node project for you that has most of the templating, after asking you some simple questions. We didn't use this exactly, because we found that we wanted to do templating on top of Probot, and so we recently added our own bot generator that does basically the same thing. There are a few less questions, because we can make a lot of assumptions.
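For reference, the quickstart mentioned above is run with npx; the app name here is just an example:

```shell
# Scaffold a new Probot app; the generator asks a few questions and
# produces a ready-to-run Node project.
npx create-probot-app my-first-bot
cd my-first-bot
npm start
```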
For instance, all the authors are Google, and so that makes things a little simpler, but this also allowed us to do things like template our READMEs, have consistent style across all our samples, and have similar targets inside package.json. Like I said, at the end of the day this is just another Node package. All the bot system does is run a method when an event happens.
It's a pretty straightforward application, looks like a lot of things you've used before, and has a very bare-minimum set of dependencies. We're going to use something from Octokit to interact with GitHub, and we're going to use Probot. In our case, we leverage Google Cloud Functions for this, and we have a package we wrote, gcf-utils, to let us do that. So, diving a bit deeper into the source, I wanted to look at the code that isn't just boilerplate.
Most of this you'll probably never edit, but inside one of the source files you're going to find a function that takes an application and, on a list of events, performs some action. In this case we have a few different triggers we're going to act on: if a PR is opened, reopened, or synchronized (that's a PR update), we want to run our handler.
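In Probot style, that registration looks roughly like the following. This is a simplified sketch of the shape, not the team's actual bot; the function name and log line are made up:

```javascript
// The events our handler subscribes to: PR opened, reopened, or
// synchronized (i.e. the PR branch was updated).
const TRIGGERS = [
  'pull_request.opened',
  'pull_request.reopened',
  'pull_request.synchronize',
];

// A Probot app is just a function handed an `app` object; we attach
// one handler to the list of events above.
function ciTriggerBot(app) {
  app.on(TRIGGERS, async (context) => {
    // A real bot would inspect context.payload.pull_request here
    // and, for a trusted author, mark the PR as safe for CI.
    context.log('received a pull request event');
  });
}
```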
So how do you set up your environment to author these bots locally? This is pretty GitHub-centric in the way I describe it, but it's likely worth noting there's nothing GitHub-specific about what we're doing here. You could change these events to not be GitHub events; you could use Probot and send it webhooks from somewhere else. All possibilities.
So the first thing we do to support local development is start a proxy. This is so that we can use our local development system as the target of the webhook that GitHub provides. There's a service called Smee we can use. All you have to do to use this particular proxy is go to smee.io, click a button, and it will give you a slug URL that you can then use to route your events from GitHub to you.
So we start by running our proxy once we get that slug URL, and this sets everything up for us. I should mention it's possible you don't even need to run this step, but we found environments where, if you skip it, things might not configure properly, so run it once; you only ever run this the first time you set your machine up. And the next thing you do is run npm start.
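Concretely, the two steps look something like this; the Smee channel URL is a placeholder for the slug URL you get from smee.io:

```shell
# Forward webhook deliveries from your Smee channel to the local
# Probot server (which listens on port 3000 by default).
npx smee --url https://smee.io/your-channel-slug --target http://localhost:3000/

# Then, in another terminal, start the bot itself.
npm start
```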
Like I said, it's basically just a regular package once you've used Probot, and it will direct you to go to port 3000 on your machine. This is so we can go ahead and set up the GitHub app. You'll be presented with a screen that looks like this, where you have to register your GitHub app. You'll go through the GitHub app creation process: we'll give it a name and configure permissions. Once we get around to configuring permissions, things start to get a little bit harder to do, because we have to ask ourselves some real questions.
So the permissions being reviewed here are from the previous step, and I point this out because if you don't do this, nothing interesting will happen; it will just sit there a while. It's an easy mistake to make, because you think, well, I've made a bot and I've set permissions, but you have to do this step. You also might have to do this step again if you ever change the permissions, and that's a gotcha that's caught me: I hadn't given it all the permissions I needed the first time around, and if you forget to do that again, it won't trigger on those things until you come and do this. So we click on that link, we can say, all right, these permissions are safe, I'm happy with that, and we can install it on any repository, or all repositories in an org. Something that I do is create a repository purely for testing. It's not important; I can make PRs against it, mess with branches, whatever, it'll be fine, and that's what I target.
To do this, we need to set a few environment variables. Every GitHub app comes with an application ID, a private key, and a secret for webhooks. These are pretty straightforward to get. The app ID will be at the top of the app page; we can export it as the app ID. The webhook secret is a string that you set, so for demonstration purposes in this case it's a Probot demo value. And then we need to configure the private key, found at the bottom of the page.
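Put together, the environment setup looks something like this; the values are placeholders, and the key path is wherever you saved the downloaded .pem file:

```shell
# Application ID, from the top of the GitHub app settings page.
export APP_ID=12345

# Webhook secret: the string you chose when registering the app.
export WEBHOOK_SECRET=probot-demo

# Private key, downloaded from the bottom of the settings page.
export PRIVATE_KEY_PATH=./my-bot.private-key.pem
```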
So here we show a shell running npm start. It's forwarding from Smee to localhost:3000, and we're starting to get these POST requests coming through. The POST requests are all the result of me opening PRs against that test repo: you can see that I've updated the README a bunch and opened and reopened a pull request. Then we can go over to Smee and start to look into what events we're getting. If we expand one of those pull request events, we'll see the JSON payload and what it looks like on a live repository. You get a better idea of what information our bots are receiving and how to respond to it. So this is handy for live debugging and just ad hoc testing, but it's also useful for taking these payloads and turning them into unit tests that are repeatable, and that's where this tends to be most useful.
So what does the deployment stack look like? Now that we can run it locally, how do we get it somewhere that's not running on our development machine? We use a variety of services. Like I said, we put this on Google Cloud Functions, and we ultimately use Cloud Storage and a thing called KMS, the Key Management Service.
So let's talk about those components a bit. The most important bit is Google Cloud Functions. Just as a call-out right away: at the time we started this, GitHub Actions didn't exist. If we were starting this again, that might have been an approach we looked into, but we had already built this on Google Cloud Functions, and so it doesn't really make much sense at this point for us to go back.
Google Cloud Functions take a web event, any sort of HTTP trigger, and they can start executing. So they're these little on-demand actions, which is a really good fit for a bot. They tend to be a good fit for anything that doesn't have a lot of state management and that's not being called very frequently, and most bots aren't; they're called intermittently, maybe just during business hours, so this is a good application for that. And there's an existing Google Cloud Function handler that Probot provides.
There's a potential security risk in using environment variables in the bot, just like there would be for a CI system. So instead of using environment variables, we can inject these things through the KMS system, which ultimately stores them in Google Cloud Storage; then they're fetched as they're needed and immediately piped into the command, so they're never stored as an environment variable. It would be more difficult for someone to capture those things. The utility that's released by Probot doesn't support this, so we ended up writing our own.
It is a rather simple bootstrapper, so that wasn't too much work. But if this sounds interesting to you, if you would want to use Cloud Functions and some of these more advanced features, feel free to reach out to me or come visit me at the Google booth; I'd like to talk about it. We haven't yet released this to people, it's only in our repository, but if there's value, that's something we could consider open sourcing further.
This allows us to use the secrets that we're storing, as well as the deployment pipeline; no developer needs to manage the publishing, it just self-publishes as we need. So that's good for us. But let's step away from that a bit, get out of some of the Google specifics, and just talk about what it would look like if you were to publish a single bot to something like Cloud Functions. It's going to look, again, like a lot of Node apps you've written before. We're going to have a compilation step; all of our bots are TypeScript (most of our code base, in fact, is), and we're going to make a target directory and copy some things over to it. That's the build step. Technically this target part isn't necessary, but it is a bit of a safety: it means that when we go to deploy, we don't deploy anything unnecessary. We're only going to deploy the things we most care about for the bot, not random artifacts that happen to be in the repository, for publishing.
We provide a function name, and Google Cloud comes with a tool called gcloud; we can use that to upload a function. Once we provide a directory, we tell it to use all the KMS secrets for us, and it's going to go through and upload it for us through gcloud. That can be done with or without Cloud Build. The reason we use Cloud Build, ultimately, is that we have more than one bot, and so it's nice to have a centralized system for that and a pattern we can follow.
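Without Cloud Build, a single-bot deploy of that target directory might look like this; the function name, runtime, and flags shown are illustrative, not copied from our scripts:

```shell
# Deploy the built bot from the target directory as an
# HTTP-triggered Cloud Function.
gcloud functions deploy my-bot \
  --source=target \
  --runtime=nodejs10 \
  --trigger-http
```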
So I hope this has helped you understand a bit of how we do bots, at least for Google Cloud client libraries, and has inspired you to embrace using bots to free your team from a lot of gardening and allow you to do more meaningful work. I would say that any task that's repeated often is a good candidate for bots, and virtually all projects can benefit from using them. I'd also like to mention that all of our bots are open sourced; they're on GitHub and can be looked at.
This is the repository they exist in. There are a variety of instances of them, and almost all of the examples I talked about are bots that exist and that we are using today. I also wanted to take a moment to thank the others that contributed to this. I am certainly not the only one that has worked on this project; a lot of people have, and I just wanted to take a moment to thank them all for their efforts. So thank you all for having me.