Description
The Linux kernel is the largest collaborative software development project ever. This talk will discuss exactly how Linux is developed, how fast it is happening, and how we all try to stay sane keeping up with it (hint, git is the reason).
Greg Kroah-Hartman is a Fellow at the Linux Foundation. He is currently responsible for the stable Linux kernel releases, and is a maintainer of the USB, TTY, and driver core subsystems in the kernel as well as other portions of the codebase that he wishes he could forget about. He is the author of two books about Linux kernel development, both free online, and has written many papers and articles about the Linux kernel.
Transcript

Hey, we're running Linux; you never know, all right. I'm Greg. Like you said, I do the stable releases, and Linus does the development releases; I'll talk a little bit more about that. As you all know, git came from Linux: it was our development model that spawned this crazy beast. Sorry. But it works really, really well, so I'm going to talk about what we do and how we use git, because it's a little bit different from most other groups. First, feel free to heckle.
That's 4.5; as you can see, 21 million lines of code. A lot of code. All the drivers for the Linux kernel are in the source tree. This is different from any other operating system before it; most of them kept drivers separate. We put everything in the tree, and that makes it better, because we can change APIs and we can change the way things work.
We can see how multiple drivers work for roughly the same hardware and merge them together. On average, a driver in the Linux kernel is about a third the size of the equivalent driver on other operating systems, so it works out really well. But still, we have a huge, huge tree, and you don't run it all. My laptop runs about 1.6 million lines; I think your phone runs about 3 or 4 million lines of code, which is different from what I run. Only about five percent of the 21 million lines, roughly 2 million lines, is the core of the kernel; everybody runs that, and the rest is all the other stuff.
So here's what we did last year: 4,000 developers from at least 387 companies. I say at least because I keep track of this, and I haven't been doing it for the past year. If you submit a patch to the kernel and it's not obvious who you work for, I'll send you an automatic email asking; again, I haven't been doing that lately, so it should really be about 450 companies. We think we cracked 400 different companies about three years ago.
This is what runs the world. The Linux Foundation told me to stop using the word scary in this presentation; I'll use it a lot anyway. This is supposedly a stable kernel, and it runs the world. It runs everything: it runs the internet, it runs your laptops (well, a few of your laptops), it runs all your phones, it runs lots and lots of things. It runs Wall Street, it runs your air traffic control, things like that. Scary stuff; sorry, I won't say scary. And that's not all; look at this number.
You should be scared of that; it's a lot. And we keep going faster. Every year we go faster; every year we think we're plateauing. Every year I do this presentation I say we can't possibly go any faster, and every year we do. Ten years ago we were doing two and a half changes an hour, and everybody thought that was unsustainable; there's no way we can keep up with that. Five years ago we were doing five changes an hour, and again: we can't possibly go faster than that. We're going faster every single year.
Every single release we think we're kind of flat, but the numbers still keep going up and up and up. And the interesting thing is, these changes aren't just in drivers; they're across the whole tree. Out of all those lines of code, I said the core of the kernel is 5%, and 5% of the changes are in the core. Drivers are about 40% of the kernel, and 40% of the changes are in drivers. Networking is about 10%, and 10% of the changes are in the networking stack. It cuts across the whole tree.
We went even faster: with the 4.3 release, two releases ago, we were up to 8 changes an hour. We broke 9 changes an hour last year for one release, and I think the release we're about to do in a couple of weeks will be about 9, maybe 10 changes an hour, and the largest release yet by size. We're going faster and faster and faster.
What this means is, even if you were comfortable with the amount of work it took to track us a year ago, if your driver or your tree is not merged into the mainline kernel, we are going faster, so you have to do more work to keep up with us. That's something a lot of companies don't realize. If they fork and go off on their own, that's great; it works for a while. But again, we are going faster and faster and faster, and they can't keep up.
They have to invest more money and more time. The best thing to do is merge your code into the kernel; keeping your code outside the kernel costs you money. So how do we do this? Two big things: time-based releases and incremental changes. Time-based releases we started about, gee, 10 years ago. We said: let's stop doing this stable/unstable development cycle; let's just do a new release.
Everything is going to be stable, and we said let's make it between 2 and 3 months; we're now at about 6 to 7 weeks per release. This is good. It means that if you're developing a new feature and you try to get it merged and it gets rejected, well, there's another release in two months, and you can pretty much get it merged then.
You don't have the back pressure of "oh no, it won't make this release, and because we only release every six months or every year it has to be accepted now." We can push it off, get the best technical solution working, and merge it next time. It takes away that barrier: the barrier of us having to accept stuff that we don't want to. It's also very predictable; we know when a new release is going to come out.
Companies can plan: I want to base my phone on this, I want to release it on this date, so I'm going to pick this kernel, and that's when I need my stuff merged. It works out really well; we are very, very regular. Linus keeps wanting to go faster; he's knocked it down to six weeks a couple of times, and he says he wants to do five weeks. I don't know, that might be tough. So how do we do this?
So here we go, numbers, all right. Linus releases 4.2; that's the zero at the top. Then (I'll talk a little more about this later) all the developers throw a bunch of stuff at him for two weeks, and he does release candidate 1: the number increments, so that's 4.3-rc1. After that, every single week he does another release candidate. From the first release candidate on, it's bug fixes or regression fixes only. We are very, very serious about regressions, because we move so fast.
We want people to be confident about upgrading. You should always feel comfortable upgrading a kernel; it should just work, and if it doesn't, we did something wrong. We made this statement a decade ago: we will not break user space. And we've held to it. Facebook has talked about how they update their internal servers on every single release; they haven't had a problem in about three years. It works really, really well. So: bug fixes only, bug fixes or regressions.
We revert things using git; I'll talk a little more about how we use git later. By rc6, rc7 everything is settling down, Linus does the release, and off we go. That's how we do it.
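That revert step is plain git. A minimal sketch in a throwaway repository (the file names and commit messages here are invented for illustration):

```shell
# Set up a scratch repo with an identity so commits work anywhere.
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.name "Example Dev"
git config user.email "dev@example.com"

# A good change, then one that turns out to cause a regression.
echo "stable code" > driver.c
git add driver.c && git commit -qm "driver: add stable code"
echo "buggy change" >> driver.c
git commit -qam "driver: risky change"

# During the -rc cycle the regression is reported; rather than
# patching over it, the change is reverted and history keeps both.
git revert --no-edit HEAD
git log --oneline
```

After the revert, the file is back to its pre-regression contents, and the log records both the bad commit and its revert, which is exactly the audit trail the rc process wants.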
After we'd been doing this for a few years, we realized: wait, what happens if there's a big nasty bug in 4.2? What do we do? So we came up with the idea of stable kernels.
Here's what I do; I'm in charge of these. I fork from Linus and I do 4.2.1, then .2, .3, .4; about every week I do a new stable kernel. The rule for stable kernels is that a fix has to be in Linus's tree first, and that's really, really important. We never want to diverge; we never want to take something into the stable kernel that isn't already in Linus's tree. Conversely, I don't want to take patches that have been modified.
Sometimes people say, "well, the way we fixed this upstream is a little hairy, so I'll give you a simpler patch." No: I want the identical patch, because 95% of the time, if I take something that's been rewritten a little bit, it's buggy, because it hasn't been tested. It always happens. So we have some rules on what can go into a stable kernel: bug fixes only; about a hundred lines at most; and it has to be obvious, which is in the eye of the beholder, of course.
Or it can just be new device IDs: add a new device ID for your USB driver or your device, something like that. There have been some bigger changes going in at times; once the memory-management people said, "here's a series of 20 patches that are each about 100 lines, but they all fix something."
I'll take that, but again, it has to keep to the rules. The more powerful thing is that I get to throw each stable series away when you upgrade to the next one. There's a little overlap of a few weeks, but then we throw it away and on you go. All the community-based distros that you run, like Fedora, openSUSE, Arch, and Gentoo, run off these stable kernels. The enterprise people pick these kernels and support them for a longer period of time.
They like a long-term kernel. I used to work for Novell and SUSE, and I was in charge of our kernel team, and I realized that I could use these stable kernels in my day job, because we were maintaining a kernel for a couple of years anyway. So let's do that. So we now have something called a long-term kernel: I pick one kernel tree a year and I maintain it for two years. Right now that's 3.14 and 4.4.
We moved from 3 to 4 just because the numbers got big; people think the difference between 3.17 and 3.21 is bigger than the difference between 3.7 and 3.10, but it's all just incremental. So normally I pick one a year. 4.4 was odd: at the kernel summit we decided to do something different, and I picked one at the beginning of the year, so that one was a surprise. But this way it actually lined up well.
The new Chromebooks that are coming out will be 4.4-based, and new Android releases are 4.4-based as well. I go around and talk to companies to see what they're using; I think Debian is going to be on 4.4, maybe Canonical too, I don't know. So I maintain these for two years and then I drop them. This works well for a lot of companies; bigger companies like SUSE, Red Hat, and Canonical maintain them on their own a little bit longer. And in Japan right now they are replacing what they call their "social infrastructure."
"Social infrastructure"; I think they need a better name, because I think of Twitter when I hear that. It's their streetlights and their railway systems and all of that, converting over to Linux. A lot of companies there have come to me and said: we need you to maintain a kernel for 20 to 30 years. And I said yes! Retirement! So they're going to have an interesting problem: what are they going to pick?
Probably the next long-term stable kernel I do, next year, and they want it for 20 years. That's an interesting question: how are we going to support that, and what are we going to do? I'm working with a number of companies there, through the Linux Foundation, to figure out how we're going to maintain a kernel for 20 years. That's going to be interesting; think about what Linux looked like twenty years ago. It was pretty bad.
The other interesting thing is that they're going to have hardware that runs for twenty years, so I might end up with a big stoplight in my living room or something, I don't know. So, long-term kernels. Questions?
A
This
is
how
we
do
releases.
You
guys
are
easy.
Oh,
come
on
alright!
Now,
let's
talk
about
get
so
developers,
we
have
almost
four
thousand
of
them
and
they
make
a
patch.
They
make
a
change,
and
every
change
that
goes
into
the
Linux
kernel
has
to
be
standalone.
It
has
to
not
break
everything
and
it
has
to
be
quote
correct.
We
cannot
break
the
build
all
those
lines
of
code.
All
those
changes
that
go
into
the
kernel.
None
of
them
breaks
the
build.
Oh, one other thing I have to mention: those numbers are the patches that are accepted, not the ones that are submitted. On average I accept about one third, on a good day one half, of the patches that are sent to me. So there's a lot of work going on out there: it takes people a number of tries to get changes in, and a lot of stuff gets rejected. There's a lot more work than just what you see accepted. That gives you a sense of the scale.
Okay, so when developers make a change, it has to be obvious; it has to be broken down to do one thing. If you're going to do something complicated, we require you to show your work, like your old math professor said: you break your change down into individual steps along the way, every step is correct, and no step breaks anything, so you end up with a long series. This puts more of the burden on the developer, but developers are what we have a lot of; we spend developers' time because we don't have many maintainers.
It's a hard development process for people to learn, but that's what we make developers do. So you make a change, and then you send it through email to the owner of that file or driver. We have, how many do I count, about 1,000 maintainers these days, which is crazy if we have 4,000 developers. But I looked at the list.
There are a lot of people on the maintainers list who haven't done much work in a long time, so they don't show up on the list of developers; I think we have about 700 active maintainers. So you make a change and you send it off through email. We have mailing lists for every different subsystem of the kernel: there's a USB mailing list, SCSI, block, memory management. And there's the big Linux kernel mailing list, which gets about 400 to 500 emails a day.
The big secret is that nobody reads it all; we all filter. Andrew Morton, I think, reads it all, but he's different; I'll show you more of the work Andrew does later. So you make a change and we review it: we look at it in email and we respond back with inline comments. We don't top-post, we don't post in HTML; it's all plain text. And that's good, because we want people who might not have English as their first language, people from other backgrounds. Through email you're anonymous, in a way.
It's just what you write, just the text right there. Some projects, and I will point at OpenStack, make people get together in a room, or make people work together on IRC, and I don't think that works well. I want it so that if I respond to an email, the other person can take a day, run it through Google Translate, think about it, and then respond back. That works better, especially for people for whom English is not their first language.
We want to be much, much more inclusive. We don't know your race, we don't know your nationality, we don't know anything, and we don't keep track of it, and that's good. So again: email, plain text, old school; it works really, really well. The maintainer looks at the patch, says yes, no, or whatever, and on it goes. So here's a change; you all know what patches look like, and this is a really old one. We make people include a few interesting things.
First line: good git history (I mean, we created the git style, so I guess you can look at it that way): a one-line summary, then changelog text saying what the patch did, and then Signed-off-by. Every person who creates a patch has to add a Signed-off-by. We don't require copyright assignment; we don't require CLAs; you own the copyright. You just have to say Signed-off-by, and I'll talk about what that means in a minute. Then the owner of that subsystem, which at the time was David for the USB gadget code...
...he said in an email, yes, I acknowledge that, looks good. And I picked it up at the time and said great, I signed off, and I added it to my tree. Again, the change was an obvious one: let's actually look at this variable before we dereference it. Wonderful; that's it, that's a patch. Obviously correct, and in it goes.
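The mechanics of mailing such a patch are built into git. A sketch of the sending side in a throwaway repository (the commit, file, and list address are invented for the example):

```shell
# Scratch repo with one commit to mail out.
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.name "Jane Dev"
git config user.email "jane@example.com"
echo "fix" > gadget.c
git add gadget.c
git commit -qm "usb: gadget: validate pointer before dereference"

# format-patch renders the commit as a plain-text email: one-line
# summary, changelog body, trailers, diffstat, and the diff itself.
git format-patch -1 HEAD

# It would then be mailed to the subsystem list, e.g.:
#   git send-email --to=linux-usb@vger.kernel.org 0001-*.patch
# (not run here; send-email needs a configured SMTP server)
ls
```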
So let's talk about Signed-off-by. The git developers know this, because they require it as well. Saying Signed-off-by means you agree to the DCO, the Developer Certificate of Origin. It's a little more complex than this, but what it boils down to is: you assert that you are allowed to contribute this change to this project, under the license of the project. A lot of other groups have picked this up: etcd from CoreOS is using it, Samba is using it, Docker is using it. A lot of other groups are using it, and I really recommend it. It's a very solid body of legal work, and even though it's legal language, it's very readable. It's really, really good.
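Mechanically, the trailer is one flag away: `git commit -s` appends a Signed-off-by line with the committer's configured identity, which is the developer's assertion of the DCO. A sketch in a scratch repository (the name and change are invented):

```shell
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.name "Jane Hacker"
git config user.email "jane@example.com"

echo "fix" > core.c
git add core.c
# -s adds "Signed-off-by: Jane Hacker <jane@example.com>" to the
# message, recording the DCO assertion in the commit itself.
git commit -qsm "core: fix off-by-one"
git log -1 --format=%B
```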
What Linus did here is like the exact opposite of a CLA: it's you giving permission, not a company requiring all these other things. It works really, really well; the git developers use it, and it works out well. So everybody says Signed-off-by. What this means is that if you go back and look at this patch, not only is it signed off, which is the legal piece, it's also now a path of blame. If this is wrong, somebody gets to say: hey Greg, hey David, hey Robert, fix it. Your name is on it.
It isn't a company's name, and you're not hiding behind an alias: it is your name on the patch. And when your name is on something public, you do better work; you just do. That's a really, really powerful social experiment, and it's made the Linux codebase really good. But it also means it's the best-audited body of work. I can take any one of those 21 million lines of code and, with git blame, tell you who changed that line and who reviewed it. Every single line out of 21 million lines of code.
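You can exercise that audit trail with two stock commands: `git blame` names who last touched each line, and `git log` on that change shows the chain of sign-offs and reviews. A sketch (a toy repo stands in for the kernel tree; all names and trailers are invented):

```shell
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.name "Some Maintainer"
git config user.email "maint@example.com"

printf 'int a;\nint b;\n' > core.c
git add core.c
# Trailers carry the review history for every change.
git commit -qm "core: add counters" \
  -m "Signed-off-by: Dev One <dev1@example.com>" \
  -m "Reviewed-by: Rev Two <rev2@example.com>"

# Who last touched line 2, and who signed off on that change?
git blame -L2,2 core.c
git log -1 --format=%B -- core.c
```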
I've given talks to companies about this, and they can't even claim that for their internal code bases. So not only is it the largest community-driven software project in terms of size, it's also the best-audited body of code. There's no question where anything came from, which is pretty amazing. So: a path of blame. It's fun.
So the developer sends the patch to a subsystem maintainer. Subsystems are things like USB, PCI, networking, and wireless, and we all have git trees now. There are a bunch of git trees; we use git.kernel.org. We don't really use GitHub; I think we have about two or three hundred different trees publicly. A few people use GitHub, but most everybody uses git.kernel.org. These trees are public, and we have different branches.
We have one branch with bug fixes that's going to go to Linus now, and one branch with work that's going to go into the next release, and we put the patches in there, publicly. Those are immutable branches: we don't rebase, never rebase, because they're public.
Then, every single day, Stephen Rothwell in Australia takes all those subsystem trees and merges them together, and then he builds the result on about 20 or 30 different architectures, maybe more now, and boots it; I think he boots 30 different configurations and tests it.
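The daily linux-next build amounts to re-merging every maintainer branch into a fresh integration branch. A toy version of the idea with two subsystem branches (all names invented):

```shell
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.name "Next Builder"
git config user.email "next@example.com"
git commit -q --allow-empty -m "mainline base"
git branch -M mainline

# Two independent subsystem branches, as published by maintainers.
git checkout -qb usb mainline
echo "usb" > usb.c && git add usb.c && git commit -qm "usb: new code"
git checkout -qb net mainline
echo "net" > net.c && git add net.c && git commit -qm "net: new code"

# Rebuild the integration branch from scratch, folding both trees in
# with one octopus merge; the result is what gets built and booted.
git checkout -qb next mainline
git merge -q -m "next: merge usb and net trees" usb net
ls *.c
```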
Then I get an automatic response every night if something went wrong: if my tree conflicted with the networking tree, or my branch over here conflicted with some other subsystem branch, and whether it all boots. This is linux-next, and it happens every single weekday. It's really good. If you want to see what the next version of Linux is going to be, use linux-next. If you want to do development, do it on top of linux-next, not on top of what Linus has, because what Linus has is old. So that's what happens: every single day, all our trees get merged. Yeah.
[Audience question about compile testing.] Hopefully the developer compile-tested it, and then the next person in the chain, and then the next. I have gotten a lot of patches where the person never did a compile test, so it's up to the subsystem maintainer to at least build it. Something else happens too; I'll get back to that. Linux-next just tests the merge, tests that the merge works. Because even though I'm the maintainer of USB in the kernel, you don't own anything, absolutely. The networking people can say: hey...
A
We
got
networking
drivers
that
mess
with
USB
stack.
We
need
to
change
this
over
here
and
then
great.
It
goes
to
the
networking
stack.
Sometimes
they'll
see
it.
Sometimes
they
won't.
Other
people
can
change
your
code.
Yes,
you're
a
maintainer.
Yes,
your
name
is
on
it,
but
other
people
can
change
it
as
well.
It
isn't
absolute
that
works
out
great.
Then Andrew Morton is over on the side, picking up the things that aren't maintained. We have a number of subsystems that people don't maintain anymore, or, like those thousand maintainer entries, some people don't respond to their email anymore. Some people pass away; we're all human, people die, people move on. So he picks up all the random pieces, and he has a tree out there.
Andrew doesn't use git; he uses quilt. Quilt is a stack: a bunch of patches on top of a base. It works out really nicely, and I use quilt for the stable kernels too. Git doesn't really work well for what Andrew does, because with quilt he can rebase his tree and do fun things like that. So that's how the flow happens. This was all working really well, and then a couple of years ago we realized nobody was testing anything.
It's really hard to write a test for an operating system. "Hey, it booted": that works, and booting is a non-trivial thing, to be sure. But it would be nice if other people tested, and nice if we didn't break something on one of the many architectures we support. So Intel came along; some developers there did a skunkworks project, and I think they just grabbed a whole bunch of CPUs nobody was using.
We don't know what they're using. They created something called the 0-day bot, and the 0-day bot scans all our trees, all our public trees, and test-builds them: on, I don't know, 50 different architectures, different random configurations, different other things. And they run tests, including static analysis. We have a bunch of static analysis tools; Coccinelle is a really good one for finding patterns in C code and seeing what's wrong, and we have a ton of rules for it.
We have other static analysis tools too; we have something called sparse, which Linus wrote, and a number of others. They all run through this thing. It's also now picking up patches off mailing lists: you post a patch to a mailing list, and you'll get a response saying "this broke something," which is great. I don't know how it does it.
I don't know what's really behind it. I talked to the maintainer last year and he said he can handle 7,000 more git trees; I think it's a testing ground for Intel's new processors. I don't know what he does; we're happy. It tests everything; it tests every single one of your commits. One day I landed after taking a flight, pushed a bunch of work out, and went to get a coffee.
Fifteen minutes later: an email saying that this patch, in the middle of all those commits, broke the build, and here's an automatic patch that fixes it. So now we have scripts that are writing patches. Who owns that? That's another interesting thing for the lawyers to deal with. Anyway, the 0-day bot tests a bunch of things, and that's where our testing happens. The performance people run tests in there too, to make sure we don't slowly degrade as we add new features. That testing happens every single day, really, really fast; the processing behind it is amazing.
So then, as I said, we have a merge window. When the merge window opens, all the subsystem maintainers send their stuff to Linus. The rule is that it has to have been in linux-next first. We used to be really bad about that; now we're getting better. I think about 95% of the patches that end up in Linus's tree were in linux-next, and the other 5% sometimes come in during the merge window.
There are a few that we don't know where they came from; that's not good, but we're getting better. I tell Linus: pull from this branch. Linus does not pull from linux-next on his own, and that's important, because sometimes my tree can be broken. I maintain the tty and serial drivers as well, and one release we had something wrong: it just wasn't working, all these changes weren't working well, and we couldn't figure it out. So I said, I'll just wait; I'll hold off on the big merge to Linus.
I'll send a few bug fixes, and we'll wait for the next release; it's two and a half months, not a big deal. If Linus had pulled linux-next wholesale, my broken tree would have broken things for a lot of people. So we throw branches at him and he merges them. In a merge window he takes about 10 to 11 thousand patches in two weeks.
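The pull request itself is generated, not hand-written: `git request-pull` produces the mail body with the commit range, diffstat, and shortlog. A sketch in a scratch repository (the tag, branch, and commit are invented; the URL would normally be the maintainer's public tree, here it is just the local path):

```shell
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.name "Sub Maintainer"
git config user.email "sub@example.com"
git commit -q --allow-empty -m "Linux 4.6-rc1"
git tag v4.6-rc1

# Work queued on a topic branch since the -rc1 tag.
git checkout -qb usb-next
echo "code" > xhci.c && git add xhci.c
git commit -qm "usb: xhci: add new controller support"

# Generate the text that would be mailed upstream.
git request-pull v4.6-rc1 "$tmp" usb-next
```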
Git is nice; it's all done through git pulls, except for Andrew Morton. Andrew sends email series, and Linus applies them with git. Git applies mailboxes of emails very, very easily, because it was built for our development process; it works really well, really fast, really easy.
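The email side of that is `git am`, which applies a mailbox of patches as commits while preserving the original author. A sketch with two scratch repositories standing in for a developer and a maintainer (all names invented):

```shell
tmp=$(mktemp -d) && cd "$tmp"

# Developer repo: produce the patch as an mbox-formatted mail.
git init -q author && cd author
git config user.name "Patch Author"
git config user.email "author@example.com"
git commit -q --allow-empty -m "base"
echo "fix" > mm.c && git add mm.c
git commit -qm "mm: fix accounting"
git format-patch -1 -o ../outbox HEAD

# Maintainer repo: start from the base and apply the mailed patch;
# authorship survives, the maintainer is only the committer.
cd "$tmp"
git clone -q author maintainer && cd maintainer
git config user.name "Maintainer"
git config user.email "maint@example.com"
git reset -q --hard HEAD~1   # pretend the fix was never pulled
git am -q ../outbox/0001-*.patch
git log -1 --format='%an: %s'
```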
Then Linus does a new release, and off he goes. So: questions? Oh, come on.
Yes. So the question is: since we have to break everything down into individual patches, will we take a pure refactoring patch, a refactor without a fix? It happens, but it's very rare.
Normally, if I'm going to refactor something, the next patch does something real, and even then it's very rare. But again, if we take 10,000 patches and five of them are refactors, okay, we'll take them. If you're doing that, it's a step along the way; you have to show your work, and that's fine. If you want to refactor things in a long series where you're also fixing bugs, we do like to see the bugs fixed first: fix the bugs first.
[Audience question about the human element: what happens when people drop out?] Good segue. This looks like a nice, pretty tree, a triangle; I graphed the real thing one year and it turned out to be 1 meter by 10 or 15 meters long. It's a mess; it's a network of connected people, and it doesn't look this pretty. The good thing about git and the way our patch process works is that we can route around anybody. If I go on vacation, I just get routed around and it works fine; somebody's patches come in through somebody else.
A
Like
again,
networking
David
will
say:
hey
I'll
pick
up
USB
patches
for
a
week,
no
problem,
it
goes
into
him.
It
works
really
really
well,
everybody
just
gets
rerouted
around
and
it
works
really
good.
Another
thing
about
this
is
because
we're
using
email
in
the
beginning,
we
all
review
everything,
but
as
we
move
up
the
stack
where
you
start
using
git
and
I
can't
see
those
patches.
I trust that the submitter will be there to fix it if they got it wrong, and that happens a lot. There are some people I'll just blindly trust: great, Alan, I'll take your patches, no problem, because I know he'll be there to fix things when they're wrong. And that's the important thing. If I'm taking patches from people, and I am, I put my Signed-off-by on them; I'm in the path of blame; I'm now responsible. I have enough work, so I want to take patches from people I know will be responsible and fix them.
So if you're trying to get into kernel development, it's hard, because we have to trust you. We had a problem in the networking stack about five or six years ago: a huge, nasty, hairy change finally landed; it was big and complex. The day after it was merged, the author's email address disappeared, and it took them six months to unwind the mess. The networking developers are very paranoid now: you have to have shown a history of good commits, good changes, good support, being there and answering questions, to prove that you'll stand behind your patches.
Sometimes I'll just ask a dumb question back, to make sure the person is even listening: hey, why did you do it this way, things like that, to make sure there's going to be some feedback and that you're going to become part of this community. I'm not going to be responsible for maintaining your code forever. Although we do: I maintain code that I wrote almost twenty years ago, and other people do that too. It's nasty code, but we all work through it.
But again, we have a web of trust. I trust five or six people; Linus trusts ten or so; it's a tree of trust, and that's how it works. Because we put our names on things, and because we take pull requests from other people, the kernel development process is really a development model of people: of human interaction, of trusting people. We trust that people will do the right thing, that you'll be there to fix things. That's what the kernel development process is; it's not just pure technology.
It's not just blind patches flying around; it's people who know each other. We travel around; we meet each other once a year, and we have subgroup meetups once a year for the different subsystems. We meet and work with other people and other developers, and that's good. One of my best friends now lives in Germany; 15 years ago he didn't know English. It's a really weird development model, but it's human interaction, and through human interaction and humans taking responsibility for their changes, we've created something that works really, really well; something no company could ever have created.
[Audience question about drive-by contributions.] I don't care about your background; I care whether you're going to be there, because I have to rely on you if I take code from you. For one-off drive-by patches it's different: half of the patches that we take come from a person we've never seen before and will never see again. That's easy, that's fine: a drive-by spelling fix, a new device ID, a simple bug fix, great, I don't care. But if you're adding new features, if you're adding a new subsystem, yes, I want to make sure you're going to be around.
I don't care who you are; I just want to make sure you're going to be there. Funny story: many years ago, when we were worried about Microsoft sending us things and we were all being paranoid, somebody showed up out of nowhere with these beautiful patches for the plug-and-play subsystem. We had no idea how that came about or where the patches came from, so we made him prove where he got the information.
He's now a professor at Stanford, a really brilliant guy. At the time we had no idea he was a high school student, and it didn't matter, because he became the maintainer of that code. He proved, in the back and forth, where he had gotten the information, it worked out, and he went on from there. So, yes?
How do I deal with conflicts between maintainers? It usually doesn't happen. Git is good at merges, as you know, and the kernel tree is pretty diverse: USB doesn't touch networking, and so on; everything is pretty standalone. Dealing with merge issues inside a subsystem is up to that subsystem's maintainer. Beyond that, if you look at the linux-next mailing list, Stephen will send out an automatic response...
...saying: hey, this conflicted with this and I don't know what to do; or, here's the merge that I did, is this correct, tell us. I get one of those about twice a week, so it happens, but they're usually minor conflicts or API changes. For example, if somebody changes an API in the networking tree and I'm adding a new driver through this other tree, well, I can't change my driver to the new API yet...
[On why the stable trees use quilt:] Yes, I take a bunch of patches from all over the tree. I could be doing cherry-picks; I could use cherry-pick and put them in a branch. But the problem is that I then post them for review, and sometimes the fifth patch of a 20-patch set (they're all individual patches) shouldn't be in there: the maintainer says no, wait, that broke something; or no, you shouldn't do this; or wait, quickly add these other ones after it. I would have to rebase that tree, and I don't ever want to rebase a public tree. So I don't do that.
I use quilt. Quilt lets me put patches in, remove them, reorder them, and restructure things. Then, when I do a release, I run, was it git quiltimport? Git quiltimport, sorry: a command that only I and a few Debian developers seem to use, judging by how rough it is. It creates a git tree, applies the patches, and goes from there. So once we do a release, the quilt queue becomes git commits.
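That last step works because a quilt queue is just a directory of patch files plus a `series` file giving their order, and `git quiltimport` turns the queue into git commits. A sketch with a one-patch queue built by hand (the file names and the patch itself are invented):

```shell
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.name "Stable Maintainer"
git config user.email "stable@example.com"
echo "int x;" > a.c
git add a.c && git commit -qm "base release"

# A quilt queue: patches in a directory, applied in 'series' order.
# Reordering or dropping a patch is just editing this text file.
mkdir patches
cat > patches/rename-x.patch <<'EOF'
Subject: a: rename x to y

--- a/a.c
+++ b/a.c
@@ -1 +1 @@
-int x;
+int y;
EOF
echo rename-x.patch > patches/series

# Once the queue is final, convert it into real commits in order.
git quiltimport --patches patches \
  --author "Stable Maintainer <stable@example.com>"
git log --oneline
```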
Andrew wrote quilt, actually; well, he wrote something that preceded quilt. Again, he takes a lot of disparate things from all over the place and reorders them, and he doesn't send them all to Linus: some of them he'll hold on to for three or four releases. He picks up random things, and sometimes he'll send those random things to the other subsystem maintainers, because we missed them; and when he sees them show up in our trees, he just drops them from his. So he uses quilt because he can reorder things and it doesn't bother anybody. They are two different models, and they both work really well; I really recommend looking at the different models if you're curious. Yes, I think the last question; we're over time.
[Audience question:] What's the role of performance people who would touch things across four subsystems? It doesn't happen much. The only place it comes up, performance-wise, is something like SCSI or the block layer; those people have to worry about certain things. And the memory-management people joke with the I/O people: hey, you're writing a driver for memory, aren't you finished yet? But they keep going at it. Again, we're in pretty siloed areas; we all work together, but very rarely do we cross paths. So: I'm out of time, and I'll be around today.