From YouTube: Intern Presentation: Handsome Macros in Rust

Description:
Paul Stansifer from the Research team presents "Handsome Macros in Rust"
Joshua Cranmer from the Platform team presents "Exploring code coverage in Mozilla"
Marshall Moutenot from the Security team presents "Sandboxing Firefox"
Joseph Kelly from the Metrics team presents "Applied Statistics at Mozilla: Modeling ADIs to Performance Testing"
Help us caption & translate this video!
http://amara.org/v/2FhM/
Okay, okay, you haven't missed anything. You haven't missed the very exciting titles. So, macros sort of have a bad reputation, but that's just because of the way that those macro systems were designed. I want to start by justifying the existence of a macro system at all.
In order to do that, I want you to assume that you hate code repetition; I hope that this is not hard to imagine. So you see code like this, where we're doing some adding and then some returning, and we're doing the same thing twice. I hope that you have this sort of building feeling of rage that there's something wrong, because we're taking the nth element, we're indexing, and then we're dividing by seven times the index, twice.
So we need to fix this, and we can do this by writing a function. Let's call it adjust: it does that thing that we were doing twice, and we call the function twice instead of doing the thing twice. This has a bunch of advantages. One thing it doesn't do is shorten the code, although it might if we do a lot of adjusting, but the more important advantages are these. Suppose that you want to adjust differently now.
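The repetition and its function fix might be sketched like this in Rust (the divide-by-seven-times-the-index computation comes from the talk's description; the slice argument, the exact types, and the function signatures are assumptions for illustration):

```rust
// Before: the same indexing-and-dividing step written out twice.
fn before(v: &[f64], i: usize, j: usize) -> (f64, f64) {
    (v[i] / (7.0 * i as f64), v[j] / (7.0 * j as f64))
}

// After: the repeated steps factored into a named function.
// Changing how we adjust now means changing exactly one place.
fn adjust(v: &[f64], i: usize) -> f64 {
    v[i] / (7.0 * i as f64)
}

fn after(v: &[f64], i: usize, j: usize) -> (f64, f64) {
    (adjust(v, i), adjust(v, j))
}

fn main() {
    let v = [0.0, 14.0, 28.0];
    // Both versions compute the same thing.
    assert_eq!(before(&v, 1, 2), after(&v, 1, 2));
    assert_eq!(adjust(&v, 1), 2.0); // 14.0 / (7.0 * 1.0)
}
```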
You only need to make the change in one place, and you don't have as much of a maintenance hassle. Or suppose, more likely, that there's some sort of representation change that you need to thread through a lot of different places. Wouldn't you have an easier time of it if you just have to change the representations that adjust handles, rather than what happens at every single point?
It also makes the code easier to read, because instead of writing out the steps of doing something, you've given it a name. And this name is something that you can attach documentation to, so now you can have a comment, presumably above adjust, that indicates why you're adjusting, or under what circumstances one might adjust, and what things one should assume about adjusting. So eliminating code repetition by abstracting things out is awesome.
In a parser, you just want to return early in cases in which you've already got the answer. So you might match against the token, and it's some special thing that wraps an X. I should probably stop and explain the Rust match syntax for those of you who are not familiar with it. For one thing, it's no longer called alt. Basically, on the left-hand side of each of these arrows we have a form that we're matching against, and on the right-hand side
we have something to do in response to that. So if the token happens to be of the form special_one of X, we just take that X and return it. The underscore matches everything, so otherwise we do nothing. So that's great, but then we discover that we need to do this at multiple points in the parser, so we have code repetition. This is bad, okay?
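In modern Rust syntax (the talk predates this; the token type and variant names here are invented for illustration), the match arms just described look like:

```rust
// A stand-in token type; `Special` wraps a value the parser wants.
enum Token {
    Special(i32),
    Other,
}

// Left of each `=>` is a pattern we match against; right of it is
// what to do in response. `_` matches everything else.
fn extract(tok: &Token) -> Option<i32> {
    match tok {
        Token::Special(x) => Some(*x), // destructure and take the X
        _ => None,                     // otherwise, do nothing special
    }
}

fn main() {
    assert_eq!(extract(&Token::Special(7)), Some(7));
    assert_eq!(extract(&Token::Other), None);
}
```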
So, let's fix this. Well, we'll break out a function: we take a copied version of one of the repetitions as a template for making our function, and we make a function, call it early_return, that takes an sp and a token, matches that token, sees if it's of the form sp of X, and returns X; otherwise it does nothing. This doesn't... I'm seeing facial expressions; this is good. In fact, this function is very wrong.
You'll notice that I have left off the type of sp. The problem is not that there's a type problem; the problem is that sp isn't a value at all. It's a pattern, and a pattern doesn't correspond to a value, so that doesn't make sense. And furthermore, what are we returning? Is this an early return? You can't early-return from another function while inside a function: the early return is returning from the function that you just created in order to avoid this code repetition.
This is completely wrong, but in some sense it's sort of obvious what you wanted to do. There was some code repetition, and you figured out what pattern that code repetition had, and you eliminated it. So the answer is that what you don't want is a function.
What you want instead takes an sp, which is an ident, and a t, which is an expr, and then produces the code that you see on the right-hand side, dropping t and sp into their appropriate places. And this works because, although sp doesn't have any sort of existence at runtime, at compile time it's certainly an identifier, and macros happen at compile time. So it is possible to do an abstraction over these compile-time bits of syntax. So you can write this early_return macro, and give it the identifier that you want.
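In today's macro_rules! syntax (not the 2012-era syntax the talk uses), an early_return macro along these lines might look like the following; the token type and the names are illustrative assumptions:

```rust
enum Token {
    Special(i32),
    Other,
}

// early_return! takes the variant to look for and an expression.
// Because the expansion happens at the call site, the `return`
// returns from the *enclosing* function -- exactly what a helper
// function could not do.
macro_rules! early_return {
    ($variant:path, $t:expr) => {
        if let $variant(x) = $t {
            return x;
        }
    };
}

fn parse(tok: Token) -> i32 {
    early_return!(Token::Special, tok);
    // ... the rest of the parser runs only for other tokens ...
    -1
}

fn main() {
    assert_eq!(parse(Token::Special(7)), 7);
    assert_eq!(parse(Token::Other), -1);
}
```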
So, those of you who have been following my movements closely will notice that I was here last summer, also creating a macro system for Rust. So what was wrong with that macro system? Well, this is what it would look like doing the same thing in the old macro system: it defines early_return with a pattern taking an sp and a t, and translating to that same right-hand side that we're pretty tired of seeing already. So it looks pretty similar; the dollar signs are missing, and it's a little heavy on the square brackets.
So what's wrong with it? One thing is that it's not quite as pretty. You may not be able to see it too much here, but in complicated macros the square brackets get sort of out of control, because in the old macro system it was impossible for the macro to look inside and destructure any arguments that weren't in square brackets, the previous syntax that we had.
There is another slight problem with the old macro system, and that's that this doesn't work. In particular, I mentioned that sp is an identifier and t is an expression. Well, the old macro system actually only allows you to put holes in, to abstract over, expression positions. So matching t is fine: t can just be cut out, and you drop in the expression that you pulled out from there.
But sp is an identifier, not an expression, and so you can't do that. So that was bad. Why did we do things that way? Well... sorry, yeah.
So now you see on the screen the two problems that I just mentioned. Well, the problem is that the old macro system was based on the Rust AST. Now, the Rust AST is a fairly conventional design, but what it's not is fairly conventional for a language that has powerful macros.
In a language like Scheme, you have a very uniform AST, and it's possible to do these very abstract manipulations of the AST
without regard to what role something is eventually going to play. But when you have a complicated AST whose structure is determined by the type system, you can't really play that game very easily. So one option would be to do a lot of coding, which would involve a lot of code repetition, which could be solved by a sufficiently powerful macro system; unfortunately, the old macro system isn't sufficiently powerful for that.
So the solution was, instead of abstracting over ASTs, to abstract over sequences of tokens. Tokens nicely have a uniform representation: you can take a group of them and you can say, oh well, let's repeat these tokens, this sequence, a number of times. I haven't talked about it, but our macro system has the capability to talk about macros that take arbitrarily long sequences of things, and expand to code that has arbitrarily long sequences of things based on this input.
That's great, but perhaps tokens give you a little bit too much flexibility. For one thing, how do you just take a sequence of tokens and find out where it ends? After all, when you're parsing a file that contains macros, you really want to be able to say: okay, the macro starts here, it has its argument or arguments, and then it ends, and then I can continue parsing like normal.
You don't want to say: okay, I see a macro invocation, so the rest of the file is a pile of tokens. So we need to know where macros end. One thing you can do is, you could put them in quotes or something. This means that you would need to escape your quote character, so a macro that involves a macro invocation, which itself involved a macro invocation, would need to be doubly escaped; as you go further down, it needs to be more and more escaped. This is completely unsustainable.
You don't want to do this. So what we do is, we decide that a macro invocation is a sequence of tokens surrounded by parentheses, and we'll say that you don't have to escape a close parenthesis that you use, as long as you have a nice open parenthesis that matches it. So basically, those macro invocations that we saw, although they looked fairly structured, were in fact given a fair amount of structure by the macro definition.
Those macros were interpreted by the parser as just a sequence of tokens, about which the only thing that it knew was that the delimiters in them were balanced, and thus it's possible to find the end of macros. And it turns out that people don't really have a burning desire to write macros which take unbalanced delimiters in their syntax; there's just no point to doing that. So that was a slide with a lot of words.
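The end-finding rule described above, that a macro invocation ends at the close delimiter that balances its open delimiter, can be sketched as follows (a toy over chars, not the real parser; the function name and simplification are assumptions):

```rust
// Scan forward from an opening delimiter, tracking nesting depth.
// The invocation ends at the close delimiter that brings the depth
// back to zero; the tokens inside only need to be balanced.
fn invocation_end(tokens: &[char]) -> Option<usize> {
    let mut depth: i32 = 0;
    for (i, &t) in tokens.iter().enumerate() {
        match t {
            '(' | '[' | '{' => depth += 1,
            ')' | ']' | '}' => {
                depth -= 1;
                if depth == 0 {
                    return Some(i); // index of the closing delimiter
                }
            }
            _ => {}
        }
    }
    None // unbalanced: the invocation never ends
}

fn main() {
    let toks: Vec<char> = "(a(b)c) rest".chars().collect();
    assert_eq!(invocation_end(&toks), Some(6));
    let unbalanced: Vec<char> = "(a(b)c".chars().collect();
    assert_eq!(invocation_end(&unbalanced), None);
}
```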
I'd like to close by saying I actually wrote some code to do this; I didn't just sit around and think. I want to talk about the implementation also, so that you get a sense for what macro system implementation is like. Because these macros define custom parsers for sequences of tokens,
we needed to write a custom parser, something that could take an arbitrary grammar definition and then parse the sequence of tokens. So I had to write a parser. The other thing that we had to do: since some of these sequences of tokens needed to be parsed as Rust expressions (you'll notice that we were able to take a Rust expression as an argument in our macro system),
we needed to be able to hook up these bags of tokens to the ordinary Rust parser, so that we could rely on the true Rust grammar to give us the interpretation of these sequences of tokens. So I needed to build a lexer that would lex these bags of tokens and turn them into a sequence of tokens for the parser to understand. The odd thing about it is that the lexer was actually harder to write than the parser; go figure.
So that's all I have to say. I'd like to thank the Rust team as a whole for their achievements in the field of awesomeness. I'd like to thank Eric in particular, who, as part of his work, and also lately just recreationally, has been testing out the Rust macro system. If you were paying close attention to his presentation, you would have noticed some telltale exclamation points that indicated these built-in syntax extensions. And I would also like to thank... actually, it's now 11:30.
The question is: how did I count the number of sticks of spearmint gum that I chewed? And the answer is that I stored the paper wrappers. I was intending to just, like, keep the paper part in recycling, but then I didn't recycle them, and they just kept stacking up. And since I chewed every single piece in those, I could just multiply by 14. Actually, the estimate is not precise.
Question: in your macro rules example.
So, keywords: generally, if you want to come up with new keywords, using identifier will work for that. So I think that for most things for which you would be thinking "keyword", ident would suffice. Actually, I'm not precisely sure what you mean by those keywords; we should maybe talk about that later. But your question as a whole is: what sorts of things can we abstract over?
What sorts of things can we abstract over? I believe that the current list includes ident; expressions, which you saw; patterns; statements; blocks; types; token trees; matchers, which is the right-hand side of a macro invocation, so you can abstract over those specifically; and, I believe, items. There may be a couple of others I'm forgetting.
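In current Rust, these abstractable categories survive as macro_rules! fragment specifiers: ident, expr, pat, stmt, block, ty, tt, item, plus a few added later. A small sketch using three of them (the macro name is invented):

```rust
// `let_typed!` abstracts over an identifier, a type, and an
// expression at once -- three different fragment kinds.
macro_rules! let_typed {
    ($name:ident : $t:ty = $e:expr) => {
        let $name: $t = $e;
    };
}

fn main() {
    let_typed!(answer: i32 = 6 * 7);
    assert_eq!(answer, 42);
}
```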
It's hard to see why hygiene does the things that it does, but the algorithm itself is fairly easy to implement, and under controlled circumstances, in the abstract, it's fairly easy to just hack it up in a couple of days. In reality it's probably harder than that, but hygiene itself doesn't put its tendrils into everything; it seems like it should be relatively self-contained, which will make it easier to implement.
So one of the few things I was able to reuse from last summer was an extension to the span system that we use to indicate the location of code problems. So when you get an error report in terms of some span, it says, from here to here there was something that went wrong, and you get a nice snippet: the Rust compiler displays a snippet of code and underlines the error for you, and it's awesome.
So what I did was I extended that so that spans also have a stack of other spans indicating the expansion path. So when you get an error in code that results from macro expansion, what you should get is an error in terms of the macro that, at the very end, expanded to the erroneous code. That on its own is only partially useful:
it shows you the general shape of the code that failed, but it's not going to tell you much about why it failed. And then it will give you a backtrace indicating where each of those macro invocations came from, up to the original macro invocation that you called in straight, ordinary code, the one that kicked it all off. In practice, I've discovered this seems to be reasonably useful for tracking down type errors.
The theory of debugging code that was produced by macros is oddly not all that much discussed, despite the fact that Schemers have to do this all the time. But this seems to suffice, especially in the case of Rust, in which we tend not to have really complicated, whole-program-spanning macros; Rust macros tend to be a lot more self-contained than that.
The question is: what's the whiz-bang-iest macro that I've built with the system? Well, I kind of want to say that it's actually the macros that Eric has written, because Eric's been doing the exciting macro writing, and I've mainly not been writing macros except as occasional tests. Let's see. So last year...
So, like, last summer it wasn't possible to match multiple different patterns in a match statement and have them go through the same body. It turned out to be possible to write a Rust macro that would take these multiple patterns and one body, and then expand into pattern body, pattern body, pattern body. And the neat thing is that the macro definition system that we use, macro-by-example, makes it really easy to do.
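A sketch of that kind of macro in today's syntax (modern Rust match supports or-patterns like `1 | 2` natively, so this is only a demonstration of the expansion technique described; the names are invented):

```rust
// Expand several patterns sharing one body into separate
// pattern => body arms, as the talk describes.
macro_rules! multi_match {
    ($scrutinee:expr, [$($p:pat),+] => $body:expr, _ => $default:expr) => {
        match $scrutinee {
            $($p => $body,)+ // one arm per pattern, same body each time
            _ => $default,
        }
    };
}

fn classify(n: i32) -> &'static str {
    multi_match!(n, [1, 2, 3] => "small", _ => "other")
}

fn main() {
    assert_eq!(classify(2), "small");
    assert_eq!(classify(9), "other");
}
```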
Desktop sharing: that's what it looks like... none of them. Okay! So now you can see my fighter jets. Okay, so hello, my name is Josh Cranmer. I am currently working remotely this year instead of in Mountain View, which is why I'm talking to you over video. The project I've been working on this past year is based especially on working with code coverage in Firefox. There are three main phases to my project: the first is making the coverage display work properly, and also getting a tinderbox build going.
This actually all gave the intended results, which I finally got working last night. I'm also working on trying to get JavaScript code coverage working, because all of our test suites don't necessarily give an idea of how the code works; I want an indication of how people actually use the browser, so that we get good coverage results from people when they actually use their browser.
And you can actually see, for any given part of the code, how well it's covered, so you get a very nice view. You can actually visualize native code; you can see an example of branch coverage here. This means that this branch was taken, and this one especially was taken only to the true branch and not to the false branch.
Now, there are downsides to this view. One of the things we can do is break out the coverage results by test suite, except for accessibility, which is not in general covered by one suite of tests, so there's only one file in it. But if I go into some more common code, like this very common one, the network code, we can see that the HTTP code is used most of the time.
The other downside is that when I come back to this code, I see a giant block of data, and it gets really hard to get a good sense of it all across all of the Firefox tree. So what I have done is I built a treemap; we're probably going to have to reload this page so that you can see everything. So this is
a treemap, which shows, in this case just for accessibility, how well everything is covered. The size of each box is proportional to how many lines of code are below it, but I can also make it proportional to how many functions, because sometimes you have a lot of functions but each function is tiny, and other times you have 10,000-line functions; that's more common with generated code bindings. Each color indicates how well covered the code is. So, for example, this code here is not covered.
Now you can get, in a single picture, a very good idea of how well covered, or in some cases how not well covered, Firefox is. You can clearly see that we don't really write tests for some of this: this is the graphics code, and it's pretty much completely uncovered. And the content code, which I know to have very good coverage, just isn't instrumented in these test suites.
We run a lot of test suites. This is basically a list of every single test suite that runs on the Firefox build machines. We have them broken down by platform: Linux, Android, Windows, OS X, and on some of these platforms you have even more variants, like OS X 10.5, 10.6, 10.7, 10.8, or Windows on 64-bit, etc. Beyond these platforms, we also have the compiled-code tests; we have several suites of unit tests.
We have reftests, which are one category; crashtests; jsreftests, which test the JS engine more than just regular reftests; and we also have the mochitests, the browser-chrome and plain mochitest suites. So if you need to contribute a change, you want to know: where does this kind of coverage come from? [inaudible]
So what I have is an awesome view; our other coverage view is not very specific [inaudible]. You can go back to the accessibility directory and see that it's uncovered, and if you go back to our main top-level directory, you can see several changes after the test finishes. We really don't have a lot of coverage of graphics [inaudible] and a lot of other great features.
This is JavaScript, not C++, and it's actually telling you how many lines of code are actually covered. The way this works is that Brian Hackett, over the past year, committed a feature to the JavaScript engine which actually reports, at every GC, counts for the JavaScript bytecode: how many times each opcode was executed. So by looking at all the counts, we can see which lines were executed in the codebase, of which there is a grand total of [inaudible].
Yeah, unfortunately, the UI for this is written in JavaScript, which means it takes a while before it finally finishes and actually shows you the results. I think, as a tip for anyone doing websites in the future: use CSS transitions instead of implementing animation in JavaScript yourself, and it makes your life so much easier.
Anyway, while we wait for this to finish, lovely, I can show some more stuff. The last piece of the puzzle I was discussing was being able to actually show end-user code coverage. So what I've done is I've taken a build of Firefox and I patched the compiler so that, very similar to instrumentation, on entry we stop the program, record in a table that we arrived in the function, and then patch
the code so it no longer uses the instrumented entry, because if we kept doing it on every single function invocation, the browser would slow down to a crawl. So here I've got an example in another browser instance; this is on Linux, and this is one of the things I find really interesting. As you can see, the UI was still very snappy; I can do all sorts of stuff, at least as fast as I was doing before, I think [inaudible].
So now that I've terminated my instance of the browser, what I do is run some scripts which can take the results of these side tables, recorded by the dynamic patch-table mechanism that's the key to end-user code coverage, and work out the coverage from them. I can just kick that off here, and hopefully it'll load.
Actually, let me just move to SVG, because somebody has [inaudible]; I changed some of this so that the image is drawn as SVG instead. So if I take off this overlay... actually, once you get this pushed out to users, with stuff like that profiled, we can actually start recording results from the giant user base, which would give us a really good idea of whatever features people actually use in the browser.
If you want the code, just look for jcranmer on github and you can see it. Some of my tools are in there; the latest changes I haven't yet uploaded, but there's a tool in there which, when you pass in the results, an output of all the gcda data, through the scripts I have, gives you a JSON file which you can then plug into the web UI.
I think most of my team is here; yeah, I think everyone's here. All right, so are we ready? Okay, I'm Marshall. I'm an intern on the security engineering team, and one of the things I worked on this summer is a low-rights, sandboxed Firefox; I worked with Ian Melven primarily. So what is sandboxing? You guys have all probably heard the term around; it's about as popular as anything security-related can be. It's featured prominently at every single security conference, and you've probably seen things like the protected-mode Adobe Reader, and you've heard about Chrome being sandboxed. So I think it's going to be present in almost every single web-facing application, especially browsers. Now, sandboxes don't sound especially strong; you can, like, walk over and crush a sand castle, no problem.
Nor is it like Fort Knox, in the sense that you are not turning the browser into something that can't be broken out of or into. What sandboxes do is help prevent malicious, untrusted code from writing, installing, or accessing resources that it shouldn't have access to, and they do that by separating components from each other. So it takes something like the renderer and separates it from some other component, so that if one is exploited, the other one isn't affected. Each component is given only the minimum number of permissions
it needs to be able to complete the task or function that it's meant to complete. So it doesn't prevent exploitation at all: if there's a vulnerability in the renderer, it's still going to be present. Sandboxing focuses on post-exploit mitigation, which is to say, once an attacker breaks in and successfully exploits the browser, the sandbox prevents the attacker from gaining any control over the operating system.
So we use the Chromium sandbox, which is a user-mode-only sandbox, which means that it doesn't need any special kernel-mode drivers, which is something that other implementations of sandboxes need. Users don't need to be administrators in order to have the sandbox operate correctly. So it's unobtrusive; it's perfect for something like Firefox. And we focused initially on Windows, because that's where our largest insecure user base is. It would be interesting in the future to look at, like, OS X, which
now has all these policies for sandboxing, so it'd be interesting to look at that in the future, but for now, just Windows. And for those of you who don't know, there used to be a project called e10s, or Electrolysis, which was aimed at separating content and chrome, and they faced a lot of the same problems that we're facing, especially related to add-ons, which we'll get to later. We're going to have to figure out these problems, which may or may not be possible.
So the broker is the conduit for all sandboxed target processes (all the restricted processes) to access resources that are otherwise restricted through their policy. They do so through an API to the broker, and the broker enforces a policy that's built from a series of rules which dictate which targets can access which resources. So, like I was saying earlier about identifying which components need which resources: that all lives in the broker's policy. So here's a nice illustration: we have the broker with its policy, and two target restricted processes.
So the most interesting problem is add-ons. If an add-on tries to access a resource, that request is intercepted and sent along the IPC to the broker's API, and the broker compares it to the policy and says: is that allowed? Can that target access this resource? And for add-ons, it's really hard for us to say what resources a given add-on needs to access. So what happens is, if it's not in the policy, the broker just returns a failure, and the target will fail when it tries to access that resource.
So add-ons are the biggest challenge, and the reason for that is that Firefox add-ons are the most powerful add-ons of any browser. They have complete free rein, and if you install an add-on in Firefox, it's hard to tell what is Firefox and what is the add-on. Other browsers are able to implement sandboxes much more easily because they have APIs, and their add-ons can only access resources through these APIs.
So, another interesting statistic, and this is really rough, but somewhere around sixty percent of users have at least one add-on. So even if we can't find a perfect solution for add-ons, we could still ship a sandboxed Firefox to users without add-ons, to at least secure that user base, and it would be significant; it's a significant percentage. So I spent a lot of my time researching patterns of resource usage in add-ons, and I found a lot of interesting patterns.
Some examples: most add-ons that access files do so through a file picker, which is great, because we could just put that file picker in the broker, and the worst an exploit could do, if it compromises the process, is open up the file picker, and hopefully users are smart enough to click cancel if they're being prompted randomly to pick files. And then there are a few components that have random DLLs; it's really hard to tell what those do.
So we came up with a bunch of possible solutions, and my personal favorite is that we require a manifest with each add-on that states each resource that the add-on has to access. It would be similar to what you see in, like, the Android app store, but way more granular: "I have to access this executable, this file path." There are definitely pros and cons.
So you could say: oh, you're about to install this add-on that's going to lower the security of the browser. But that isn't ideal either. So I mean, this is an open-ended problem, and I wanted to keep this high-level, so if you guys have any suggestions, I just want to open this problem up for discussion. So, yeah, thank you.
We looked at statically analyzing add-ons, just to try to figure out what resources it is they use, but currently a lot of the JavaScript analyzers we have aren't able to follow all the API calls that the add-ons make to get handles on those resources. So currently it seems really difficult to do that, but it looks like people are trying to make things that can statically analyze add-ons, so I mean, that would be ideal.
I wrote a bunch of scripts. I downloaded a ton of add-ons from AMO, because the add-ons MXR wasn't working, and I just wrote a lot of Python scripts using regular expressions. I went through the top ten or so manually, just to get a feel, because I'd never really looked at add-ons before, and then I talked with a ton of add-ons people.
It's hard to tell, especially because a lot of the top ones not on AMO are installed automatically, like the Skype and .NET helper things, all these random junk add-ons that have binary components, with DLLs; it's really hard to figure out what to do there. So I mean, that's definitely true. We could do some rough estimation just to get some numbers, but...
Well, the top three, like the three that are really high up, I think have binary components, and then, unfortunately, only 22 in the top hundred are Jetpack. If we had lots of Jetpack add-ons, that'd be great, because they're going through this API. So, yeah, I've been gathering the statistics on that; I don't remember them off the top of my head.
It is possible. I know that, like, Chrome has a separate sandbox for its add-ons, so it could be possible to cater one toward that specifically, a lot of those things, but...
Okay, but note that ADI is not the same as active daily users. So, for example, a user could have multiple Firefox installations running on multiple machines. You can also disable this feature, so you won't send anything back to the server at all. It's not the same as active daily users, but it does give some sort of indication of how much usage of Firefox there is in the wild, so it is something of interest.
And so this is what we're looking at. This is the ADI count, in tens of millions, ranging from July 2008 to almost the end of June 2012, and this is what we're interested in modeling. You can kind of see here that there does seem to be some structure, right? In fact, over this time period there does seem to be some sort of positive trend; it does seem to be growing. So one of our goals is in fact to try and model this guy, and in particular we have two goals.
J
We want to smooth the series to get a better idea of how ADI changes over time, and we want to produce a weekly report comparing the observed ADIs for that week to those predicted from the model, right? So this is going to give us some indication of how things are going: if we have a forecast for next week and we observe that our ADI counts were low, perhaps there's some reason for this, and then we can go and further investigate.
J
So we want to model this, we want to model this guy, right? So the first approach I was thinking of was assuming a dynamic linear model, okay, and this is a constant dynamic linear model. What this basically assumes is that the response, which is the y_t, right, is the actual observed ADI count at time t. So that's what we get to observe, and we then assume that this observation is actually equal to some trend value plus some random noise, right?
J
We saw earlier that the trend changes over time, so you need to have some sort of time structure built in here, right? This is the second line of the equation, which basically assumes that the trend at time t is equal to what it was yesterday, plus some white noise, right, where the white noise here is a Gaussian distribution with mean zero and some variance.
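Written out, the model he's describing is the local-level dynamic linear model. The symbols below are my own notation, chosen to match the verbal description (observed count equals trend plus noise; trend equals yesterday's trend plus white noise):

```latex
\begin{aligned}
y_t &= \mu_t + \varepsilon_t, & \varepsilon_t &\sim \mathcal{N}(0, V) \\
\mu_t &= \mu_{t-1} + \omega_t, & \omega_t &\sim \mathcal{N}(0, W)
\end{aligned}
```

Here y_t is the observed ADI count at time t, mu_t is the unobserved trend, and V and W are the observation and trend variances.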
J
So we could just go ahead and fit that model to what we have here, but upon a closer look, if we condition on a variable called day of the week, we'll actually notice that there's some structure here, okay? So this is the same plot, ADI count over time, but now we're just color-coding the days of the week. So here we can actually see that there does seem to be some trend, right? That each day of the week seems to be some offset from this trend value.
J
So the model we assumed earlier, the constant linear model, wouldn't really pick this up, right? So we need to somehow incorporate the fact that different days of the week have a different offset away from the trend, and this is going to lead us to the next step of our model, which is a dynamic linear regression, which allows us to include covariates.
J
A covariate is something other than the response that you observe at some point in time, which you may think may give you some information about that response. So in terms of ADI counts here, knowing what day of the week it is actually helps us predict what the ADI would be, because we know that ADI changes according to the day of the week. So something other than the variable of interest, which may provide some information about the variable of interest, that's a covariate.
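The day-of-week covariates he goes on to use are just indicator (dummy) variables. A minimal sketch in Python, with Monday arbitrarily chosen here as the baseline day (an assumption of the sketch, not stated in the talk):

```python
import datetime

def day_of_week_indicators(date):
    """Return six 0/1 indicators for Tue..Sun. Monday is the baseline
    (all zeros), so the intercept absorbs Monday's level."""
    dow = date.weekday()  # Monday == 0 ... Sunday == 6
    return [1 if dow == d else 0 for d in range(1, 7)]
```

Stacking one such row per observed day gives the covariate vector x_t for each time point of the regression.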
J
Okay, but this dotted line and this dotted line have different intercepts and different slopes. As we progress through time, we need to have the intercept and slope changing in order to fit the trend line, which is sort of curved and not necessarily just a straight line, and so basically this is the setup that we have.
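Letting the intercept and slope drift over time, as he says, amounts to letting the regression coefficients themselves follow a random walk. In the same notation as before (my own symbols, matching the verbal description):

```latex
\begin{aligned}
y_t &= x_t^{\top}\beta_t + \varepsilon_t, & \varepsilon_t &\sim \mathcal{N}(0, V) \\
\beta_t &= \beta_{t-1} + \omega_t, & \omega_t &\sim \mathcal{N}(0, W)
\end{aligned}
```

Here x_t holds the day-of-week indicators (plus a constant for the intercept), and beta_t stacks the time-varying intercept and day offsets.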
J
And this is what we're going to do, well, this is what we did do. So we actually assumed this dynamic linear regression, where our covariates, before I just said one covariate, but our covariates here are a set of indicators indicating which day of the week you're in, okay? And it may be a little bit difficult to see, I hope the colors turn out okay, but the black lines, which may be hard to see, are actually the observed ADI counts.
J
Definitely, and the blue line here is actually the smoothed values. This is the mean trend. This is what we're actually interested in, okay, and maybe this is what we should be paying attention to, all right? This gives us an indication of how ADI has changed over time. We're not interested in all this oscillation, the day-to-day noise; we're interested in the blue trend.
J
So what we're doing here is: the black dots are the observed ADI counts, and the red dots here, and let me know if this plot works or not, because it's something worth playing with, show that you can simulate from the model and project values, predict ADI values in the future, but there's some variability in these values.
J
I think it's based on a thousand predictions I've plotted on top of each other. So indeed you can, you can get the 2.5 and 97.5 percentile interval as well from this, to say, okay, it lies outside of this prediction interval. Do you think that's maybe a more intuitive thing to do? And this is.
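The percentile interval he mentions is easy to read off once you have the simulated forecasts. A minimal sketch; the simulator passed in is a stand-in for the real model, not the actual implementation:

```python
import random

def prediction_interval(simulate_one, n=1000, lo=2.5, hi=97.5):
    """Draw n simulated forecasts and return the empirical
    (lo, hi) percentile interval from the sorted draws."""
    draws = sorted(simulate_one() for _ in range(n))
    def pct(p):
        # nearest-rank percentile on the sorted draws
        k = min(n - 1, max(0, int(round(p / 100.0 * (n - 1)))))
        return draws[k]
    return pct(lo), pct(hi)
```

An observed count falling outside the 2.5-97.5 band is then flagged for a closer look, which is the weekly-report check described above.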
J
The blue line here is the predicted mean trend that we had from earlier, and so the idea is that for a particular week, if you observe an ADI count for a particular day that's way off, maybe that may be worth investigating. Maybe it's just a holiday and we're not that worried about it, right? But maybe it's more indicative of something else which is going on, all right? This is just kind of a way to keep track of how ADI is going and make sure that it's still on track, basically.
K
The value of the visualization to me, being sort of not technical, is that I can look at one of those and say: if that dot is way higher or way lower on one of those bars, away from the dense areas, then that's potentially abnormal and worth investigating. So if I saw that black dot on Monday sitting down outside of the lightest pink at the bottom there, I'd wonder, did something happen? Did our measuring go askew? Did we lose a hunk of data in the system?
K
So I can also take away from that that some of those are going to be spread sort of evenly across a larger piece of it, or have a couple of dark, dense areas in them where you would have pretty good confidence. So there's even more information that I can very quickly absorb visually, just by seeing those density differences. Yes, yes.
J
Okie dokie, I think that's pretty much it, I think. Thank you. Oh, I know it has some extensions, don't thank me yet. So, the purpose of this model, the purpose of this model was to smooth this series and obtain one-week-ahead predictions, and the model does very well for this purpose, which is what it was designed for.
J
If you do want to do something like project more than one week ahead and predict months into the future, this is not going to cut it, but it may be possible to obtain accurate forecasts by doing a couple of things. So one is that you need to actually model the structure a little bit more than what we're doing. So you need to include some seasonal factors, maybe, you know, those related to calendar time such as months and holidays. You can include further covariates, maybe Firefox download data.
K
It might also be good to factor in things like proximity to a release date, because we know that around Firefox releases we see, you know, little spikes or big spikes in downloads and little spikes in usage, and then oftentimes a new baseline is established for what usage is going to look like. Releases have this weird impact, but that is pretty easily modeled by looking at our history; it's very consistent what happens.
H
Joe, oh sorry. Was there anything in the data that lent itself to using the dynamic linear model? Like, did you choose it, did you evaluate other models, like versus, I don't know, an additive or multiplicative decomposition or whatever? I did do an ARIMA model as well.
J
You mean, how do you assess how good it is, right? There's numerous things you can look at, you can.
J
Hmm, and how I like to evaluate models is to have a validation, so you have an out-of-sample set which you're not including in your model, and then you try to predict it. So you can do one-step-ahead predictions using only the weeks previous to that week and see how well it performs, and you can use that to assess how good your model is.
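The one-step-ahead scheme he describes can be sketched as a small loop; `fit_predict` below is a placeholder for whatever model is being assessed, and the naive baseline is my own illustration, not from the talk:

```python
def one_step_ahead_errors(series, fit_predict):
    """For each point t, fit on series[:t] only and predict series[t];
    return the list of out-of-sample prediction errors."""
    errors = []
    for t in range(1, len(series)):
        prediction = fit_predict(series[:t])
        errors.append(series[t] - prediction)
    return errors

# A naive "predict the last observed value" baseline to compare against.
def naive_last(history):
    return history[-1]
```

Because each prediction uses only the history before it, the errors are genuinely out-of-sample, which is what makes this a fair assessment of the model.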
J
You can do many things. You can look at, for example, how close your forecast value is to the observed value and calculate that difference, which is the residuals, right? And you look at the sums of, the full spread of, these residuals and compare that to other models, right? So basically: how far away were your predictions from the observed values? You take the differences, you square them, sum them up, and you can compare.
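The squared-residual comparison at the end amounts to computing a mean squared error per model. A minimal sketch:

```python
def mean_squared_error(observed, predicted):
    """Average of squared residuals between observed and forecast values."""
    residuals = [o - p for o, p in zip(observed, predicted)]
    return sum(r * r for r in residuals) / len(residuals)
```

Computing this over the one-step-ahead forecasts of each candidate model, say the dynamic linear regression versus the ARIMA fit he mentions, gives a single number per model to compare.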