From YouTube: The Minimum Viable Chunk - Jan Krems, Google
Back in the old days, when jQuery had just taken the web development world by storm, most developers had a very clear idea of what a good build process for the web looked like: an application had a list of scripts to be included on the page, they all got concatenated into one big bundle, and a static script tag in the HTML pointed to the latest version. But over time, user expectations changed. Websites were expected to do more, and at the same time they were still expected to load fast, even on slow mobile connections.
Since this is about finding a minimum viable chunk, let's quickly recap what a chunk is for that purpose.
Here is a very crude illustration of building a web app. The build process collects all the source files that belong to the application, analyzes them, and then distributes the code across a number of output files. We call each one of those output files a chunk, because it contains a chunk of the application code.
Let's stick with this illustration a little longer, though, because you might have noticed something interesting: if we remove the colors, the left side and the right side are eerily similar. It does look like we are taking code that has already been split up into nice, individual units, combining it, just to then split it up again into various files. We could make our lives a lot easier if we kept the separation already present in the source files.
A
What
makes
a
set
of
Chung's
viable?
Fortunately,
the
answer
is
simple
and
it's
my
two
favorite
words.
It
depends
I
know
it's
not
a
very
satisfying
answer
and
I
think
we
can
do
better
on
what
does
it
depend?
We could look at how the way the code is split into chunks affects the user experience. There are many different factors that influence the final experience, but here I want to focus on three. Each of them is directly affected by the way we decide to bundle the app, and each of them could lead to an unacceptable user experience if we ignore it. The three are: download efficiency, cache hit rate, and code execution time.
Download efficiency determines the time it takes to transfer the application to the user's browser or, at the very least, the parts of the application that the user needs right now. If this is too slow, the user may not stick around long enough for anything else to matter. But no matter how fast the download is, we don't want the user to load everything from scratch if we can help it. To provide a great user experience, we want to serve responses from cache as often as possible.
This will determine if our app is fast enough on average, not just in the worst case. After the code has been loaded, it still needs to run. It doesn't really help the user if the code loads super quickly but then takes too long to execute.
This part is even relevant when all the code came from the cache; it's all about how quickly we are done running it. And there are two aspects to this. On one hand, we need access to all relevant code: if we only discover that we need additional code halfway through execution, it will count against the execution time. On the other hand, we really don't want to accidentally run code that's not needed right now; that will cause delays as well.
Download efficiency, cache hit rate, and code execution time are, at least to some degree, mutually exclusive. Fun fact: this kind of diagram is called a ternary plot, which I learned while making the slide. The point is, we can't fully optimize for one of them without sacrificing something about the others.
If we are somewhere in the middle of this triangle and we want to move closer and closer to the download corner, eventually we'll have to move away from both execution and cache hits. What does that mean in practice? Let's ignore execution and just look at download efficiency and cache hits. If we are focused on getting more cache hits, we might come to the conclusion that smaller chunks are always better.
We'd start with a chunk of any size. We'd see that a change towards the end of the chunk invalidates the cache for the rest of the chunk, and so we'd split it up. As long as there's any way to split chunks into smaller pieces, we would have to continue. At the end, we'd have the perfect cache hit rate: every tiny section of code gets its very own chunk, cached independently from the rest of the program.
But what happens when we don't hit the cache? What happened to our download efficiency? It got real bad, and one reason is compression. We usually don't send the actual file contents over the network, especially for text files like JavaScript code; what we send instead is a compressed version. The principal idea is to find patterns in the input to reduce the size of the output, and that is a problem as the inputs get smaller and smaller.
If an input contains ten function declarations, there is an obvious pattern that can be used; if it's a single function, not so much. Which means the same amount of code, split up into many small chunks, will need to transfer more bytes over the network. Each of the chunks will be harder to compress individually than when they were still part of the same response, and so eventually we get to the point where we have to make a choice.
Do we keep splitting chunks to get higher and higher cache hit rates, or would the download efficiency become unacceptable if we do? A viable solution would have to fall somewhere in between.
We can repeat this exercise with the next edge of the triangle, the line between cache hits and execution. Let's take this example: we have two snippets of application code, on the left side A and on the right side B. We'll assume that each one gets loaded on a different page, and they both reuse the same shared code, C.
Let's make a small change: A removes one of its two imports. Now we are faced with a conundrum. We didn't change B or any of its dependencies. If we're interested in cache hits, we would want to preserve the cache of B and of C; neither of them has changed, after all. But wearing our execution-time hat, we don't want to run code that's not necessary, and if C doesn't change, then whenever we load A, we will run code that isn't necessary.
So we have to choose again: do we change or restructure the common chunk so it always reflects the latest state of what is actually shared, or do we keep more cache hits? There's an inherent friction here between allowing global optimizations for fast execution on one side, and stable chunks that stick around in the cache on the other.
If you kept track, there's one more edge we haven't talked about, and that is the one between execution and download. At first glance, they may seem like the same thing.
If we download more code, we execute more code; intuitively they're the same. But it's not always a one-to-one relationship. Sometimes it can take downloading more code to execute less code.
Let's take these two modules. entry is an entry point into the application, and sometimes that entry point imports m, but it is only known at runtime whether it will import m or not. In real code, this could be because it depends on certain browser features, or because it depends on data that was loaded from an API.
For this example, we'll say it's random. And to clarify: this example assumes that using dynamic import isn't good enough. In the cases where we need m, we can't wait for another round trip to get it; it's a crucial part of the initial user experience. Our build process assigned both of these files to the same chunk, and because we wanted to have the fastest possible download, we decided to use a technique called scope hoisting: both modules are merged into one combined module.
We saved all the bytes from setting exports properties and calling require, and it certainly looks like execution should be cheaper as well. But in this particular case, we introduced a problem: before we merged the two modules, calculateValue was only executed when the value was actually needed. Now we're always running it. If that function is expensive, we just made the average execution time a lot worse.
A practical example would be top-level code that builds a complex data structure; think of a big static JSX fragment that got (rightfully) moved out of a component's render function.
This was very situational: usually, scope hoisting reduces the download size and also leads to faster execution times. Also, in the example we were talking about running the entire module body of m lazily, but the same idea applies to any kind of lazy value. For every lazy value, additional code has to be shipped to handle the lazy calculation, but if the value is never needed, or until it is needed, execution can wrap up more quickly.
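As a sketch of that idea, here is a minimal lazy-value helper (the helper and the calculateValue-style workload are illustrative, not from the talk's slides): it costs a few extra bytes to ship, but the expensive work only runs if something actually asks for the value.

```javascript
// A minimal lazy-value helper: shipping it costs a few extra bytes,
// but the computation only runs if (and when) the value is used.
function lazy(compute) {
  let cached;
  let done = false;
  return () => {
    if (!done) {
      cached = compute();
      done = true;
    }
    return cached;
  };
}

// Hypothetical expensive top-level work, like building a big static
// data structure. With scope hoisting alone it would run on every load.
let calls = 0;
const getValue = lazy(() => {
  calls += 1;
  return Array.from({ length: 1000 }, (_, i) => i * i);
});

console.log(calls); // 0 — nothing has run yet
getValue();
getValue();
console.log(calls); // 1 — computed once, then cached
```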
What did all of this tell us about what's viable? We've seen that having more fine-grained chunks can mean that the download becomes too inefficient. We've seen that maintaining high cache hit rates may prevent us from adding important global optimizations. And we've seen that sometimes we need to download more code to ensure execution is fast enough. So any viable solution will have to make some trade-offs between those extremes.
With that, we are ready to draw some conclusions. We know what a chunk is. We know what makes a set of chunks viable. So how small can we go without running into issues?
If you're using ES modules, you can try it out today. You might have already heard about Snowpack or es-dev-server; both of those tools effectively treat your source files as chunks, so there's no need to run an extensive build process. And in development, build speed is often more important than a realistic user experience.
If we want a minimum chunk that's viable for end users, we have to take one more look at the triangle. It turns out the triangle is actually a pyramid; in the earlier slide, one of the corners just wasn't quite visible.
You might say there was some hidden complexity. I'm so sorry. This new corner represents how concerned we are about introducing more complexity into our system. If we want to get the smallest chunks that are viable in production, we'll have to accept some additional complexity. But it starts with small changes.
This is an example of a manifest, or digest, file. It's a file that lists all entry points into the application and maps them to a fingerprinted file to be loaded in production.
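Such a manifest might look something like this sketch (the entry names and fingerprints are invented for illustration):

```javascript
// A hypothetical manifest: each entry point maps to exactly one
// fingerprinted file. The names and hashes are made up.
const manifest = {
  common: "common.3f9a1c.js",
  home: "home.8b22d0.js",
  settings: "settings.51e7ab.js",
};

// On the home page we'd emit two script tags:
// the common chunk plus the page's own chunk.
const tags = ["common", "home"].map(
  (entry) => `<script src="/${manifest[entry]}"></script>`
);

console.log(tags.join("\n"));
```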
In this case, there's also an explicitly listed common chunk that will be loaded for all entry points. So on the home page, there would be two script tags: one for the common chunk and one for the home page chunk.
The problem is, there are only two ways to deal with code that is needed by multiple entry points: either it has to be put into the common chunk, or the same code has to be duplicated for each entry point that needs it.
A huge improvement is to remove the assumption that there's a one-to-one relationship between an entry point and a production chunk. This can be done incrementally. Step 1: add square brackets. Step 2: move the common chunk into each entry point; this is also where we remove the two hard-coded script tags and use a loop instead. Step 3: time to cash in. Now we can update the build config to create more granular chunks; for webpack, this could mean setting splitChunks to 'all'.
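The three steps might look like this sketch. The splitChunks option shown is webpack's real optimization.splitChunks.chunks: 'all' setting; the manifest shape and file names are invented.

```javascript
// Step 1 + 2: each entry maps to an ARRAY of fingerprinted chunks
// ("add square brackets"), and the common chunk simply becomes part
// of every entry's list. Names and hashes are made up.
const manifest = {
  home: ["common.3f9a1c.js", "home.8b22d0.js"],
  settings: ["common.3f9a1c.js", "settings.51e7ab.js"],
};

// The two hard-coded script tags are replaced with a loop.
function scriptTags(entry) {
  return manifest[entry]
    .map((file) => `<script src="/${file}"></script>`)
    .join("\n");
}

console.log(scriptTags("home"));

// Step 3: with the one-to-one assumption gone, the build config can
// produce more granular chunks. For webpack that could be:
const webpackConfig = {
  optimization: {
    splitChunks: { chunks: "all" },
  },
};
```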
I have some good news at this point for users of frameworks like Next.js or Gatsby: all of this may already be taken care of for you.
It may seem like a small change, but when Next.js and Gatsby rolled this out, many larger websites saw their total JavaScript size drop by 20 to 30 percent. So far we haven't touched the application code itself. But what if we did?
We can go one step further on the complexity scale and design our application for progressive fetching. Many applications already use route-level code splitting, which is a basic form of progressive fetching.
It groups similar pages together and builds a special entry point to be used when loading that kind of page. This works well as long as all pages of the type are very similar, but when the pages are assembled dynamically, using a variety of components, it isn't quite optimal anymore. Maybe there's a form that is only visible to logged-in users; maybe there's an optional video player; maybe the page has a comment section that's below the fold.
Progressive fetching means designing our page in a way that loads the code we need as early as possible, but only the code we actually need. In this case, we may want the initial page to reference the form code, but only if the user is logged in, and we don't want to load the code for the comment section until we run out of other things to load, or until the user actively scrolls down. Doing this requires actively designing the application to allow for this kind of fine-grained control of the load order.
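One way to sketch that kind of control is as plain decision logic (the chunk names and page flags are hypothetical): decide per request which chunks the initial page references immediately and which are deferred.

```javascript
// Decide, per request, which chunk ids the initial HTML should
// reference and which ones to defer. All names are illustrative.
function planChunks({ loggedIn, hasVideo }) {
  const immediate = ["core"];
  if (loggedIn) immediate.push("comment-form"); // only logged-in users see the form
  if (hasVideo) immediate.push("video-player"); // the optional player
  // Below-the-fold code waits; in the browser that would be a dynamic
  // import() triggered by requestIdleCallback or an IntersectionObserver
  // when the user scrolls down.
  const deferred = ["comment-section"];
  return { immediate, deferred };
}

console.log(planChunks({ loggedIn: true, hasVideo: false }));
// { immediate: ['core', 'comment-form'], deferred: ['comment-section'] }
```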
Adding dynamic import calls is a start, but to prevent waterfall behavior and unnecessary delays, it's likely not sufficient. Without that kind of architectural change, we'll quickly run out of meaningful chunks to create.
But let's say we did all that: we have all these small chunks, and our application can leverage them effectively. If we stop here, downloading the resources for our website may be highly inefficient. Unless we get very lucky with caching, and also ignore first-time visitors, we need to deal with the download side somehow. We touched the build.
We touched the application code. Time to take a closer look at how we are serving chunks. The simplest solution is that we serve each chunk as a static file from a CDN: there's one script tag per chunk, one HTTP request per chunk, and one HTTP response per chunk. As we covered before, transferring each of these small files on its own isn't the most efficient way to download the contents. Enter dynamic bundling, in its most basic form.
The idea is quite simple: instead of serving each file individually, there's one HTTP endpoint that accepts the IDs of multiple chunks, and all of those chunks are then sent back in one response. Congrats, we just reinvented the big bundle we started with. But not quite: since the bundled response only contains the chunks the client asked for, we are not over-fetching code.
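The serving side can be sketched as the logic behind such an endpoint (the chunk ids, contents, and delimiter format are invented): given the requested ids, concatenate the matching chunks into one response body.

```javascript
// An in-memory chunk store standing in for fingerprinted files on disk.
const chunkStore = new Map([
  ["common", "/* common chunk */"],
  ["home", "/* home chunk */"],
  ["settings", "/* settings chunk */"],
]);

// Handler logic for a request like GET /chunks?ids=common,home —
// look up every requested chunk and send them back as one response.
function bundleChunks(ids) {
  const missing = ids.filter((id) => !chunkStore.has(id));
  if (missing.length > 0) {
    throw new Error(`Unknown chunk ids: ${missing.join(", ")}`);
  }
  // Delimit chunks so a client (e.g. a service worker) could split
  // the response apart and cache each chunk individually.
  return ids.map((id) => `//# chunk=${id}\n${chunkStore.get(id)}`).join("\n");
}

console.log(bundleChunks(["common", "home"]));
```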
We are doing well on the execution side, and with the way we are combining the chunks, it's about as efficient to download as we can make it. But we did sacrifice our cache hit rate. One way to make up for it is to use a service worker, as long as the service worker understands how this HTTP endpoint works.
It can compare the list of chunk IDs in the request against the cache and then only request the chunks that aren't cached yet, and once it gets the response back from the server, it can extract the chunk contents and cache them individually. In the future, we may also be able to use Web Bundles for this purpose.
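The service worker's role can be sketched as pure planning logic (the URL format and cache shape are assumptions, not from the talk): compare the requested chunk ids against what's already cached, and only fetch the rest.

```javascript
// Given the chunk ids a page asked for and the ids already cached,
// work out what the service worker should actually fetch.
function planFetch(requestedIds, cachedIds) {
  const cached = new Set(cachedIds);
  const toFetch = requestedIds.filter((id) => !cached.has(id));
  return {
    // These can be served straight from the service worker's cache.
    fromCache: requestedIds.filter((id) => cached.has(id)),
    // Rewritten request for only the missing chunks, if any.
    url: toFetch.length > 0 ? `/chunks?ids=${toFetch.join(",")}` : null,
  };
}

console.log(planFetch(["common", "home", "search"], ["common"]));
// fetches only "home" and "search"; serves "common" from cache
```

In a real service worker, this logic would sit inside a fetch event handler, with the Cache Storage API standing in for the `cachedIds` list.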
With that in place, we have a solution that runs exactly the code that is absolutely necessary, downloads what it needs efficiently, and can achieve a high cache hit rate. And that may be the minimum viable chunk.
Thank you for watching. I hope you enjoyed this exploration of taking code splitting to the extreme. As promised, here is the link to the guide on setting up granular chunks, and if I made you at all curious about dynamic bundling, the folks from Netflix gave some great talks on how they use dynamic bundling to run A/B experiments at scale. Also, my Twitter handle again, just in case you want to chat about JS modules or novel ways to bundle web apps; that might be the best way to reach me. Cheers.