B
Hey, so yeah, thanks for tuning in again for the third chapter. Does someone want to raise something before we start?
C
Yeah, I found that it sometimes goes too much into detail. Or maybe it's good that it goes into detail, but sometimes I felt like skipping a few of the details, because I could spend a lot of hours on this to understand everything. Sometimes I thought, okay, I need to read this, and maybe it makes sense in the bigger picture later, but there are so many details in it. I think it just depends on how much time you want to spend on it.
B
Also, it's skipping some things where you think, okay, there's this string variable assignment in one stack operation, and you don't know what it does until the end, and then he barely explains what it does. So the way he builds it up is a bit hard to understand.
B
Okay, what I found interesting was this difference, that they only introduced this execution part, the compiler part, in 1.9. That's interesting to me. And what was also interesting is that it performed a bit worse in the beginning, with fewer instructions.
B
So maybe, Yorick, you have been around long enough that you've seen the Ruby 1.8 to 1.9 switch in Rails, or maybe Matthias also has some insights into that. I would have imagined that Ruby 1.9 performs worse on Rails than Ruby 1.8, because you don't run that many instructions when you perform a Rails request.
E
Yeah, so the last time I worked with Ruby 1.8 was back in 2012, I think, when my previous employer ported over a bunch of 1.8 apps to 1.9.3 or whatever it was at the time.
E
I think the performance kind of depends on what versions they compared, because when Ruby 1.9 was first introduced, it was kind of a train wreck: it was a completely different VM implementation, and if I recall, a lot of stuff was lacking and buggy. It was kind of the Python 2 to Python 3 issue, although it took less time to resolve.
E
So I think if you compare Ruby today to 1.8, it will basically blow Ruby 1.8 out of the water. As to why something with fewer instructions might be slower, that depends entirely on what those instructions do. For example, in an instruction set there are typically two approaches you can take: you can have more but generally smaller instructions, so instructions with fewer arguments, or fewer instructions with more arguments. I might be mistaken here, but I think Apple's new M1 chip...
E
They take a wider approach where some instructions have quite a number of arguments. It's been a hotly debated topic for many decades, where some people are in favor of one approach or the other.
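As a quick way to see the instructions being discussed, CRuby ships an API for disassembling a snippet into its YARV instruction sequence. A minimal sketch (the exact instruction names vary by Ruby version):

```ruby
# Compile a small snippet and print the YARV instructions it becomes.
# RubyVM::InstructionSequence is part of stock CRuby.
insns = RubyVM::InstructionSequence.compile("a = 42; a + 1").disasm
puts insns
```

The output lists stack-machine instructions such as `putobject 42` for pushing the literal onto the value stack, which makes the "many small instructions" design concrete.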
F
Yeah, sorry, I missed the first part of the question, I had to answer the door, but this is about Ruby 1.8 versus Ruby 1.9, which introduced YARV, right?
F
Yeah, so this switch definitely predates me joining GitLab, and in between I had not worked on Ruby for a while as well. So my Ruby 1.8 experience was a long time ago, and I can't give you any specifics as to how the two compared for a specific app. But outside of what Yorick said already, my understanding as well is that the main benefit was the move to a bytecode model; that's the main thing it accomplished, right?
F
It is still, yeah, not as condensed as bytecode might be. Although, again, that also depends a little bit on the bytecode and how you encode it. There's this trade-off, I guess a size versus speed trade-off: the fewer instructions you have available to express the same program, the more nodes you typically need to represent that program.
F
So
that
usually
means
it
will
consume
more
memory,
but
it
might
be
faster
to
execute,
and
then
the
converse
is
true
for
a
for
a
richer
instruction
set.
So
yeah
there's
this
traditional
trade-off.
It
was,
I
remember,
like
the
last
time
I
actually
had
to
deal
with
this
was
for
android.
F
I
don't
know
if
anyone
has
ever
done
any
android
development,
but
they
they
completely
changed
the
way
they
interpret
or
they
deal
with
java
by
code
right,
because
the
java
vm
is
also
the
stack-based
machine
and
they
changed
this
actually
to
a
register-based
vm
like
initially
it
was
delbig,
and
then
I
totally
wrote
it
again
into
art,
but
that's
a
different
story.
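The stack-based versus register-based contrast can be sketched with two toy interpreters evaluating the same addition (this is an illustration of the two designs, not Dalvik or YARV code):

```ruby
# Stack machine: more, smaller instructions; operands are implicit on a stack.
def run_stack(program)
  stack = []
  program.each do |op, arg|
    case op
    when :push then stack.push(arg)
    when :add  then b = stack.pop; a = stack.pop; stack.push(a + b)
    end
  end
  stack.pop
end

# Register machine: instructions carry their operands explicitly as arguments.
def run_register(program)
  regs = {}
  program.each do |op, *args|
    case op
    when :load then regs[args[0]] = args[1]
    when :add  then regs[args[0]] = regs[args[1]] + regs[args[2]]
    end
  end
  regs[:r0]
end

stack_result    = run_stack([[:push, 2], [:push, 3], [:add]])
register_result = run_register([[:load, :r1, 2], [:load, :r2, 3],
                                [:add, :r0, :r1, :r2]])
```

Both compute 2 + 3; the stack version needs no operand fields on `:add`, while the register version names all three registers explicitly, which is the size-versus-decode trade-off mentioned above.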
F
So this helped reduce the size of the binary.
F
You
have
to
load
into
memory
yeah
and,
like
another
benefit,
I
can
see
of
this
approach.
Yarf
takes
and
by
code
is
a
jet
and
also
remove
reusing.
C
Also, part of the question was whether there's a big performance impact for Rails, because a request is rather short-lived, and I was wondering about Rails in production. If I understand correctly, the thing that makes Ruby 1.9 slower is that it has to compile the code before it runs, and then, once it's compiled, it's actually faster than 1.8. But for Rails, if it runs in production, does it compile once at startup, or does it compile on every request?
F
Yeah,
that's
a
great
question,
so
this
actually,
you
have
some
influence
on
this.
The
way
we
run-
and
this
is
maybe
not
rail,
specific
so
much,
but
rather
how
you
run
your
application,
so
we
use
a
prefork
server
and
sidekiq
operates
similarly
or
at
least
the
way
we
run
it,
where
you
have
some
kind
of
some
kind
of
main
process
that
you
fork
other
processes
off
of
and
what
we
do
is
before
we
do
that
we
pre-load
the
application.
F
So
what
this
means
is
that
the
runtime
will
go
through
it
will
kind
of
go
through
as
much
code
as
possible
that
we
can
see
as
used
during
the
lifetime
of
the
application
and
at
that
point
in
time
that
this
is
when
actually.
This
is
all
this.
F
This
all
happens
so
that
you
basically
have
the
final
form
of
the
application
in
memory
already
before
you
even
serve
the
first
request,
that's
kind
of
what
puma
does
as
well,
so
we
pre-load
the
application
and
the
puma
master
and
then
there's
additional
benefits
from
this
from
a
memory
perspective,
because
if
you
then
go
continue
to
fork
off
workers
from
this
master
pro
or
the
sorry,
the
main
process,
which
already
is
the
application
preloaded
in
memory
there
there
are
some
savings
you
can
have
from
a
memory
perspective,
because
these
processes
will
look
very
similar
by
the
time
they
fork.
F
So
they
can
re.
The
operating
system
is
actually
able
to
share
memory
between
these
processes.
That
would
be
totally
transparent
to
each
individual
process,
but
in
terms
of
the
raw
physical
memory
that's
actually
being
used
like
consumed
on
that,
node
will
actually
be
less
so
so
that's
not
a
benefit
I
can,
I
can
think
of,
but
that's.
F
The VM can actually run it, yeah. So then you expend that memory earlier, but then you don't have to redo it.
F
But
yeah
by
the
way
I
agree,
it
was
quite
a
dense
chapter.
I
read
it
like
a
year
ago,
so
I
don't
remember
everything
because
I
yeah
I
read
the
book
a
while
ago,
but
I
I
thought
it
was,
would
be
a
nice
refresher
as
well
to
maybe
drop
in
and
see
what
you
made
of
it.
F
I
think
I
still
think
it's
nice
that
there
is
a
book
that
actually
goes
into
that
kind
of
detail
and
while
it's
not
something
you
will
typically,
you
probably
won't
be
thinking
about
this
all
the
time
right.
Well,
writing
application
code,
but
I
thought
it's.
I
thought
it's
nice
that
there
is
a
book
which,
actually
you
know,
does
look
under
the
hood
of
what's
going
on
and
there's
not
that
many,
I'm
not
aware
of
any
other
book
that
goes
into
that
sort
of
detail.
F
So
I
thought
it
was
kind
of
neat
to
to
have
that.
Yeah.
D
Well, as I said, for me it was a bit difficult to read this chapter because it's dense: there are so many details you have to remember. But I figured out my way to read these books: just skip all the small details that I'm not really understanding or don't have time to understand right now, because I feel that in the next chapters we are going to read about classes and blocks, some more, let's say, down-to-earth things that we use in everyday work.
D
For example, in this chapter we get the list of special variables that can show you things like the arguments or the error string and so on, but I was thinking about why we need them, what the purpose of these special variables is. I guess for users of the language, as we are, they're not of much practical value, but maybe I'm mistaken.
G
But
I
guess
it's
difficult
here
in
this
book,
since
we
are
going
head
first,
everything
is
presented
to
us
in
like
in
the
very
beginning.
That's
why
it's,
I
guess
less
handable,
but
I
I
think
I
can
still
quite
follow
it.
If
I
read
far
enough
and
don't
try
to
understand
everything,
as
I
read.
G
The EP, oh yeah, I don't know if there's an answer, or maybe I missed the answer.
G
So
there's
a
paragraph
saying
that
the
paragraph
first
introduced
the
idea
of
environment
pointer
and
it
goes
down
and
say
that
the
sp
will
keep,
will
change
quite
a
lot,
but
the
ep
will
remain
constant.
Normally
that's
what
a
paragraph
says,
but
I
think
throughout
the
chapter
I
see
that
the
eps
also
change.
G
We
have
like,
in
example,
where
we
have
ep
and
previous
ep.
So
I
don't
quite
understand
why
the
paragraph
specifically
say
that
ep
will
remain
constant.
F
It's been a while, but I think the difference is simply that the environment pointer is used to track lexical scopes. This is my understanding of it.
F
Every time you enter a method, you create a new stack frame, so the stack pointer will always refer to the scope you're currently in. But, and maybe this is only important for these special variables, or if you pass a block where you have a closure, where you can access data that is outside the current method scope, then you need to know where to look that up. So that's my understanding.
F
Why
why
it's
talking
about
this
ladder
of
environment
pointers
that
you
have
to
traverse
to
resolve
something
that
is
actually
not
in
the
same
lexical
scope
as
your
current
method,
so
that
you
can
kind
of
follow
the
breadcrumbs
back
to
where
it
was
originally
defined?
G
I don't have a page number, the EPUB doesn't show me the numbers, but I don't think it's...
G
Then I guess there's no answer for this, but I have another question. I think it's implied that the virtual machine will keep multiple environment pointers; in a figure he shows a previous EP, so I guess there is an array of these environment pointers.
F
In the block you're able to access variables that are defined outside of the immediate scope of the block, right? So the runtime needs to have a way of finding where these are defined. It needs to be able to backtrack up the stack to see what value they have when it executes this block. So I...
F
Is
why
it
needs
to
do
some
extra
bookkeeping
there,
because,
as
you
mentioned
earlier,
I
think
the
initial
value
of
the
environment
pointer
it's
actually
pointing
to
the
area
where
the
local
variables
of
that
current,
the
current
block
or
method
are
stored.
So
so
that's
why
it
needs
to
so.
F
It is not sufficient to keep one pointer there, otherwise it would lose track of what was defined outside the immediate scope of what's currently executing. It sounds like this is only really used when you execute blocks, but I'm not totally sure; maybe I'm missing something.
B
So this previous EP, this is just a pointer, so it's an environment pointer, and I think it's the last grayed-out part, the "climbing the environment pointer ladder" bit in C, where it's what you just described with the blocks: it basically gets the value based on the environment pointer minus the index, though it says it's the previous environment pointer, that's the previous one.
B
So we see each other next week again. I've seen a few people declining the 22nd, I think, so maybe we want to skip that one because it's so close to Christmas and the general holidays, but we can check. Otherwise, I wish you all a good afternoon, have a nice day, and see you next week.