From YouTube: 2023-03-02 Code Review Performance Round Table
A
And we're live. Welcome, everybody, to yet another weekly performance roundtable for our group, code review, and we have a new attendee. So welcome. Do you want to give a little intro on your role since you're here in the ring? Everybody watching the recording might be questioning who you are, what you're doing, and what your interest is in this topic.
B
Sure. So I'm mostly here to listen today, but I'm a principal across Create; I just joined recently. I was over in incubation and I was popped into here recently, and that's sort of my background here. I used to work at a bank before this, as an architect in investment banking, working on IPOs and things like that. And prior to that I spent some time at Microsoft and at Salesforce, doing other architecture and engineering work on Kubernetes and other platforms.
A
Awesome, great to have you. Our discussions are kind of loose, and they're sometimes initiated by just some brainstorming or an issue. So I just take a look at the board that we have linked to the agenda; the agenda is in the invite, so you should be able to get to that. It's probably attached next to the agenda for the company. And yeah, feel free to create your own issues, your proposals, questions, everything!
A
That's why it's a round table: everything's up to be questioned. So, moving on to the topics that we have currently open, I want to give you the opportunity to bring up again the micro code review front end POC. Last week, or last call, we had a couple of discussions sparked out of that. So I wanted to check the pulse here: have there been any advances? Anything you want to discuss more about this POC, anything that anybody wants to share in particular?
A
Right
and
if
not
I,
think
we're
still
not
yeah
I,
don't
feel
comfortable.
Yet
closing
that
issue,
in
particular
I
mean
boc,
still
has
to
be
taken
apart
and
pulled
into
particular
feature
proposals,
I
guess
but
I
guess
the
natural
Revolution
so
I'll
keep
it
open
for
now
for
now
and
move
over
to
the
discussions
that
sparked
from
that
particular
proposal
in
POC,
which
is,
we
created
us
an
issue
for
a
spike
about
rendering
the
Mr
diffs
from
diff
gloves
instead
of
line
by
line
Json
structures.
A
So
there's
been
a
couple
of
discussions
in
there
already
so
essentially,
this
is
about
like
checking
whether
we
were
able
to
pull
it
off
from
having
the
back
end
Deliver
us
a
full
blob,
rather
than
a
line
by
line
structure
which
wasn't
someone
perhaps
the
processing,
but
it
could
allow
us
to
do
some
Nifty
Things
on
the
front
end
if
we
just
take
care
of
the
whole
different
head
ones,
but
then
the
question
becomes
how
to
identify.
A
How
do
I
identify
the
lines
to
match
the
notes
and
comments
to
the
difference?
Like
that's,
the
first
biggest
hurdle
which
is
I
can
share
the
screen.
We
need
to
start
that
so
share
the
screen.
A
Kind
of
like
one
of
our
biggest
questions,
is
the.
How
can
we
match
notes
from
this
lines?
I
shared
some
rained
up
here
from
previous
fall
to
have
euids
populated
both
of
the
front
and
back
end
that
we
can
have
it
used
to
match
those
two
ends
and
then
Thomas
you
had
something
to
share.
Do
you
want
to
vocalize
that
since
you're
here.
C
Sure, yeah. Let me make sure we're phrasing this correctly: if the back end is going to be delivering us fully baked diffs, which is a good idea and an approach here, presumably they have some way of identifying each line of diff that they have. I don't know.
C
You
know
there
are
a
couple
of
mechanisms
that
we're
using
currently
I
think
right
now,
our
our
identifier
is
basically
just
like
the
file
name
hash
to
sha1
and
then
like
some
line
numbers
appended
to
that,
or
something
like
that
or
I
think
it
might
have
the
it
has
like
the
three
ad
remove
change,
States
also
included
into
that.
C
But
it's
basically
just
a
shot
one
of
of
some
plain
text,
but
if
there's
some
way
to
ID
lines,
they
could
just
send
those
lines
back
with
the
HTML
that
they
send
along
to
us.
C
So it seems like, if they get some of those IDs back to us, looking those up in the returned structure that they send is pretty trivial for a browser. More or less, I'd say it's an O(1). That's about right, I think.
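The O(1) lookup could look something like this sketch, assuming the back end embeds line IDs in the HTML and discussions arrive with a matching `lineId` field (both names are assumptions, not the real data shape):

```javascript
// Index discussions by line ID once, then each lookup is a constant-time
// Map access rather than a scan over all diff lines.
function indexDiscussions(discussions) {
  const byLine = new Map();
  for (const d of discussions) {
    if (!byLine.has(d.lineId)) byLine.set(d.lineId, []);
    byLine.get(d.lineId).push(d);
  }
  return byLine;
}
```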
A
The
question
immediately
that
jumps
sorry
for
interrupting
is
so
how
were
you
so
if
we're
getting
a
blob
out
of
the
back
end,
syntax
highlighted
I
guess
how
are
we
so
you're
saying
like
add
an
attribute
within
the
uid?
Is
that
what
you
were
thinking
at
a
span
within
an
ID
or
something
yeah.
C
The biggest thing would be just having everything in the DOM.
C
I mean, that data is somewhere already right now, right? Just because it's not delivered to us in the DOM, it is delivered to us in data, and then we do the matching up. So that data is coming down somewhere, and the most efficient way to do it is probably HTML: the compression there can happen probably a lot better than in, like, JSON. So, right.
A
Okay, so the diffs, right now: the request we do to render the diff lines is the batch diffs, and inside the batch diffs we have the highlighted diff lines. Each entry in that array is a line, right, and within that line object we have pretty much all the metadata per line, which we then use in rich text to render the HTML of that.
A
So,
as
you
can
see,
the
line
code
is
what
what
we're
talking
about
here
would
be
the
uid
I.
Guess
we'll
replace
this
with
that,
but
the
idea,
then,
that
we're
proposing
is
instead
of
having
this
line
by
line.
We
would
have
something
that,
similar
to
this,
you
have
one
thing
with
all
the
HTML
for
all
the
blob
files
kind
of
thing.
This
is
profile,
I,
guess,
yes,
this
would
be
profile.
A
The main difference is this: we won't have it line by line; we'll have one thing with all the HTML, but we lose all the metadata contextualized to that. I'm guessing the difference is that this would be markup instead of just a raw file, a raw diff, right?
B
You dump the whole HTML there in the diff, and then, per file, you render that out.
A
Correct
and
last
last
call
by
the
way,
if
you
want
to
catch
the
recording,
we
discussed
that
length
the
the
possibility
of
doing
syntax,
highlighting
it
on
the
front
end
like
we're
doing
on
the
blog
page,
where
we're
using
highlight.js,
but
the
support
the
language
support
and
the
Fidelity
isn't
as
good
as
Rouge
that
we're
using
on
the
back
end.
So
it
feels
like
there's
essentially
some
things
we
can
do
to
pre-render
or
pre-calculate
the
disks.
So
that's
highlighted
before
it's
needed
to
make
it
faster
to
deliver.
C
Yeah, I think one of the reasons that we're even talking about this is because the demo is noticeably faster than our diffs on a lot of metrics, and so one of the questions is: how can we get something kind of like the public API, but for us? One of the things that the public API does is just send the whole blob back. We don't get diff lines; it's just "here's the diff", the blob of diff, and then I was doing the highlighting on the front end.
A
So
that's
what
we're
talking
about
looking
something
like
60,
something
that
exists
yes
and
then
switching
over
to
that
one.
Just
named
cash
in
there,
your
next
DB
yeah
so
back
end
folks.
C
So
I
want
to
make
sure
I
understood
your
question
yeah.
You
said
we
need
unique
IDs
so
that
we
know
which
line
we're
attaching
say
a
comment
to,
for
example,
yeah
trying
to
I
want
to
make
sure
I
don't
misspeak
here.
C
We'll
edit
it
out
I,
yes,
I-
think
that's
correct.
I
think
that
the
the
primary
reason
for
iding
line
is
knowing
where
to
put
discussions,
and
that
includes
new
discussions
coming
back
so
like.
If
the
line
doesn't
have
a
discussion
on
it,
the
user
has
to
be
able
to
click
and
say.
I
would
like
on
this
line,
to
create
a
discussion.
C
Andre
commented
in
the
issue
about
well
what,
if
just
kind
of
like
the
first
line
or
like
this,
this
blob
of
changes
has
an
ID,
and
then
we
could
do
relative
line
numbers
from
there,
which
I
think
would
work.
I.
Think
that's!
Okay,
it's
just
a
matter
of
like
the
problem
is,
do
we
like?
Do
we
always
the
back
end?
Who's
gonna
have
to
do
a
lot
of
computation.
For
that,
because,
let's
say,
let's
say
someone
changes,
one
line
in
a
diff.
C
What
we
send
back
is
I
believe
it's
like
what
three
lines
before
and
three
lines
after
or
something
like
that,
and
so
the
blob
that
we
get
back
is
the
id'd
line
would
actually
not
be
the
change
line.
It
would
be
an
unchanged
line
and
then
say
they
want
to
leave
a
comment
on
the
change
line.
Okay,
so
it's
the
ID
plus
three
all
right.
Next
merger
press
version:
they
changed
two
more
lines,
so
the
ID
has
gone
back
a
little
bit.
C
We
can
still
calculate
the
offset
right
like
the
not
the
the
idea
of
the
line
coming
back
has
changed
and
the
offset
is
going
to
be
like
lost
five
now,
because
the
comment
has
like
moved
down
a
little
bit,
there's
more
dip
lines
in
there,
but
that's
like
nothing
is
deterministic.
I
guess,
like
everything
has
to
be
computed
on
the
fly
like.
What's
the
id
of
this
comment?
Well,
it's
the
it's.
The
new
diff
first
line,
plus
five,
because
that
was
the
old
diff
plus
three
or
whatever
it
would
work
but
I
think
iding.
C
Each
line
is
it
sets.
C
F
Easier
but
I
think
it's
also
just
simpler
to
treat
the
back
end
more
like
just
the
data
store
or
the
front-end
application
right.
So
we're
not
trying
we're
not
mixing
across
that
boundary.
What
we're
calculating
okay,
I
just
want
to
make
sure
I
understood
why
you
needed
it
and
like
what
his
role
was.
I.
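Andre's relative-numbering idea, as discussed, might look like this sketch (purely illustrative; `hunkStartId` and the offset arithmetic are assumptions, not the proposed design):

```javascript
// Anchor a comment as (hunk-start ID, offset) instead of ID'ing every line.
function anchorFor(hunkStartId, commentLine, hunkStartLine) {
  return { hunkStartId, offset: commentLine - hunkStartLine };
}

// When a new MR version shifts the hunk, only the hunk's absolute start
// line needs recomputing; the stored offset is reapplied on the fly.
function resolveLine(anchor, newHunkStartLine) {
  return newHunkStartLine + anchor.offset;
}
```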
C
I think that's the only reason. I really want other people to check me on that, because ID'ing lines has been sort of... I mean, right now we really only go by file IDs, and then we do line codes for things like scrolling. I think the only reason we need ID'd lines is comments, but I could be wrong there; I could be missing stuff.
A
There are a couple more things that we use the lines for that I've identified. So, anchor links: if you're linking to a line, we need a way to identify those lines. And the other is code quality feedback; you know how you have little stripes in the gutter next to the line numbers and stuff. So essentially, any decoration we want to do on a line, we would have to have identification for those lines some way or another.
F
...a change in the code. It's just that, because we have that concept, we built all this stuff around it. So if you take that away, now you have to figure out how to unwind those assumptions. So those, for example: the code quality report annotations.
A
I want to add one more thing to the mix, because I feel like this is the moment to add it here. One of the natures of the current implementation that, ideally, we could improve in the new, theoretically new, implementation is this: currently, the blob hunks that are rendered on the UI are based on the lines that were changed by the MR, right? And then we request the comments in a separate request, and then we match them up.
A
This distance between the notes and the diff lines: if we can improve that, I'll add another level of food for thought.
A
Some
competitors
do
allow
commenting
not
on
the
line
level
but
at
the
code
level
and
some
some
allow
you
to
go
in,
like
just
a
text.
Select
kind
of
thing:
I
want
to
select
this
part
of
a
line,
others
do
it
as
a
code
object,
so
I'm,
cold,
I'm,
commenting
on
a
knit
Clause
I'm
commenting
on
a
function,
I'm
commenting
on
a
class.
So
that's
going
deeper
than
just
the
file
line
resolution
since
we're
discussing
these
things.
This
might
be
a
good
opportunity
to
consider.
A
Are
there
ways
that
we
can,
while
improving
performance,
introducing
some
new
features
set
as
well,
that
actually
potentially
raise
the
bar
for
the
quality
of
our
products?
C
I
am
a
couple
of
times,
I've
thought
about
a
couple
times:
I've
thought
about
how
we
could
calculate
highlights
on
the
front
end
and
I
think
the
only
way
is
if
we
have
like
the
full
file
and
then
the
diff
says
of
this
file.
It's
these
lines
that
are
different,
because
then
we
could
compute
the
Syntax
for
the
the
whole
the
full
file
at
the
time,
but
I
don't
know
of
it.
C
I
can't
I
can't
think
of
a
way
to
make
that
efficient,
because,
like
downloading
an
entire
file
for
every
diff
would
be
pardon
my
language,
but
that
would
be
insane
I.
Don't
I,
don't
know,
I,
don't
know
how
we
could
do
that
in
our
UI,
with,
with
the
one
caveat
that,
if
we
can,
if
we
can
figure
out
some
way
to
do
this
more
efficiently,
we
could
cache
full
files.
Like
you
downloaded
kind
of
on.
C
The full file is cached, and then we just kind of parse out the diffs in the middle of it. But to your point about commenting on sub-lines: actually, in that spike issue that we spun off last time, I linked this document that I created a long time ago about why we need IDs, with just a brief overview.
C
This
is
like
the
top
bars
like
how
we
are
currently
where
we're
trying
to
ID
files
like
after
the
front
end,
gets
them
and
I
was
proposing
or
I
am
proposing
that
the
the
back
end
here
basically
has
the
ID
so
that
once
we
get
it
off
the
API,
everyone
agrees
on
what
the
IDS
are,
and
the
reason
for
that
is
I
was
like
Hey
at
some
point
in
the
future.
We
would
probably
want
to
be
like
commenting
on
code
hunks
with
abstracts
and
Abstract
syntax
tree
or
however,
we'd
like
to
parse
it.
C
We
can
start
commenting
on
sub
line
chunks
and
we're
going
to
need
a
way
to
identify
individual
lines
so
that
we
can
parse
out
hunks
of
code.
But
I
just
wanted
to
mention
that,
because
it's
what
you
just
said,
Andre
about
being
able
to
do
more
nuanced
stuff.
On
diffs.
A
Ids,
it's
like
how
to
identify
the
things
that
can
become
as
a
learning
model
and
equate
all
of
EMR
versions
variations.
One
thing
they
do
really
nice
I.
Just
add
it
to
the
note,
but
I
didn't
voice
it.
What
I
think
they
do
really
nice
when
they
allow
you
to
comment
in
the
object
of
the
code.
Is
that
the
line
changes?
Because
you
change
something
above
it
like?
The
comment
can
persist
across
multiple
Mr
versions,
because
you
comment
on
that
class
definition,
which
again
it's
much
deeper,
complicated
complexity,
but
will
be
an
experience.
A
Does
any
of
this
achievable
or
or
should
we
open
an
issue
to
because
tied
to
the
next
issue
that
carry
I
don't
know
if
you've
created
that
issue
for
the
pre-calculating
on
push?
A
If
you
did,
please
add
a
link
to
the
agenda,
the
one
I'm
selecting
right
now
so
tied
together
with
that
we
could
potentially
have
some
smarter
ways
of
annotating
bad,
it's
not
just
as
highlighting,
but
you
could
have
a
couple
of
more
smarter
operations
done
to
it
to
reveal
identifier.
You
can
annotate
it
with
beyond
the
surprise,
coloring
annotated
with
identifiers
that
would
be
used
for
everybody,
because
identifiers
could
be
smarter
or
Dumber
depending
on
how
much
we
want
to
go
in
terms
of
resolution.
C
That
I
think
that
we
should
consider
a
future
world
where
we
have
the
ability
to
comment
on
you
know,
class
names
or
file
names
or
whatever,
and
so
we
should
build
that
into
whatever
we
plan
to
do
here,
but
I
think
we
should
keep
this
I
think
we
should
keep
the
scope
to.
How
can
we
deliver
syntax
highlighted
lines
more
efficiently
or
whatever,
while
keeping
that
in
our
minds?
C
I guess it would be interesting to see someone else also create a concept of commenting on the AST, I guess, or some code.
C
I think that's a good, maybe, path forward: saying, let's stop treating lines as our foundational, special thing in MRs, and just start treating them as one type of thing that you can interact with. Because at some point we could break that down and say it's not just lines; it's also small pieces of code.
F
Yeah, I mean, once you get rid of the line concept, you're just dealing with diff blobs, right? Then, for example, you know that situation where we're like: oh, the lines disappeared, so now I don't know where to put the comment? It's like, I don't know, put it at the bottom of the file, or, you know, you can kind of work around solving these problems once we remove that dependency.
A
The more we're talking, the more it seems like we're not necessarily looking for a UUID format. With what Thomas is describing, we're almost looking for, like, a URN, or even sort of a URL. And I'm thinking more in the RDF space, where you have vocabularies to define concepts: we could have a layer of things that can be commented on, and then the identifier itself would be the locator.
A
So
we
could
then
the
front
end
could
then
use
that
locator
to
know
how
to
render
them
out
so
in
the
in
the
locator
we
could
extract.
Is
this?
Is
this
apartment?
Is
this
a
file
name?
Is
this
a
commit
message,
slash
and
then
in
front
of
it?
We
have
the
ID
of
that
object,
whatever
that
is,
and
that
would
be
the
format
of
the
uid,
probably
I.
Guess
we've
been
talking
about.
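A minimal sketch of this typed-locator idea, with a made-up vocabulary (`file`, `comment`, `commit-message`); none of these names come from the actual proposal:

```javascript
// Build and parse a locator of the form "type/id;key=value", so the
// front end can tell what kind of object an identifier points at.
function buildLocator(type, id, extra = {}) {
  const params = Object.entries(extra).map(([k, v]) => `${k}=${v}`).join(';');
  return params ? `${type}/${id};${params}` : `${type}/${id}`;
}

function parseLocator(locator) {
  const [head, ...params] = locator.split(';');
  const [type, id] = head.split('/');
  return { type, id, params: Object.fromEntries(params.map((p) => p.split('='))) };
}
```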
A
We
start
with
that
Baseline.
We
can
then
extend
the
number
of
object
types
that
we
could
support
in
the
future
without
restricting
ourselves
into
lines.
We
started
with
the
lines,
but
then
expand
later
some
sort
of
topic
of
ideas.
This
might
make
sense
in
trying
out
coming
back
to
the
top.
You
will
be
nested
to
have
a
POC
to
have
this
particular
problem
solved.
Have
a
string
act
as
the
blue
between
the
two
things:
do
the
the
notes
and
the
dip
lines
if
they
come
all
at
once.
A
That's
still
a
question
to
be
answered,
so
that's
what
that
spike
is
about
I
guess
so
all
that's
probably
missing
is
scheduling.
Website
I,
guess,
I
have
a
back
inner
front
and
assigned
to
it
or
or
a
full
stack.
Maybe
a
principal
in
G
I,
don't
know,
but
yeah
I'll
leave
it
open,
but
we'll
definitely
keep
that
issue
in
mind
to
try
to
schedule
an
upcoming
Milestones
to
see
if
we
can
get
some
answers
in
that
sense,
either
this
feasible.
C
Very,
very
very
obstacles,
or
very,
very
naively,
I
think
this
is
actually
a
pretty
light.
Lift
for
front
end
I'm,
not
100
sure
about
that.
But
if
the
backing
is
delivering
us
fully
baked
diffs
and
the
the
idea
here
is
maybe
just
swap
out
IDs
or
something
the
front
end
just
has
to
kind
of
make
sure
that
we're
computed
like
matching
up
IDs
and
diffs
properly,
if
the
IDS
change
otherwise.
C
This
fully
baked
diff
into
the
Dom
and
attach
discussions
just
like
we
do
now
and
I
I
think
this
is
going
to
be
mostly
a
back
mostly
back
end
like
lifting
here,
I'm,
not
sure.
Obviously,
the
front
end,
we
run
into
issues
all
the
time
making
the
most
minor
changes.
So
it's
it's
there's
going
to
be
issues
but
I
think
a
lot
of
the
work
is
going
to
be
back
in.
A
Okay, so I think we've discussed that in depth, enough for this call. We haven't gotten to the bottom of it, but we have a call next week, so I'll move the discussion over to that pre-calculating topic.
A
The
data
we
need
to
render
the
Mr
so
Gary.
Is
there
an
issue
for
it
and
do
we
have
any
more
thoughts
on
that
particular
topic
since
last
time
we
discussed
it.
A
If I'm not mistaken, the system we have to do these sorts of things is: you schedule a job to generate the files we need for the MR upon the push of a code change, in whatever UI or interface you use, command line, Web IDE, web UI, all that stuff. So there's a pipeline because of a code change pushed into the repository.
C
The user doesn't have to wait a long time while an on-demand request parses the Git data and creates a diff for them to see; it just starts happening, and then, when they visit, they get a cached response, or they get a pre-compiled response. Is that what we're...
F
Doing? Yeah. I mean, the complaint, the problem that has been presented to me, is that, generically, you see that the slowness is from the highlighting that's happening on the first load. And so, if we accept that at face value, it's: okay, well, how do I get rid of that? Well, we pre-calculate it, right? Just warm the cache. Because if subsequent, two-plus loads are fine, then let's just...
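The warm-the-cache idea boils down to paying the computation once, at push time, so the first visit is a cache hit. A toy sketch (the shape of `computeFn` and the keying are assumptions, not the real system):

```javascript
// A cache that can be warmed eagerly (e.g. on push) and falls back to
// computing on demand only on a miss (e.g. the visit came before the job ran).
function makeCache(computeFn) {
  const cache = new Map();
  return {
    warm(key, input) { cache.set(key, computeFn(input)); },
    get(key, input) {
      if (!cache.has(key)) cache.set(key, computeFn(input));
      return cache.get(key);
    },
  };
}
```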
F
Yeah, I mean, that was the whole thing.
A
You
will
incurring
in
paying
the
competition
price
before
the
MREs
even
requested
and
that
finding
something
to
be
considered,
because
now,
if
we're
generating,
we
have
to
run
if
we
have
to
do
a
task
for
every
coach,
regardless
of
VMware,
is
open.
We
might
be
doing
work
for
nothing
right.
The
between
two
pushes
they
might
not
be
rendered
at
all.
So
the
question
here
becomes:
when
does
that
become
a
positive
payoff
right.
A
Is it the specific response that this program will get? In that case, we could even prepare this in a way where Rails is not even involved in requesting that data. Well, we have privacy concerns, we have access permissions, that kind of thing, so we probably do need to go through Rails. But it would be just a read from a file and then a delivery to the browser; that should be much faster than querying databases and Gitaly and all that stuff.
F
Right. And, I mean, I'm just sitting here thinking about it: okay, well, maybe Rails isn't the right place, because Rails gets the event when a code push happens, but maybe that's not the right place to do it. Maybe it's scheduling; maybe Gitaly needs to respond back to let us know. Or there's another thing, and that actually kind of implies another service entirely, right?
F
Airing
spinning
all
day,
happily
highlighting
things
for
us,
but
it's
it's
it's
it's
like
if
you
had.
If
you
you,
we
have
a
truck
right
with
this
really
awesome
truck.
We
built
up
over
the
years
and
it's
super
awesome,
and
it's
it's
great
for
crawling
over
Boulders,
but
we're
really
worried
about
fuel
economy.
Well.
You've,
just
like
suggested
like
a
formula
two
card
now,
like
other
ideas,
it's.
F
Still
worry
about
fuel
economy,
it's
the
same
problem
but
like
is
it
as
much
of
a
problem
in
that
context?
So
I
can't
I'm
not
that
we
have
to
commit
to
one
of
the
other.
That's
what
they're
they're
exclusive
but
I,
don't
know.
We
have
real
limited
resources
to
to
kind
of
put
which
was
solving
any
problems
so
so.
C
The thing that keeps bouncing around in the back of my head every time we have these discussions is that we, well, at least we on the front end, have maybe taken for granted that we need the back end to do highlighting. And to me, highlighting is a front-end concern.
That is a thing where the back end should be delivering us the data: the diff files, or the files and the diff information, that kind of stuff, and the front end should be highlighting it. And in a couple of places we've experienced issues with a particular highlighting library failing when we don't have the full context of the diff, because it's in a weird scope of the file, and then the highlighting breaks because it doesn't know it's inside some structure, I think.
C
Maybe
we
should
think
about
a
little
bit
more,
how
it
could
be
possible
to
just
remove
highlighting
from
the
back
end
entirely
because,
for
example,
in
the
proof
of
concept
at
the
top
of
this
document
and
the
the
microcoda,
the
review,
the
highlighting
is
happening
on
the
fly
as
you
as
you
scroll
near
a
file.
It
gets
highlighted
instantly
and
otherwise
it's
just
blank
text.
C
So
you
can
still
do
the
control
F
in
the
browser
and
find
diff,
because
it's
there,
it's
just
hasn't
been
highlighted
yet
yet
and
I'm
not
sure
that
we
need
to
or
I'm
not
sure
that
we
should
continue
doubling
down
on
the
back
end
being
our
highlighter.
A
I remember reading, a long time ago in a computer science book, the history of thin clients and fat clients, and moving processing from the mainframe to the clients and then back to the cloud. It's the everlasting yo-yo travel of where the computation should go, right? And when we're talking about the web, we're talking about a diff that could be two lines that we need to show; with around two-plus lines of context, that would be maybe 20 lines and some things. So 20 lines of a file that could be, like, a three-megabyte raw file.
A
It's
a
highlight,
potentially
what
I
can
think
of?
We
can
have
the
raw
file
highlighted
still
on
the
back
end,
but
the
front
end
can
be
smart
and
be
the
one
requesting
the
intervals
we
need
from
that
highlighted
file.
But
such
a
such
an
intensive
task,
where,
theoretically
to
highlight
properly
the
diff
chunks,
we
should
highlight
the
whole
file
type
of
context
right
according
to
with
grammars
and
stuff.
A
It's
it's
it's
challenging
for
me
to
imagine
It
Again
by
the
way,
I'm,
not
thinking
on
a
context
of
a
UI
based
on
vs
code
on
a
tab,
interface,
I'm
thinking
you
interface,
like
the
one
we
have
in
depths,
which
is
a
list
of
file
changes
right,
that
we
have
many
of
them
and
we
need
to
make
it
performance
fast
and
all
the
Jazz
that
we
have
been
trying
to
do
in
the
past
couple
years.
A
To
me,
it
strikes
me
as
very
challenging
to
be
able
to
do
this
performance
performance
cleaning
on
the
front
end,
leaving
that
way
from
the
back
end,
at
least
the
initial
pass.
So
that's
what
I
understand
is
I,
don't
from
all
the
things
we've
seen
for
the
blobs.
We
got
away
with
that
because
we
do
have
to
request
a
Blog
anyway,
so
we
have
the
blog
entirely.
So
the
highlighting
is
kind
of
like
okay
to
do
that.
Yeah
we're
still
having
struggles
with
performance
on
that
side
by
the
way.
A
That's
another
part
so
relevant
here
and
then
there's
language
support
which
highlight.js
is
not
as
good
as
Rouge
at
the
moment,
and
that
could
be
just
a
potentially
time
will
give
us
better
support
on
the
front
languages,
but
we're
not
there.
Yet.
B
So
I
just
done
this
with
you
know:
clients
are
getting
faster
right,
I
mean
even
you
know,
there's
more
more
compute
sort
of
at
the
client
side
than
overall
at
the
at
the
seller.
B
Side
right
in
part,
so
I
was
wondering
whether
you
know
this
comes
in
the
realm
of
crazy
ideas
is
what,
if
we
don't
use,
highlight
yes
right
and
instead
we,
you
know,
we
use
some
fast
library
in
C,
plus
plus
something
like
that
to
to
do
the
highlighting
compilative
awesome
and
and
push
the
highlighting
to
the
vasim
piece
right
and
and
that
that
is
one
one
possibility.
A
We've
discussed
in
the
past
the
possibility
of
easy
web
assembly
for
such
a
intensive
task
that
we
could
forward
some
libraries
to
the
front
and
by
using
webassembly
we're
open
to
that.
The
question
for
me
always
comes
to
the
nature
of
the
web
right.
We
want.
A
We
want
our
users
to
be
able
to
load
VMR,
regardless
of
where
they
are
fast
networks,
lower
Networks
and
the
diff
sometimes
do
touch
on
very
large
files,
and
we
have
the
example
of
our
log
release
blog
post,
that
the
files
get
so
huge
that
vmrs
have
always
been
like
breaking
down
performance
wise
on
bad
Mrs.
It
seems
to
be
real,
larger
Mars
and
we
have
customers.
That's
all
they
do
like
all.
Some
of
the
customers
are
using
Mrs
to
revise
all
the
code
going
into
a
release.
A
But that means bandwidth. Having to bring the entire file over to highlight it, but then extract just a few lines, seems wasteful to me. That's my thing; I've entertained that for a little while.
C
I think that is the absolute worst case for highlighting on the front end, because these highlighters are pretty good. I mean, you didn't see it break down when you pasted MRs into the demo that I did, and that's with nothing special; it's just highlighting the text.
C
If
we
have
a
breakdown
of
like
what
it
doesn't
understand,
what
scope
it's
in,
so
it's
highlighting
incorrectly,
you
don't
need
the
full
file.
You
just
need
enough
to
give
it.
So
it
knows
what
scope
it's
in
right.
If
it
doesn't
know
it's
in
a
function,
you
just
need
to
go
back
enough
to
tell
it.
Oh,
hey,
here's
the
here's,
the
thing
that
you're
inside
of
a
function
or
a
class
or
like
whatever
you
don't
need
a
whole
file.
C
You
just
need
like
just
enough
to
give
it
the
right
context
and
that
introduces
a
whole
other
level
of
like
what
is
doing
what
is
what
is
doing
the
parsing
to
know
to
give
it
the
right
amount
of
context,
but
needing
the
entire
file
like
a
three
megabyte
file
or
whatever.
Theoretically
is
like
the
worst
case
scenario
for
highlighting
I.
Don't
I,
don't
think
you'd
need
that
always.
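One naive way to approximate "just enough context", sketched here with a crude indentation heuristic (not a real parser, and not anything from the discussion; real scope detection would need something smarter):

```javascript
// Walk upward from the hunk start until an unindented line, which in many
// languages marks the start of the enclosing function or class. Feed the
// highlighter from that line instead of from the whole file.
function contextStart(lines, hunkStart) {
  for (let i = hunkStart; i >= 0; i -= 1) {
    if (lines[i] && !/^\s/.test(lines[i])) return i;
  }
  return 0;
}
```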
F
Right, and obviously, if we know ahead of time that this file is so big, or it's a particular format or language or whatever, then we'll just do that in the back end. We'll just do, I don't know, a certain kind of Go header file in the back end, right? I mean, we just may not be able to come up with some magical solution that fits every single possible thing users can ever possibly throw at us. I think Kai was just sharing...
F
Today, a customer was like: oh yeah, we regularly have 9,000-line diffs per commit. And that's getting real close to our limit.
C
The question that has been bugging me for a long time: we keep talking about highlight.js, for example, which, by the way, is the one that I have used, and I think the source code viewer has used that one. Is that the right tool? I don't know. I mean, is that the right one anyway? We keep talking about how highlight.js can break down sometimes, and I'm always wondering: how does Rouge do it? Because this is not a...
C
So, I mean, that answer sounds right; it sounds fine. But it's so weird, because that means that Rouge has some code in it somewhere that just knows how to resolve these "I don't know what scope I'm in" bugs. If that's true, then theoretically we could just port those over to highlight.js and solve our problems.
E
I think some of it is the library, though, right? Like, it is the age of the library. You continually see issues opened in Rouge like "add support for this language, add support for that language", and highlight.js just doesn't have that yet. It was the same issue we had in the editor and all the Monaco-based things: syntax highlighting in all of the Monaco stuff uses, when I say it uses, like, a TextMate structure, which doesn't necessarily exist for every language either. And so...
E
We have to either contribute it back into, like, Monaco, or you have to give it different paths of where it can go find the syntax highlighting definitions. And so, like, I know the language gap between Rouge and highlight.js is pretty significant, like in the order of several hundred languages different. So, I mean, it is just time.
F
If whatever JS library is faster and better, and if it's this new paradigm we're building, and it covers 80 or 85% of our users, then great, let's do that. Because the other 15% are not going to notice any improvement, and they're not going to notice a decrease, right? You could continue to use Rouge on the back end for just the couple of weird, you know, whatever languages they're using.
E
Yeah, but give us a rough idea of how often we would fall back, and then sort of what the impact would be of shifting things around, like how many people would potentially benefit.
A
I'll
try
to
get
that
for
next
call
and
I,
probably
so,
which
which
you
should
capture.
This
discussion
seems
like
it's
a
separate
one
but
highlight
Thomas.
Did
you
open
this
option
again?
You
want
to
create
an
issue
first
to
capture
this
discussion
in.
C
Yeah
I'll
create
an
issue
I
also
want
to.
This
is
just
to
give
Kai
I'll
give
you
a
heads
up.
I
also
want
to
dig
into
the
supported
languages
and
stuff,
because
the
the
lists,
the
list
of
languages
visually
I
can't
I
can't
I
can't
do
a
diff
in
my
mind,
but
the
list
of
languages
supported
in
highlight.js,
for
example,
is
similar,
maybe
not
quite
as
large
as
those
languages
supported
Rouge.
C
So
I
want
to
dig
into
that
as
well.
So
I'll
open
an
issue
to
be
like
talking
about
I,
guess
the
how
our
highlighter
work
is.
It
just
highlight
JS
we
want
to
kind.
C
Do we want to roll performance into this? Like: can we do some testing? What's sort of the all-encompassing issue here?
B
I
think
we
should
do
some
benchmarking
as
well.
I.
Think
benchmarking
is
an
important
part
of
this,
so
understanding
different,
JavaScript
libraries
and
what
the
Benchmark
is
against
in
a
large,
or
this
rather
I
mean
there's
the
two
aspects
of
this
one
is
benchmarking
in
seconds
accuracy
right,
that's
the
two
two
Dimensions
you
understand
we're
going
to
measure.
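For the speed dimension, a bare-bones harness like this sketch would do; accuracy would need a separate comparison against a reference highlighter (everything here is illustrative, not an agreed methodology):

```javascript
// Time a highlight function over a set of samples and report total and
// per-sample milliseconds.
function benchmark(name, highlightFn, samples) {
  const start = process.hrtime.bigint();
  for (const s of samples) highlightFn(s);
  const ms = Number(process.hrtime.bigint() - start) / 1e6;
  return { name, ms, perSample: ms / samples.length };
}
```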
C
Yeah,
okay,
so
I'll
open
an
issue
just
talking
about
how
highlight
gets
to
the
front
end
and
focusing.
C
On
maybe
the
tool
that
we
want
to
use,
whether
it's
highlight
or
something
else.
A
Oh
former
Lord
in
there
we're
at
time
one
of
the
things
I
might
want
to
throw
in.
There
is
the
crazy
idea
of
using
webassembly
to
run
Ruby
in
the
browser
then
have
Rouge
run
there
by
you
know
installing
it
on
the
client
side
once
and
then
running
it
for
all
pages.
We
have
a
service
worker,
so
we
could
use
that
to
negotiate
the
loading
of
that.
That
really
stick
in
the
first
load
kind
of
thing.
A
But
yeah
great
discussions,
everybody
anything
before
we
go
I
should
just
call
it
push
the
button
all
right
thanks.
Everybody
great
chat,
have
a
great
week.