From YouTube: 2021-07-15 Frontend Performance Discussion: Source Editor (with Monaco) in Blob viewer refactor
Description
On July 15, 2021 we had a discussion about recently shipped improvements and the upcoming work; namely, a simple CSS improvement for LCP that Denys is hoping to generalize to other parts of the application. Worth a watch!
Agenda:
https://docs.google.com/document/d/18qidhvZksXJ7TxdgmJQEJB1IVISr-aSki00bONhbAf4/edit
A
Okay, hi everyone. This is our meeting to talk about performance of the Source Editor usage in the blob viewer, inside the repository browser Vue app, and today we start off with Denys. Denys, what have you got for us?
B
Yes, so first of all I have two updates. The first one is that we have dramatically improved loading of large blobs.
B
The merge request is linked in the agenda; this is the fix for the Haml application for now. I had a call with Jacques to figure out how long it would take for us to fully move to the Vue application, and it seemed to be a perspective of two or three milestones, so I think it was not viable to wait for that.
B
Especially considering the very simple fix there. The main issue was to figure out where the problem was: on the front end or on the back end. On the back end the problem is obvious: a significant amount of time is spent building this response, well-formatted HTML, for the front end. That part we cannot fix for now, unless we switch to Source Editor for viewing the blob. But there was something we could do on the front end.
B
We were outputting everything on the screen at once, and I started playing with JavaScript, trying to figure out some smart solutions using DocumentFragments and things like that. It's still possible to go further down that path, and that's probably what we will do for the next-step improvement. But the thing is that I came up with a very, very simple solution, one so trivial that I couldn't believe it.
B
It was just two lines of CSS that brought LCP down by more than 70 percent for that view. The cool thing is that we can reuse this approach in other parts of the application, including the situation we were talking about yesterday, when we render the rows for a repository: when you get to a repository with a lot of files and we have to output all of them. The approach your group has taken...
B
...is this incremental load, the incremental page-size increase, and that should work great. However, it's JavaScript. I haven't dived deeper into the MR implementing it, but technically, for simplicity, you could also apply the same technique with just pure CSS and be done with it.
B
So that's what I was thinking about yesterday: generalizing this approach, probably using some utility class on an element and then giving instructions on which elements to hide and which to show. Maybe I will come up with something like that. The only thing to keep in mind is that this solution is specifically for improving LCP.
B
It doesn't affect Total Blocking Time, and it doesn't affect Last Visual Change or Fully Loaded. Technically, Last Visual Change and Fully Loaded might even get worse, but we don't care about those as much. Total Blocking Time might be crucial, but this solution shouldn't affect it; it should stay at the same level it was before this fix. So yeah, this is LCP only.
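The transcript doesn't show the actual rule, but the mechanism described, hiding everything past the first screenful behind a guard class so the browser only lays out what contributes to LCP, can be sketched roughly like this (class and selector names are invented for illustration, not GitLab's actual code):

```javascript
// Hypothetical sketch of the CSS-only LCP trick discussed above: while a
// guard class is present on the container, every line past the eagerly
// shown ones (except the very last, which keeps the line-number column at
// its final width) is hidden. Removing the class later reveals the rest.
function buildGuardRule(guardClass, eagerCount) {
  return (
    `.${guardClass} .line:nth-child(n+${eagerCount + 1}):not(:last-child) ` +
    `{ display: none; }`
  );
}

const rule = buildGuardRule('blob-lcp-guard', 70);
console.log(rule);
// .blob-lcp-guard .line:nth-child(n+71):not(:last-child) { display: none; }
```

The point of the guard class is that removing a single class later (off the critical path) flips all the hidden rows visible at once, without any per-row JavaScript work during the initial render.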
A
This sounds really good, and it would be great to see this turn into a more generic utility, because then it would be easy to circulate it across the team. I like the cleverness of it; I love it. I especially like the fact that you wrote more lines of comments in the CSS file than actual lines of CSS.
B
You know, if the comment has more lines than the actual solution, then the comment is probably really important.
A
No, this might actually be good for when we're tackling hard pages, where it's hard to lower the LCP. This is definitely the way to go on the repository browser; yes, we should definitely try this approach there as well. The repository browser has a different kind of requirement, though, which is that we load content into each row at different steps.
B
Right, and this is the tricky part: every time you put new data in the table, you fill out one column, you fill it row by row, and every time you paste content into a cell (not the row, every single cell), a style recalculation gets triggered.
A
It's recalculating the layout, right?
B
The layout doesn't get affected, but we need to recalculate the styles, correct, and that's costly. And then, if you apply this CSS technique, you will hide the lines that are not shown, while still being able to put content into those rows.
A
However, then we will be withholding the content that users really care about, which is the files. I would have to see and play around with it, but keep in mind that the files are the one thing they really care about; the rest is sugar on top. Sure, but I would really like to see how this technique would apply to the repository file list.
B
Yeah, I'll try to find time, because it would be really interesting to see this in action and make it more applicable. If this is really a sustainable solution, then...
B
This is the trick with requestIdleCallback: the solution is still CSS-only. The things we delegate to requestIdleCallback are, first of all, the removal of the class that protects those CSS styles, correct. Once the class is removed, we do show those lines. That's the combination of the thing: we output the first 71 lines as soon as possible, 70 lines plus the last one, because there is a thing there...
B
...that might not be very obvious: why do we output the 70 lines and the very last one? This is the trick to prevent unnecessary layout shifts. When you output the blob, the first column, the one that shows the line numbers, has an automatic width. So if you output 70 lines, it will have a width that fits only two digits.
B
With the last line rendered, the column will take its full width, and it won't shift the layout when we output the remaining lines.
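The "70 lines plus the very last one" trick can be captured in a tiny helper. This is a hypothetical sketch, not GitLab's actual code: rendering the final line up front means, for example, a four-digit line number reserves the full column width immediately, so no layout shift happens when the rest streams in.

```javascript
// Return the line numbers to render eagerly: the first `eagerCount` lines
// plus the very last one, so the auto-sized line-number column is laid
// out at its final width from the start.
function eagerLineNumbers(totalLines, eagerCount = 70) {
  const count = Math.min(eagerCount, totalLines);
  const lines = [];
  for (let i = 1; i <= count; i++) lines.push(i);
  // The last line is what forces the column to its widest digit count.
  if (totalLines > count) lines.push(totalLines);
  return lines;
}

console.log(eagerLineNumbers(9000).length); // 71: lines 1..70 plus 9000
```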
A
Good, good; I would love to read that. We also have a couple of examples on the merge request page that we've been using, and again, there are different layers of effort. Virtual scrolling is a different one; it comes with different pros and cons, so yeah.
B
The reason why I couldn't use, for example, virtual scrolling here, or any other sort of reaction based on scrolling or on visibility of the elements, is that for the blob view we still have to support linking to a line. If we introduce virtual scrolling, we won't have the line that we are linking to.
A
The way we handled that on the merge request page was that we basically had to update the routine that does the jump to the line, because we already had the files being loaded in batches. So it could happen that at the time you load the page you don't have the line yet; you have to go to the line, render that line, and then jump after that. It gets complicated really fast. But this is a really nice low-effort, I mean low effort in terms of...
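The jump-to-line routine described above can be sketched like this. The callback names are hypothetical stand-ins for the real app's internals, and in the real app the render step would be asynchronous:

```javascript
// Sketch of jump-to-line with batched rendering: if the anchor's target
// line is not in the DOM yet, force rendering up to that line first,
// then scroll to it.
function jumpToLine(lineNumber, { isRendered, renderUpTo, scrollTo }) {
  if (!isRendered(lineNumber)) {
    renderUpTo(lineNumber); // in the real app this step is asynchronous
  }
  scrollTo(lineNumber);
}

// Stubbed usage: only 50 lines rendered so far, jump to line 120.
let renderedUpTo = 50;
jumpToLine(120, {
  isRendered: (n) => n <= renderedUpTo,
  renderUpTo: (n) => { renderedUpTo = n; },
  scrollTo: (n) => console.log(`scrolled to line ${n}`),
});
```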
A
This was really clever on your part, and we're looking forward to seeing it generalized. Moving on to Monaco...
B
I don't think so. I don't think so; not that I saw.
B
It might have become bigger, but the reason it became bigger is that there is one more step we should do for this process to actually have a positive effect on the bundle size. The reason is that there are several places in the codebase where we intend Monaco to be globally available, that is, for the window.monaco object to be available, correct. And for that we explicitly tell webpack to expose the global API by default.
B
But if we update all the instances where we require the global Monaco to explicitly import Monaco instead, then we will get a smaller bundle size, because the Monaco chunk will be reused by the several places that need it, and we won't need to pollute the main bundle with this global API. It will just live in separate chunks.
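The change described, dropping reliance on a global `window.monaco` in favor of an explicit import so the bundler can split Monaco into a shared, lazily loaded chunk, might look roughly like this. This is a sketch, not the actual GitLab code; `el` and `options` are placeholders:

```javascript
// Before: depends on the bundler exposing Monaco globally, which forces
// the global API into the main bundle.
// const editor = window.monaco.editor.create(el, options);

// After: an explicit dynamic import lets the bundler place Monaco in its
// own chunk, reused by every consumer and loaded only when needed.
const monaco = await import('monaco-editor');
const editor = monaco.editor.create(el, options);
```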
A
That's good. Even if we don't have it, we can create it, but I think that would be a nice effort. Yeah.
B
I mean, right, here we go: I will paste the link.
A
All right, it's 14.3. Your manager is David, right?
A
Sweet. I'll just put a little bit of pressure there, just to get you time. Cool, thanks for that; that's nice! Anything else you wanted?
A
Cool, so then I'll go on to my point, just giving a bit of a status on the work that we're doing on the refactor. Work continues to progress well; the tip that Tim gave us in this meeting a couple of weeks ago was really useful.
A
We shipped this issue that I have on the agenda, where we basically found a way to import all the Haml viewers, the ones that still, you know, request HTML from the server and run the scripts they need to run; we imported all of them into the Vue app. I think there's probably one of them that was added recently, like the CSV table, where we might have to go back again and import it into the Vue app, but that's just a small detail. However, we did manage to ship a bunch of...
A
...viewers and pull them into the Vue app, which makes it very convenient for us: if we want to enable the app, we'll be able to render the legacy viewers inside the Vue app without having to go to the Haml page. It does beg the question of whether it still warrants a refactor of the viewers; we will look at each one of them and we'll refactor the ones where it's beneficial.
A
But this was very good progress, and we also already shipped the text viewer and the empty viewer. We're also working on the download viewer this week; hopefully it will go in soon, and then we have a bunch more scheduled.
A
Meanwhile, the backend on Source Code was focused on something else; they're working on the gitlab-sshd work, which is very meaningful. What we tried to do as a group is that we came together and made shipping the remaining viewers of the refactor an OKR. This will put the backend priority back into this topic as well. And where is it... trying to get to the... just a second.
A
Right, so one of the things we had missed was the auxiliary viewers, so we'll be supporting that, plus a couple of tweaks on the back end. So progress is going well, in the sense that we're finding time to work on these things and getting priority agreed with Product and backend, which is great.
A
We're not enabling it right away yet, because we want to make sure that we have a complete experience and we don't break any meaningful features. But potentially in like two milestones we'll be in a good position; so 14.2, or I'm not sure if it's 14.3 or 14.4, something along those lines, we'll probably be in good shape to turn it on for customers. That's where we are at. So this covers my two points, making this the OKR for quarter three. Okay.
A
I will be focused on this to get backend buy-in. And I just wanted to check about testing, just to make sure that we have all we need. From what I understand, Monaco is being loaded, or preloaded, or prefetched, whatever it is, on the Explore page at least.
A
Prefetched, yes. Because that way... so there are two concerns here. One is the user experience, and that always comes first; as long as the user experience is better, I'm happy. But we should also spend some time ensuring that the tests will reflect the improvements, or reflect what the user usually sees. So in that sense, we want to make sure...
B
Wait a second, wait. Yeah, we do.
B
One is the general dashboard driven by the team. Yeah, that one.
B
That one doesn't go through the Explore page; it goes straight to the page, because we are measuring the non-cached version. Right, we're doing cold-cache testing there. The other setup is the one driven by the Quality team; there we go through the cache, and that's exactly where you saw the Explore page, in the video you posted yesterday. We go to the Explore page, we cache the assets, and then we go to the required page, and that's where we measure things. So technically...
B
There is no good answer as to which one we should prioritize, but all of our efforts are measured and assessed against the default dashboard given by the team. That's where we have our LCP leaderboard, and that's where we get our numbers from.
B
I mean, when we get, for example, the OKRs of moving a page to under...
B
...the thing is that engineers in general have no idea that we have the 10k setup.
A
Right, right. Yeah, the team at large... usually the way I've seen this happen is that we'll focus on the one you just mentioned, the team one, and then eventually, when we're creating issues from the Quality team, all of those timings will be in reference to the Quality 10k-architecture one. But yes, I think we should focus on both, at least. We should probably have a task to check how we can make sure of that.
A
Give me a second. One of the things we did for virtual scrolling was that we created a query-string parameter to enable virtual scrolling via the URL, and that way we were able to test that feature, in flag-off mode, on SiteSpeed, because we just created an entry with the query string saying virtual scrolling equals true.
A
We should do the same thing for the blob viewer, and I'll probably create an issue for it, just to spec what we need: a way to force the feature flag via the URL. Once we have that, we'll be able to add the entry.
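A minimal sketch of that idea, reading a query-string parameter to decide whether a flag is forced on, so a SiteSpeed entry can hit the page in either mode. The parameter name here is invented:

```javascript
// Return true when the given query string explicitly forces the flag on,
// e.g. https://example.com/blob/main/app.js?source_editor_blob=true
function flagForcedByUrl(search, flagName) {
  return new URLSearchParams(search).get(flagName) === 'true';
}

console.log(flagForcedByUrl('?source_editor_blob=true', 'source_editor_blob')); // true
console.log(flagForcedByUrl('', 'source_editor_blob')); // false
```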
B
So, from what I understand, this particular issue is not related to the Source Editor per se; it's just for testing with the feature flags on and off, right? Because...
B
When testing inclusion of the Source Editor on the blob view, you will have two testing environments by default: one on 10k, with a cache, and one on the default dashboard, on a cold cache. So by the time it goes live, technically...
B
...the numbers should be more or less comparable, because since we do not preload or prefetch Source Editor on the Explore page, essentially you will get the cold cache as regards the Source Editor on 10k as well. If we have to test this with a hot cache, it's going to be one simple merge request to prefetch Source Editor on the Explore page.
B
The reason I didn't do this from the beginning is that, from what I understand, real users do not land on the Explore page that often; they either land on the project page or on a group page, and then they navigate through to the project. So I didn't see any point in prefetching it on the Explore page.
B
Yes, yes, but they will still land on the project page, on the repository page, before they land on any page involving a Source Editor. That's the point.
A
Yeah, I get that. Okay, it's really weird, because...
A
First, you can have a preference for your homepage contents. I'm just checking the preferences: you can choose between your projects, starred projects, your projects' activity, starred projects' activity, your groups, your to-do list, your assigned issues... there's a bunch of options you can have as a home page, which is the one you get after signing in. But if you follow a link and then you sign in from that page, I think you're thrown back to that page. It's really tricky to guess where the user will land after login.
B
The point is that we can easily include Source Editor in the hot-cache testing on the 10k; that's not a problem. But we still have to watch out for the default dashboard, to know how it behaves on the cold cache, because whatever improvement we make for the cold cache...
A
Okay, we will definitely have a second entry; we would not be just using the one. But you have a point. Maybe we should add an entry right now, with the cold cache, to SiteSpeed, so that we can see how bad a picture it is, because we do expect that: more data to load, more time loading assets.
A
But we don't know the contrast in terms of metrics. So what we can probably do is take the same project that we already have and add an entry to it enabling the blob viewer using Source Editor, because that way we'll be able to contrast the metrics as they are right now, without tweaking the cold cache, without changing anything, just as it is, enabling the mode, and then we'll see.
B
So, okay, if I understand this correctly, you want to have a feature flag, and behind this feature flag you want to make the replacement of the viewer with Source Editor right now, to see how big the impact is without moving to Vue or doing anything else. So just the replacement.
A
No, no, we already have that; it's already merged, right? So we already have a way, and I think Jacques has a testing project where he has enabled it.
B
Yes, there is this... the test project does include Source Editor, actually, I think. The problem is that it would also be beneficial to separate the things: the default view as it is now, then the default view, the Haml application, with Source Editor, so we compare those, and then Source Editor in the Vue application.
A
Can we separate the two? I don't think we can separate the two.
A
Is it ready right now? Because I've seen one demo. I think I have it... wait, give me a second.
B
We discussed this with Jacques. There is one... I'm trying to find the link. The thing is that he's using Source Editor as the Vue component, and that makes sense. The problem is that he needs to add at least one extension to that instance, and the Vue component for the Source Editor has a known bug with applying the extensions. So what I suggested to him is to convert this to a plain JavaScript implementation.
B
Instead of the Vue component, and then include the required extension, the extension that is responsible for linking to a line. When you include the Source Editor as the Vue component by default, you don't get linking to a line out of the box; you have to add the base extension.
B
Okay, and that's the point that doesn't work that well in Source Editor. I'm planning on getting back to Source Editor in the next couple of milestones. We had a meeting yesterday...
B
...the group meeting, and it seems like the vector is going towards more Source Editor work in a couple of milestones. So, okay.
A
No, I found what you were saying about the line. So, just so that we have it on the call, this is the example that I'm talking about; I was trying it out. Tell me when you see my screen. You see my screen?
A
But yeah, like you said, you can't link to a line, so that makes sense. Trying to think...
B
That's the bug that I'm going to fix. I'm probably not going to squeeze it into 14.1, but it's going to be fixed.
A
It's not seen by users, but it will be useful for our instrumentation, so I think that would be smart. Okay, I'll find a way to have this enabled via the URL, and then, instead of just using this dummy test, we'll go and use the one that's inside SiteSpeed; we'll just enable the mode there and we'll be able to contrast the two modes.
B
Right, okay, that makes total sense, and then we could track the progress. This is the challenging part now: since we are behind the feature flag, we cannot really track performance changes in the implementation that Source Code is working with, right.
A
Yeah, I'll take care of that. I'll create an issue and see if we can put it in as a stretch for 14.2; it's usually a very straightforward thing, so we might be able to do it. I'll give you an update in two weeks, hopefully with the numbers. Actually, I'll just share it on Slack once I have it, but we'll talk about it here as well. That's it; I don't have anything else to talk about right now.
A
I think we're in a good spot; we're progressing fine, and we're seeing things roll out, which is great. Thank you so much, Denys, for the help you gave on the large blobs. It's heavily appreciated and it helps a lot. So thanks for that, and looking forward to seeing that generalized.
B
Yeah, about that: just keep in mind, this was the severity-four to severity-three transition... or, I mean, the severity-one to severity-two transition, right. And now there is another issue, to transition from severity two to severity three.
B
Okay, but we are very close to the lower border of severity two, so we're close to getting to severity three. Right.
B
From severity three we could actually, I assume, even get to severity four, if we make the replacement of the Haml, of this server-side-rendered content, with Source Editor right away.
A
Those are the numbers we need to see, because if we have to load Monaco, that's the trade-off where it's going to take longer. Is that better? Is that faster or not faster overall? We have to see with a real example.
A
But yes, severity two to severity three is usually a less urgent move than from one to two, and from three to four it's the same thing, but we should definitely keep pushing. Right now I think the next thing would be trying that out on the repository list; that would be awesome, and if it's done in a generic way, then you can present it to the rest of the team to start using this everywhere. Yeah, sounds good.