From YouTube: Web IDE Integration Tests discussion
Description
Create::Editor session discussing the Web IDE integration tests and potential ways of improving the testing environment for the Web IDE.
A: Hello, we're at the Web IDE integration tests discussion for the Create::Editor group, and that's what we're going to do today: talk about the integration tests for the Web IDE. Recently the integration tests have shown some unpredictable failures, at least for me. Speaking for myself, I've found some of them hard to maintain and hard to understand. In particular, most of the confusion and most of the complexity is related to the way we deal with Monaco, apparently, and how Monaco notifies the end users, and in this particular case the tests, about different lifecycle events: what happens, and when does it happen? From that, I've experienced a lot of race conditions. Especially when one touches the Web IDE and wants to make some changes, our integration tests bite really hard, because things might not happen when we expect them to happen, due first to Monaco being not very communicative, and second to our tests not listening very attentively to what's going on. So this meeting is mainly for us to discuss which issues are the most critical.
A: So that's my pain, especially in light of the merge request that Paul is reviewing at the moment. To give you perspective: I started that merge request originally about six months ago, and pretty much all of the integration tests failed for me back then. Gradually I was trying to solve them, but it was pretty scary; fixing so many integration tests at once was not an enjoyable path. We're in a good spot now, and I think in that merge request we fix one particular issue with the race conditions, but I would like to hear what your experience is and what you think we could do to make the integration tests easier to understand.
B: So there are two issues, and Dennis introduced both of them. I was trying to find a link to the comment about the integration test failure that you ran into, but it has to do with Monaco for sure; something about Monaco cleaning things up, that not happening when we expect it to, and the fact that individual tests aren't sandboxed.
B: The test files themselves are all sandboxed, but the individual tests within a file are not sandboxed from each other. If we're doing something that expects things to be cleaned up, but they're not, that's going to cause problems, and really-hard-to-debug problems. That's kind of what you're running into; it's a very similar situation, and the similar part is the very-hard-to-debug problems.
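One minimal way to make that cleanup explicit, rather than relying on Monaco's implicit teardown, is a per-test registry of disposables. This is just an illustrative sketch under assumed names (`createDisposableRegistry` is not an actual GitLab test helper); it shows the idea of disposing everything a test created before the next test runs:

```javascript
// Illustrative sketch (names hypothetical): track disposable objects
// (e.g. Monaco models) created during a test, so an afterEach-style hook
// can dispose of them deterministically between tests.
const createDisposableRegistry = () => {
  const disposables = new Set();
  return {
    add(disposable) {
      disposables.add(disposable);
      return disposable;
    },
    // Call between tests so one test's leftovers can't leak into the next.
    disposeAll() {
      disposables.forEach((d) => d.dispose());
      disposables.clear();
    },
    get size() {
      return disposables.size;
    },
  };
};
```

A test would then wrap anything it creates in `registry.add(...)` and call `registry.disposeAll()` in its teardown hook.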
B: Related to what Dennis was talking about: Dennis, we were changing the Web IDE in such a way that the way we started up the IDE changed, and our integration test had to change, because our project data was coming from a different place. But when the project data had some inconsistencies with other things that the test was setting up, the results don't point anywhere near what the problem is. It's hard to debug, and you just kind of have to do it: try to be clever and figure out where the problem could be. It's hard, and I don't totally know. I'd really like to know what we could do to make it better, because we want this.
B: I think the main problem is: when there's a problem, ideally the error should point to what needs to be fixed. That's definitely a significant aspect of it. The two problems are related in that when there's a problem, it's really hard to tell what it is; it's hard to debug.
B: But then, two: when an error happens, it's oftentimes not clear why it's failing and where it's failing. Dennis's case was really weird, but thankfully we were able to figure that one out. I haven't figured out yours, David.
A: There's also one thing to keep in mind. I'm not sure, and I haven't searched for any evidence of this, but it might be that part of the problem with this cleanup and tearing down between tests is related to the fact that we are not dealing with a real DOM in the tests. So this synthetic DOM might be a factor.
A: We have to make sure that tests are easy to understand, and that, combined with what Paul mentioned, a clear indication of where things fail, would make things much easier and much more maintainable. Because if we want to provide better coverage for the Web IDE, in integration tests in particular, it would be nice to have a more user-friendly integration-test environment for the developers. But one of the possible things that might help us, or might not,
A: I don't know, we have to try. It's the thing that I mentioned to Paul: there is one really redundant thing with Monaco that we have now. In the latest versions, Monaco by default has to be explicitly imported by the end user. So wherever you want to use Monaco, you have to explicitly import it. This was not the case prior to our recent upgrade of Monaco. Previously, Monaco was a global object.
A: It was sitting in the global namespace, and that's what quite a few places in our code base still expect. So while upgrading Monaco, we had to cheat a bit, because the Monaco webpack plugin provides a way to cheat with this: we specified a parameter in our webpack configuration so that webpack would build Monaco as a global object.
A: So that's achieved by the Monaco webpack plugin. We now have Monaco as a global object, but there are also places that explicitly import Monaco. So there is quite a mess with how we treat Monaco, and I think one of the very first things we have to do is actually unify this and make things clear, so that we sort of eliminate any chance of mistakes due to us messing with the global namespace.
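The "parameter in our webpack configuration" being described sounds like the `globalAPI` option of `monaco-editor-webpack-plugin`; the sketch below shows roughly how that looks, under the assumption that the plugin version in use supports this option (the exact GitLab configuration is not quoted here):

```javascript
// webpack.config.js — sketch only, not GitLab's actual configuration.
const MonacoWebpackPlugin = require('monaco-editor-webpack-plugin');

module.exports = {
  plugins: [
    new MonacoWebpackPlugin({
      // With globalAPI enabled, the plugin exposes the Monaco API globally,
      // so legacy code reading `window.monaco` keeps working even though
      // new code does `import * as monaco from 'monaco-editor'`.
      globalAPI: true,
    }),
  ],
};
```

Unifying on one access style (explicit imports) would let this flag be dropped.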
B: And I think there's a strong possibility that's related to the issue Dave was running into, because when we were pairing, we saw something that looked like two Monacos, and I think that's got to be related. Yeah.
A: And in such a scenario, what information they share and what information is scoped is totally unclear.
A: Once we create a model, we cannot be sure which editor this model goes into, and this means that we might dispose of a model in a completely different Monaco that doesn't even have that model. Yeah.
B: Taking that specific problem and abstracting it: the integration tests inherit the difficulty of the thing they're trying to test, and we're trying to test the entire Web IDE in integration. And because it's not our Capybara environment, we also want to fail (and this is a good thing) if we do something like a console error or anything like that; we're going to fail, we're eagerly waiting to fail.
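That "eagerly waiting to fail" behavior can be sketched minimally; the helper name `failOnConsoleError` and the approach are illustrative, not the actual test-setup code:

```javascript
// Illustrative sketch: make any console.error fail the test by throwing.
// `failOnConsoleError` is a hypothetical name, not GitLab's test setup.
const failOnConsoleError = () => {
  const original = console.error;
  console.error = (...args) => {
    original.apply(console, args); // still print the error for debugging
    throw new Error(`Unexpected console.error: ${args.join(' ')}`);
  };
  // Return a restore function so a suite can undo the patch afterwards.
  return () => {
    console.error = original;
  };
};
```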
B: And the Web IDE has a freaking lot of complexity, like with loading Monaco, and we have all the different tabs and stuff; the life cycles of those things have to be managed. I think one of the things that makes it a big challenge is that the integration test is not doing a whole lot other than just bootstrapping the Web IDE in a non-browser.
B: And then the Web IDE does its thing, which makes it hard to debug, because the Web IDE has a lot of complexity and things happening asynchronously. Some things aren't cleaning up. It just requires hard debugging, and yeah, it's a challenge. I think there are some things we could do to improve that, one hundred percent.
B: What Dennis is talking about, cleaning up our references to Monaco, and making sure we have good unit test coverage for whether we're actually disposing of things correctly, would be helpful. But then, three:
B: One of the problems I often run into when I'm using this is: we've opted to use Testing Library.
B: And I might be a little biased, but I kind of freaking hate this library. One of the reasons I hate it is that when it fails to find something, it says "hey, we couldn't find this" and dumps all of the HTML of the entire document. I had to add something so that it wouldn't do that. By default, the error reporting of Testing Library is really poor, and it's hard to find out when something isn't found or whatever. I'm not super pleased with that library. Maybe there's...
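For reference, Testing Library does let you replace that "couldn't find this, here's the whole document" error via `configure({ getElementError })` from `@testing-library/dom`. The truncation policy below is a sketch of one way to use that hook, not the change that was actually made:

```javascript
// Sketch: build Testing Library "element not found" errors with a trimmed
// HTML snippet instead of the entire document. `getElementError(message,
// container)` matches the hook shape configure() accepts; the truncation
// length and wording here are our own illustration.
const MAX_HTML_IN_ERROR = 300;

const getElementError = (message, container) => {
  const html = (container && container.innerHTML) || '';
  const snippet =
    html.length > MAX_HTML_IN_ERROR
      ? `${html.slice(0, MAX_HTML_IN_ERROR)}… (truncated)`
      : html;
  const error = new Error([message, snippet].filter(Boolean).join('\n\n'));
  error.name = 'TestingLibraryElementError';
  return error;
};
```

It would be wired up once in test setup with `configure({ getElementError })`.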
B: You know, we're getting asynchronous responses back and everything, and Testing Library will actually just, kind of like Capybara, wait for the thing to show up. It was maybe a couple of months ago when I tried to change the error reporting on it; that may have helped, but I have not done the activity of: let's make a test fail, see what error shows up, and check whether it's the error that would point to the issue.
B: Maybe that's going to highlight some things we could do, and maybe one of those things is that we need to add some more hooks into Testing Library, or we need to use something else. But that's one thing I know: historically, I've not been pleased with the error reporting of Testing Library.
A: When it comes to Testing Library, it's also clear in the code that some of the integration tests were flaking to the point that we had to increase the timeout for the integration tests.
A: So this is a good indicator that things are probably not going 100% correctly. I'm just wondering whether things like this are still related to, and attributable to, the fact that we do not know when things are happening in Monaco; that we do not listen to particular Monaco events. I'm asking because technically we could say: okay, one of the possible things for this would be to...
A: Since we are using Source Editor in the Web IDE, we could technically make Source Editor communicate all sorts of hooks to us, to use in the integration tests, and then it would be up to Source Editor to detect when to communicate those, based on the underlying Monaco events.
A: So that would be one of the things to implement: have a clear API from Source Editor, in order to not dive into the Monaco realms and not test the internals of Monaco.
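The kind of hook layer being proposed could look roughly like this; everything here is hypothetical (it is not the actual Source Editor API), it only illustrates tests awaiting lifecycle events instead of polling Monaco internals:

```javascript
// Illustrative sketch (all names hypothetical): a thin event layer a
// Source Editor-style wrapper could expose, re-emitting editor lifecycle
// events so integration tests can await them deterministically.
const createEditorHooks = () => {
  const listeners = new Map();
  return {
    on(event, callback) {
      if (!listeners.has(event)) listeners.set(event, []);
      listeners.get(event).push(callback);
    },
    // The wrapper would call this from the underlying Monaco event handlers.
    emit(event, payload) {
      (listeners.get(event) || []).forEach((cb) => cb(payload));
    },
    // Tests await a lifecycle event instead of polling the DOM.
    waitFor(event) {
      return new Promise((resolve) => this.on(event, resolve));
    },
  };
};
```

A test could then do `await hooks.waitFor('modelCreated')` before asserting.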
B: Yeah, let me share some context on the original objective, and still the objective, of the integration tests. Before we had these, we had some significant missing test coverage of the Web IDE. This was the really big warning sign: we had a ridiculous amount of unit tests, but highly visible bugs and user regressions were showing up because of unrelated changes and things like that. It's just because of the nature of it: we need to test these things working together.
B: We didn't even have any feature specs, so we added one feature spec, and then we identified: okay, we want to test these test cases, so we're going to add a few more feature specs. After adding five feature specs, we ended up creating the longest-running RSpec job: just five feature specs of the Web IDE were running at some ridiculous length, like 20 minutes or something, just because of how slow Capybara is and how front-end heavy the Web IDE is.
B: These tests need to be pushed down the pyramid, hence the integration tests. Ideally, we would like the tests themselves to not know about Source Editor, to not have to interact with anything like that; like in Capybara, where it's just "click button", "enter text". We want it to be like that, but way faster. That's the main goal.
B: But yeah, it's a lot. There are issues: even when running feature specs, you get some weird failures that are hard to debug, and sometimes you have to debug those. So it inherits those problems of just testing things in integration. But I think there are definitely opportunities we could take to make it easier.
C: Correct me if I'm wrong, but it does sound to me like we're roughly in alignment on the idea of standardizing which kind of Monaco we're using, global versus an explicit import, and that's a good place to begin. And then we can look at trying to find a way to lighten the load of the front end of the Web IDE; that possibly might be a good approach to take, as opposed to trying to speed up the actual testing itself.
B: One hundred percent what Dennis is talking about: I think we need to unify our Monaco references. That's definitely one thing. I'll just create a general issue for us to track this. It's also worth us having an issue to improve error reporting in the integration tests.
B: Let's get examples: this failed locally, this was why, and this was the error message, this is what happened. It's worth us collecting those instances, so we should just have an issue there and be on the lookout for how we can improve error reporting. The goal is to make it as simple as Capybara, which sometimes isn't very simple.
B: So it's not going to be as seamless as possible; just keep in mind the nature of it, because testing everything is going to imply some issues. But my experience is, when it's failed, besides some timeout issues and the issue we ran into, David...
B: ...even the issue you're running into, David, if it is fixed by "hey, we have duplicate references", then it's catching real issues with the Web IDE. And that's one of the challenges too: our Web IDE is just so complex, and not complex in a good way. It just does a lot, a lot more than it probably should, and not in a clean way. So reducing the complexity of the Web IDE, or improving our own error reporting in the Web IDE, could probably go a long way.
A: That's true. Clearly, the complexity of the underlying thing being tested doesn't make things any more reliable. I'm not sure whether we have any problem with this, but...
A: Clearly indicating whether the problem occurred in the Web IDE itself or in the testing environment would be a good starting point as well. Because if a problem happens somewhere in the Web IDE, then it's one thing, and we should be able to reproduce it by just click-testing, probably. If it's a problem in the testing environment, then there might be several issues, but as experience tells us, more often than not we have some race conditions in the testing environment.
A: So yeah, I think starting with unifying the Monaco imports would be a very good start. And the third one, David, is so meta: it's "reduce Web IDE complexity". It's just like, well, apparently...
C: So the way I'm thinking about this is: the first two are good, concrete actions that we can take kind of immediately to dig into this, and the third one, for me personally, ties in a little bit to the Web IDE state management redesign. I think that will solve some of the complexity and, if we implement some better error reporting, should give us a really good place to start. So I think just by continuing in that direction, we're kind of going the right way.
A: I was thinking about this reducing of Web IDE complexity in the context of (oh man, pun intended) in-context editing, and the gradual editing experience that we discussed during our Think Big session on the editor.
A: So, as Paul presents it, one of the crazy ideas would be to reduce Web IDE complexity by actually splitting this huge application into smaller ones. Like, as the very basic split: the actual editor part as one application, the navigation tree as another application.
A: The problem is that we don't only have the navigation tree; we also have the tabs, and that adds some uncertainty to this puzzle. And then the status bar at the bottom could be a third application. So we'd have some very basic split, but that would already reduce the complexity and would give us a way to better unit test things, probably, and then it would help us to catch the issues at the unit level. But in general...
A: Maybe this would help us to reduce complexity in the code base, would improve the maintainability of the Web IDE, and would magically solve our issue with the maintainability of the integration tests. I don't know, this is just an idea.
B: Yeah, I think the more we can distinguish responsibilities the better, whether we use components or separate Vue applications. But right now everything is sharing everything, and part of that is because it all comes down to: we all use Vuex, everybody's sharing state. That has its win of everyone having the same state, but it also has its downside of everyone having the same state, and so that state is doing way too much.
B: I think that's kind of the goal: we could split those responsibilities up through Vue applications, but somehow the state still needs to be there. When you do it through Vue applications, you're going to have different stores; whether we use Vuex or whatever, you'll have different state stores. And so that's the main target: all our complexity is rooted in our Vuex complexity, and how can we improve that?
B: It would be interesting, maybe for the status bar, to split it out and have it use something like Apollo state, receiving events from the Vuex store to stay up to date, or something like that, because the status bar doesn't ever change the state, I don't think. So having that reflection of the state be a separate thing would be really interesting. Yeah, I don't...
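The status-bar idea, a view that only reflects state and never mutates it, can be sketched in a few lines. This is purely illustrative (not GitLab code); `subscribe` stands in for a store-subscription hook such as Vuex's `store.subscribe`, and the state fields are made up:

```javascript
// Illustrative sketch: a status bar as a separate, read-only view that
// subscribes to store updates instead of sharing the store itself.
// All names and state fields here are hypothetical.
const createStatusBar = (subscribe) => {
  let text = '';
  subscribe((state) => {
    // The status bar only reflects state; it never mutates it.
    text = `${state.currentBranch} | ${state.openFiles} open file(s)`;
  });
  return { render: () => text };
};
```

The design point is the one-way flow: the main app pushes events, and the status bar has no write path back into the shared state.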
B: I don't know if temporarily having multiple stores would improve the complexity situation. Like, technically, if I'm bootstrapping the whole thing, I don't know if having multiple applications would improve the overall complexity, or hopefully it would just improve the simplicity of the units. But I think it's a good idea, because it's about responsibility segregation, and that's the name of the game.
C: I think Natalya might be a great person to touch base with on that. She did something similar enough with the right sidebar, while they were transitioning away from one giant Vuex store into a more Apollo-driven format, and she was doing a lot of back-and-forth communication between Apollo and Vue. So she might have some good ideas about performance, yeah.
B: Now, we have some units that might do this, but in unit testing: when this unexpected thing happened, did we do a helpful console error or console log, or did we kind of just let the unexpected failures happen? We might want to consider, as we write units and unit tests, asking: could something unexpected happen here?
A: First of all, this is a really good point, and I actually started thinking about it more after your comment on my merge request, Paul, when you suggested to not swallow the error but actually throw it and put it into the console. It makes a lot of sense, and I think if we were more disciplined about doing this sort of thing, we would be able to catch these things on different levels; it's not even only about the unit tests.
A: So technically we could do that, and all of a sudden that would make our tests much more robust.
A: So I think that would be a really good idea: taking the holistic approach of not being afraid to throw errors into the console. Because the general pattern we see in the code base among the front-end engineers is that we show these toasts or alerts, and we think that's sort of enough for the end user. Like, okay, we show the error in this nice shiny box on the screen; why do we need to care more? And apparently we have to, because we won't be able to catch this type of error in the tests properly, but we will be able to catch a proper console error, or console warning if we want to go that harsh on ourselves, in the appropriate testing environment. So I think this is a good idea, a really good one.
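A sketch of the pattern being described: surface the error to the user and to the console (and therefore to tests and error tracking) at the same time. The names here are hypothetical stand-ins, not the actual `createFlash`/`logError` APIs:

```javascript
// Illustrative sketch of "don't swallow errors": show a friendly message to
// the user AND emit a real console.error for tests and error tracking.
// `showAlert` is a hypothetical stand-in for a toast/flash helper.
const reportError = ({ message, error, showAlert = () => {} }) => {
  showAlert(message); // what the end user sees
  console.error(error || message); // what tests and Sentry-style tooling see
};
```

With a test environment that fails on `console.error`, any swallowed error surfaced this way turns into a test failure instead of a silent toast.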
B: This is one of the reasons why we introduced that logError module, because beforehand, everything in our code base was discouraging developers from doing this; we had ESLint forbidding console.error, and so on. But I think the main point of doing that was: well, we actually want a helpful error in Sentry.
B: We want a helpful error for users who are going to report something. But what I wasn't aware of until now is: we want to be able to debug when our integration and feature specs go wrong, and these console errors are a big part of that. I hadn't thought of that until our conversation just now, and yeah, I think that's a message we all need to hear across front end. We need to think, with every unit...
A: This is a very, very important thing, actually: bringing this message across the department, bringing it to the maintainers' level, so that we can keep an eye on it when we review merge requests. That's a very important and very valuable thing, because swallowing the errors in a product as big as GitLab is just not correct. That's like shooting ourselves in the foot, actually in both feet.
C: So should we look to create an RFC that we could bring to the meetings?
A: Are we proposing something? We sort of do, because we could say: okay, whenever you're about to import createFlash, you have to also import the logError module, and these have to play hand in hand. It doesn't make sense to import logError into createFlash itself, because not every flash yields a proper error; but if you're importing createFlash, then you probably have a nice use case for logError as well. I think we are about to propose a fundamental concept here, to be implemented across the whole department, and that sort of calls for an RFC.
B: Sure, yeah, that's a good idea. The quote that has stuck with me ever since I heard it was from someone from Boston Dynamics giving a talk. They were asked: how do you guys make these robots that you can, you know, abuse, and they still smile? And he said: you have to build resiliency at the lowest levels.
B: The resiliency of the system is as weak as its weakest unit. And so part of that is: with every unit test, we've got to make sure: are we doing the resilient thing? Do we have tests for the error cases, and if there are errors, are we being really informative about what's problematic? Yeah. David, would you be up for creating such an RFC?
C: Absolutely, I can put it together. The basis we're looking at is: we are looking to basically standardize the use of error modules, to give the entire code base a much more robust error-logging experience, which I can't imagine anybody being against, personally.
B: Yeah, I added it to our agenda, and I added it to our Zoom chat. We have started on a client-side logging doc in our FE guide, so we have started this, but there's much more to be done.
C: Okay, super. Let me open that, and I can drop it into the next agenda, and we'll go from there. Nice. That'll take care of number two, and then, in terms of our direct action from this, I think our next goal is to really consolidate the Monaco imports and see how far that gets us, yeah.
A: Yeah, as I said, this is the beauty of this discussion today: technically, none of the things we mentioned are super directly related to the integration tests, right? But all of those things have to be done in order to improve the quality of the code we are responsible for, as a group and for the whole product, and this is a really beautiful outcome.
A: I think, and I do believe, that even if the first item doesn't make things work more predictably, at least with the better error logging we will be able to fix things faster, and with a much better understanding of where things fail. So that's a really good plan, I think.
C: And if we want to spin up an issue for consolidating the Monaco instances, we can look at getting that scheduled.

A: Yeah, I'll create the issue.
B: I have a to-do to create an issue just to gather, and generally improve, error reporting in the integration tests, and I'm going to add a link to this issue on the integration test failures, so that anyone who's running into them can hopefully be funneled into our customer service desk. Customer service: your error is very important to us.
A: Okay. So, David, you create the RFC; I create the issue for the Monaco import unification; Paul, you create the epic for testing things, or an issue, and we will promote it to an epic later on. Cool, awesome. Thank you very much, it was really productive. Yes.