From YouTube: Node.js User Feedback Initiative Meeting
A: All right, welcome everybody. It's been a bit of a slow week with the July 4th holidays here in the United States, but this week we have some internal topics that we want to move forward. With the user feedback initiative, we've been really happy with, you know, the progress that we've made in not being the blocker and in enabling
A: user feedback, so, you know, building out some process, building out the infrastructure for gathering user feedback. Mostly we've helped the tooling group, led by Christopher Hiller of IBM; he's the lead maintainer of Mocha, and "boneskull" is his handle. Christopher Hiller has been great in bringing, you know, tooling forward, and in taking the aggregate understanding that we've gathered amongst the tooling developers, organizing that, and carrying it back to the technical steering committee to influence decisions.
E: I am a collaborator. I also do other stuff, you know, like moderation, and basically we did a lot of interesting work on promises in the summit, which has been very fruitful so far, and one of the next steps we want to take is to survey, like, to use Foundation tools to survey the user base about how people use promises.
A: One of the most active and organized groups was the group that Benjamin had around promises, and, you know, we'd kind of like to set the stage with promises: to get an understanding of where we are with promises, and what desired outcome Benjamin, and maybe some of the other representatives, would like to achieve and get feedback on.
E: So basically promises at the moment, you know, have a bunch of usability issues, and the debugging experience isn't exactly where we want it to be. Basically, one of the things I did before the summit meeting was to survey some people and try to get a better understanding of how the community has been using promises. And another thing that we have done is that we got a bunch of people together who have different viewpoints around what our APIs, and what we should do, can look like, which is really good.
E: It's something I really hoped would happen, and one of the next steps we want to take is to try to survey the community. We have several technical options, and we have discussed the implications of implementing them, and I think at this stage we want some feedback from the community, to better understand what users are expecting in certain patterns: namely, what should unhandled rejections do, and what the defaults should be.
F: I think, in more general terms, what I feel like I've seen a lot is this: the statement that was made earlier, that debuggability is not where we want it to be, is a fine statement. But what I usually hear, from folks in node core primarily and almost exclusively, is that debugging promises is terrible and always has been, and that was a large part of the reason for the pushback against
F: like, you know, there's been a lot of pushback in core, historically, about promise-based things. But I don't feel like anyone outside of core, or many people outside of core, have really shared that opinion, and I don't think there is a huge debuggability problem with promises. This comes up a lot around unhandled rejections, for example: like the desire to exit the process when there's an unhandled rejection, things like that. So I don't think anyone would argue with "let's make debugging anything better", right?
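The "exit the process on an unhandled rejection" behavior mentioned here can already be opted into from userland via Node's documented process event; a minimal sketch (the log message is just illustrative):

```javascript
// Treat unhandled promise rejections as fatal, mirroring how uncaught
// synchronous exceptions crash the process by default.
process.on('unhandledRejection', (reason, promise) => {
  console.error('Unhandled rejection:', reason);
  process.exit(1); // fail fast instead of continuing in an unknown state
});
```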
F: Async code is inherently hard to debug, harder to debug than sync code, so there's always something that needs to be improved there, and any work that we can do to improve it should be considered. But I think there's often a picture painted that things are on fire, and I don't really feel like that's the case, and I don't feel like I get that sense from non-node-core developers who use promises and asynchrony.
C: I was just going to say, you know, having talked to my co-workers, who go on site to large enterprises and deal with people using promises: in my experience, having talked to them, what they've conveyed to me from that experience is that it's not necessarily user code, it's third-party code, and you don't...
C: When you're pulling in a bunch of modules, you can't guarantee that, you know, they're handling promises appropriately, and so that makes it a hard story, and that's kind of why I think there's part of a push at, you know, the node level to do that. That's just been kind of what I've heard. Does that make sense? Does that kind of fall in line with your experience, or anyone else's experience here?
A: So, you know, I share a lot of Tierney's experience, mostly out of proximity, right: Tierney's at the company that I created. So, you know, I've definitely seen that, and, adding to what Tierney said, it's often the practitioners that are tasked to, sort of, you know, get code un-messed-up, and promises code, you know, tends to exacerbate that situation.
E: They take it seriously and are improving things in the engine. Like, one of the things they're doing for us now is async stack traces in production, and the ability to inspect the stack in production is, I think, a big improvement. Now, while things are getting better, and, like, just my opinion is that they are already a lot better than they were with callbacks, at least for the big crowds, I evaluate that with some surveys
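The async stack traces mentioned here are a V8 feature (exposed in Node.js behind the `--async-stack-traces` flag in early releases, and on by default in later ones). A minimal sketch of what they buy you; the function names are just illustrative:

```javascript
// An error thrown deep inside an awaited chain of async functions.
async function readConfig() {
  throw new Error('config missing'); // stand-in for a failing async step
}

async function startServer() {
  await readConfig();
}

startServer().catch((err) => {
  // With async stack traces, the trace includes the awaiting frames
  // (startServer), not just the throw site and a microtask boundary.
  console.error(err.stack);
});
```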
E: I've done, which were, like, a lot less representative than the survey I'm hoping to get. But, like, despite the fact that things are getting better, I don't think they're getting better as quickly as they should have been. Like, I still try to monitor the promises tag on Stack Overflow, so, like, there are things that still
E: are getting brought up a lot. And we're asking the V8 team for their time in implementing stuff, and they started implementing different warnings and, like, things that occur often for us, which is very generous of them, to dedicate that time. But I really want to make sure that we're not wasting their time on things that are my intuition, or, like, a personal intuition, which is why I think it's really important that Jordan is here, because he has a different opinion about what
E: the behavior should look like. So, like, Jordan, I'd really like your help in working on this, and I think the next step is to try to figure out good questions to ask, and we would really love your guidance on that, and to get that survey out, so we can bring it up in the meetings we have with the V8 team and try to prioritize the stuff that users actually run into.
D: I don't really hear too many complaints, or see, kind of, situations where our deployment or our application development is affected by that, but I think it only becomes a problem in the context of inconsistencies and in the context of debugging. As we said, I mean, it's not great, but again I would echo the point: it's not a burning problem.
A: Right, so let's talk about, in the meeting issue, I dropped in the DNS promisified APIs. So node core is beginning to experiment with these things. You know, for someone who's not, you know, following that day to day, Michael, maybe you could update us on, you know, the current approach the TSC is taking to dealing with the promisified APIs.
G: I mean, I think at this point it's sort of in an experimental state, where, you know, different people are adding a couple. You know, so far it's fs and DNS. It's sort of: experiment and get a feel for what works or what doesn't, as a prelude to, you know, maybe making a more organized effort to cover the broader surface area. I mean, my understanding of the discussion around the debuggability is that the challenge is, you know, as it was before promises
G: is a challenge for them, and it may be that it's not the developers who wrote the code in the first place that end up having to face the problem; it's the operations team that needs to do that. But I also agree that, you know, breaking the spec is not the obvious choice either, right? So it's...
G: Post-mortem, I guess, is the kind of debugging... and we will just need to make sure we have questions, when we go out, to capture who we're talking to, because I've seen quite often, you know, in a lot of cases it sounded like the broader set of developers don't have to debug those issues; you're not the ones pulled in to debug the issues in production.
G: So when we talk to them, they're going to say, like you said: well, you know, actually they're doing a different kind of debugging, so maybe the async ones are easier in that context. Right, so it's just a matter of understanding. And you could be right: maybe it's not an issue. But I think we need to get the right people answering, or to be confident that we're asking the right people the question, before we decide that it's not. Alright.
E: So I think there's actually a bunch of interesting discussion going on around promises right now. Netflix have committed to spend development effort on getting the post-mortem debugging story better, and are having meetings on that, and we are moving, not, like, very quickly, but we're moving. So that part is working well. And then, like, we want to get to a point where we are comfortable with the debugging experience people have now. That isn't to say that, like, debugging promises is easy or hard, but it's, like...
E: understood. And then, like, Michael's point about having to make sure we capture the right audience, that we don't just get people who are building APIs, we also get the people who are on call that need to debug stuff in production, is also a very good point, which is the focus on...
E: I think we need to embrace async/await. Now, this is an opinion; it's, like, a very, maybe one would say a pretty strong, opinion, but it's just my opinion. Now, like, my initial thought process would be: okay, I'll go and advocate for optimizing for async/await and not, like, regular promises, and this is not to say you shouldn't support regular promise usage.
F: Yeah, I think that async/await is promises, but it's not even all of promises. It's, like... an async function, I think, is awesome, right, because it's a static way to guarantee that a function returns a promise and never throws. But await I find to be actually terrible in practice.
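The static guarantee mentioned here, that an async function always returns a promise and never throws synchronously, can be seen in a couple of lines (the function name is just illustrative):

```javascript
// Even an immediate throw inside an async function never escapes
// synchronously; it surfaces as a rejected promise instead.
async function explode() {
  throw new Error('boom');
}

const result = explode();                 // no try/catch needed here
console.log(result instanceof Promise);   // true: always a promise
result.catch((err) => console.log('rejected with:', err.message));
```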
It's very tempting to accidentally create a serialized set of async actions when most of them could be parallelized. In other words, the advice I give people is: don't ever chain
F: your promises. Make each promise be a separate variable, and then, when you're done, you can combine things into chains if it makes sense, and then everything that's a chain you can write in terms of await if it makes sense. In my experience, if you follow that order, where you don't type await until the very last step, then you end up with maximally,
F: you know, parallelizable or concurrent code, and you avoid accidentally chaining things that don't need to be chained. So I guess I'm kind of mystified: like, I don't actually understand how you would be able to optimize async/await in... like, I'm sure there are some things the engine could do that wouldn't apply to normal promises, because it's syntax, but I'm confused why it would be hard to optimize both at the same time.
E: You know, that's a good question. So, like you said, async/await is promises: you await promises, async functions await and return promises, and it's always the same native promise. Now, this gives, if I understand correctly from Benedikt and the V8 team, an easier path towards providing a useful debugging experience around promises. So I'm not saying our APIs should be async/await focused, and I definitely agree with what people are experiencing: that accidentally not running them concurrently is a problem.
E: So, basically, let's say you have async stacks: you have the ability to pause a function in the debugger, or, like, post-mortem, after the process terminates, and you can inspect each step in the async stack. So, for example, providing that for promise chains might be more challenging for the compiler. So this is not a functionality thing; it's a debugging-experience thing, and we can tell V8: you should spend the equal time, or, like, that.
F: Well, just real quick: I mean, I understand what you're saying about the debugger itself, like, if you are only dealing with async functions that, you know, call into other async functions. But many promise-based APIs are not implemented in terms of async functions, and await is just Promise.resolve around the thing you're awaiting and, you know, moving on. So, like...
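That last point, await behaving like Promise.resolve around its operand, also means await works on any thenable, not just promises produced by async functions; a minimal sketch:

```javascript
// A plain object with a then method is assimilated by await exactly
// as Promise.resolve would assimilate it.
const thenable = {
  then(resolve) {
    resolve(42);
  },
};

(async () => {
  console.log(await thenable); // 42
})();
```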
E: Async functions are not necessarily... like, because it's a subset, like you said, it's easier to solve. It's subpar to provide a better experience for async functions and not for promises in general. But the question is: are the limitations severe enough not to spend, like, twenty percent of the time on eighty percent of the gain, Benjamin?
A: So, you know, one thing that you're going to have to decide, in, you know, asking folks about promises, for example, is how much you want to ask folks about async/await, and how much within that you're going to want to ask about sub-permutations of, you know, the technical approach. Michael, you had a comment on this? Yeah.
G: One of the key things, kind of related to that, is that when we say debugging, it's kind of an overloaded word, I think. Part of the idea of narrowing it to async/await is that one of the problems with the unhandled rejections is the spec allowing you to add a handler at any point. If you think about await, you know, it's deterministic: you can deterministically decide whether there's a handler or not.
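The contrast Michael is drawing can be sketched as follows: with a raw promise, the spec lets a handler arrive at any later tick, so "unhandled" can only be decided heuristically (Node flags a rejection that is still handlerless at the end of the microtask turn), while with await the handler is visible syntactically:

```javascript
// Heuristic case: a handler attached one tick too late is first flagged
// as unhandled, even though it does eventually handle the rejection.
process.on('unhandledRejection', (reason) => {
  console.log('flagged as unhandled:', reason.message);
});
const p = Promise.reject(new Error('late handler'));
setImmediate(() => p.catch(() => {}));

// Deterministic case: with await, the try/catch is statically known
// to observe the rejection, so no guessing is needed.
(async () => {
  try {
    await Promise.reject(new Error('awaited'));
  } catch (err) {
    console.log('deterministically caught:', err.message);
  }
})();
```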
G: That's a case where it may be that you can do something better for async/await than the general case. In what was mentioned, though, in terms of the async stack traces and that kind of debugging, like trying to look at the flow, you know, maybe there isn't a difference, right? So it's kind of teasing out, you know, what kind of debugging we're talking about, and in which cases it then actually makes sense that one could do a better job than the other, kind of thing.
A: I dropped a survey into the chat. I invite, you know, especially Benjamin and Jordan, to have a look at that, in terms of, you know, the questions asked and the different approaches. You know, I felt like we got some of the best and most interesting answers out of the open feedback sessions. In constructing a survey, you know, it's an interesting balance between laying out however many different clear choices that folks have to choose from, and, you know, providing opportunities for open feedback.
A: So, you know, I think the next step for the promises folks is, you know, to begin to sketch out some of the rudimentary questions, and, you know, I'm happy to put that workflow in our regular team meetings, so, you know, we're helping encourage and develop that along, you know, through our user feedback sessions.
A: We can provide the infrastructure for it to be as elaborate as you want it to be, and, yeah, there's some, you know, existing signal in benchmarking that may be interesting as well. And at this point, our next meeting, next general session, sorry: we have a general feedback meeting in two weeks.
A: The meeting would be in four weeks, on August 3rd, so we can reconvene then and bring that forward. And, you know, in terms of a timeline for realization, I think, you know, having something that we put together and submit at the beginning of September, when folks are back from, you know, summer holidays, would be good timing for that.
C: Okay with that.
D: My only feedback on that is, you know, depending on the kind of action items you want to drive, doing something on a Friday presents less of an opportunity for a follow-up the day after, because people, you know, enjoy their weekends, especially in the context of the enterprise folks. If we're bringing people in as part of activities, then I'd want to see an opportunity for them to follow up. But that doesn't mean we have to change anything now on Fridays; this seems to be a recurring date that we can continue relying on.
A: Yeah, so, you know, just the feedback of why it's on Fridays: the reason why I set it on Fridays is largely out of my convenience, and Fridays is a time that I can consistently carve out to do open source work. So, you know, that's why these sessions are scheduled for Fridays. But, you know, in driving the tooling group forward...
A: Great, so, well. So that's great, and, you know, Ahmed, if there's anything around, you know, promises, and what folks are doing with promises, that you would want to weave into that discussion, you know, I think that would be an interesting perspective, because, you know, it has been some of the bigger enterprise users that have been...
D: Yeah, I think there's an opportunity there. I would want to also be careful of, you know, when we say enterprises, understanding a little bit more about the use case. You know, like, I was looking at the survey numbers we shared earlier, and it would be interesting to know, like, okay, all these people are using X, Y, and Z, and are relying on certain features and certain things. But what are they actually doing with it? Is it...
D: It does help kind of clarify exactly where the kind of interest lies. As opposed to: if we come back and say, look, there are production-level kinds of systems running that this actually does impact, and this does affect in one way or another, that's a different conversation than saying, you know, our build tools...
A: We were able to get some signal around, like: is it a microservice, is it an API, that level of granularity. I don't know how we would, you know, apply that. Since it's open-ended, it really doesn't give the same sort of roll-up that you get with, you know, having some sort of an option there. I'm not sure how we would, you know, go and build out that second layer of categorization. Yeah.
D: I don't know if I have an answer either. I'm just thinking in the context of: if we look at these types of data, these types of numbers, do you actually know, behind it, what this product or microservice that you're building is driving? Are you just doing some hello-world APIs, or are you building, you know, really important business kinds of drivers? It does help shed a light on what the actual value being presented is, and again I...
E: So I think, that's a good point, I think, specifically in the context of promises and the performance working group, there has been a lot of contention around, like, whether or not people actually care about how fast promises are in Node.js and how commonly they're used. And then, like, there's the fact of what people are actually using, which is unfortunate; it's something that, as a Bluebird maintainer and someone who's been involved in Bluebird...
E: I added a warning, during the summit, to the homepage of Bluebird, about measuring your code before making assumptions about performance, because there is a lot of content online about that. And, like, I understand why it would be very interesting to understand how people are using it. But I think the data, specifically for the performance working group, and specifically seeing Bluebird there, I think it's telling us a lot, and it can probably be explained better.
D: So I think, in the context of, like, an enterprise feedback group, and especially looking to dive deeper on these things, this is exactly the point you're making, Benjamin: like, you can start with those data points, but then we can go further. Like, we can systematically look at, you know, the hundreds of millions of dollars of investment in infrastructure that we're doing, for example, here at Telus, and what type of applications we're building, and the actual kind of level of products that we're creating, and deep dive into:
D: Okay, where exactly is Bluebird being used? Why is it being used that way? And what is the actual business problem it's solving, or not solving, perhaps? And, in the context of the native approach of using promises in APIs, what's holding us back? Like, this is where you can actually leverage the enterprise.
E: That's actually very interesting. It's something, like, I've always wanted to know: when we make a call, like, we want to spend, I don't know, 100 hours improving the situation of X in node, it's very interesting to know how that translates to saving money, avoiding losing out, and improving lives. And that's...
D: Where, like, coming back to the point about performance: if all my usage of, and I'm just going to pick on Bluebird as an example, if all my usage of Bluebird in my application code is in machine learning and in data-processing stuff, then maybe I don't really care about performance. That's just ETL jobs that are going to run over time, and who cares. Versus if I'm telling you, you know, like, this is in my critical path of e-commerce.
E: It's actually, like, I'll keep it short, but it's an interesting story, because, in a gist, Bluebird was designed first of all for debuggability, rather than to be as fast as possible. Like, there are things that could be faster in Bluebird but are not, because it would hurt debuggability. But I think we have an issue...
E: It's more of a community issue than a technology issue, where, when you tell people something is faster, or fastest, even if it doesn't matter as much further up, they are more willing to adopt it and introduce it into their workflow. So I think it's more of a people issue than a technology issue, in this case. Yeah.
A: All the good ones are... all right. Well, it seems like we got our ducks in a row on that. Closure on the European user feedback session: unfortunately, I lost out to Yang of the V8 team. So, you know, there was an option of a low-level... it was code reuse, and then low-level debugging and V8, led by, you know, one of the leaders of the V8 team, so everyone followed Yang to the debugging session, and the user feedback session in Europe had very low attendance.
A: That said, there was a crew from JS Kongress (with a K) that, you know, did stick around and, you know, asked a bunch of questions around user feedback: gathering user feedback, and, you know, the value of that. So we had, you know, a great discussion around that, and the outcome of the session was, you know: the V8 team is again up to no good, and they've invited everyone to Munich in February for the next iteration of the diagnostics summit. So, you know, there's a nice opportunity with that.
A: To do some user feedback, and to sort of, you know, use the opportunity of having the folks that are, you know, engaged and interested in doing the improvements to our diagnostic tooling in node, and spending half a day, or, you know, some time while we're there, you know, gathering some feedback from end users. So a positive outcome; you know, not quite the one that I was expecting going in. Yeah.
A: Anyway. Oh, good, yeah: we may have influenced that date, because JS Kongress is in March, and getting those two closer together, since there was, you know, JS Kongress in Munich and this event in Munich, brings those things closer together, so, you know, folks are, you know, able to participate more in the trip. Good stuff.
A: All right, well, I think we can wrap it up there for today. Thanks, Benjamin and Jordan, for bringing, yeah, promises to the table here; look forward to continuing to collaborate with you on next steps. And thank you for, you know, spearheading things with the Enterprise Group, and look forward to the general user session.
G: I guess, Ahmed, if you want to talk or sync on setting up that next session: I think I, somehow, somewhere in my emails, have the list of attendees, if you don't have that, of people who registered for that. But anyway, just feel free to reach out to me if you need any help getting that set up. Yeah.