From YouTube: Node.js User Feedback Enterprise Focus Meeting
B: Okay, hi everybody, my name's Ahmad Nassri. I'm the chief architect at TELUS digital, a Canadian telecom, and I'm representing the enterprise users feedback group, which is part of the user feedback committee of Node.js. Before we get started talking about some of the topics today, perhaps a quick round table around the attendees, just to get everybody's current enterprise relation, slash, company they're representing. I think I'm just going to go by the list of names I'm seeing in the Zoom chat. So, Mihai, you want to go first?
E: Hi, I work for Intel. I manage a team in India working on Node.js use case optimisation for servers. The topic today is pretty interesting to me; although we don't really use a lot of Node.js ourselves, we are pretty interested in how it's being used, like which parts are being used heavily or could use improvement. So that's why I'm here, thanks.
I: Hey all, just happy to be here. I have too many meetings and not enough food, so I'm driving to go get some lunch, so I'll, you know, manage my mute appropriately. I work at IBM; I am an open source engineer and developer advocate. I've been on the community committee for a while and started doing some stuff with the user feedback here. Actually, I still need to start doing some stuff: I've got a couple of surveys to start that I hope to get going today, and generally I'm interested in the enterprise focus of this group.
B: You know, multi-million dollar projects on top of Node.js, and bringing the context of business operations, and the value of these businesses investing in Node.js, back to the Node.js project, so that we can surface data points, we can surface meaning, we can surface a feedback cycle that benefits not just the businesses and the Node.js project itself, but the entire community of Node.js users and developers around the world, and obviously then with a different lens on it.
B: Unlike the kind of public feedback groups, and the public feedback kind of insight that we get from the general public, this is looking through the lens of a business, with a dollar sign investment in front of it: how does something as simple as an API change affect their business operations, or the value they're trying to achieve? So for today's meeting we wanted to have a little bit of a focus around:
B: How can businesses contribute back to the project, in anything other than, you know, volunteering individual time or money, and specifically non-monetary types of contribution back to the project? Whether that be data reporting, anonymized insights, or, you know, structured calls to action around certain things that the Node.js project is trying to achieve. Say, hypothetically, a new version comes up, and we want to make sure that, on top of all the testing the project's already invested in, we want to see: with this new version, is there a change in the performance?
B: Does it impact businesses and companies in a significant way, positively or negatively? Both are valuable learnings. And then any other means, as long as it's a structured approach that can be adopted by large-scale teams as opposed to individuals, because we all know that your time is precious and valuable. We all know that you're volunteering your time just to be on this call to begin with, so any sort of approach to asking for feedback or gathering data needs to be structured in a way that makes it easier for you to execute.
B: Let me just start by giving some examples, based on the type of user feedback conversations that we've had, at least in the past couple of months. When we look at things like API changes in core, and we look at things like the package maintenance topics that are happening right now, there's a lot of information lacking about what happens behind private repositories and private projects.
B: An estimate I read a long time ago puts that at 70 to 80 percent of the developer ecosystem, as opposed to what's visible through open source packages on npm or GitHub. So that means we're missing a significant amount of data, which leads to a significant amount of misalignment, potentially, when making decisions, whether those are technical decisions in the core of Node.js or in the ecosystem around it.
F: Right, that's something we could certainly do over here at PayPal. We don't capture, we don't do an analysis of Node.js API usage today, but pretty much every app that's in production certainly goes through our build pipeline, so there are opportunities for us to capture that metric.
B: What would be the requirement for you to easily do that and share it back? Say, hypothetically again, because I don't have anything in mind right now, nor does the tool exist, but say the tool does exist, and we say: hey, can you run this tool and tell us, hey, this codebase uses X, Y and Z, in terms of some internal API usage of some internal core library?
G: I'd assume, though, that the model would be: here's a tool, which you can probably look at fairly easily to understand that it's just scanning for something, and you would not have the tool report the data back. It would just produce the answer that says: okay, this is used in your codebase. It would be like: we found eight thousand occurrences of promises being used. And that would be something you could probably more easily get legal approval for.
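A tool along those lines could be a purely local static scan. The sketch below is a hypothetical illustration of the idea only, not an actual Node.js project tool: the module list, the regex matching, and the function name are all assumptions, and real tooling would parse the AST rather than pattern-match. Crucially, it only prints a local summary and never reports over the network.

```javascript
// Count usage of a few Node.js core modules in source text and print a
// local summary. Nothing is sent anywhere; the output can be audited
// before anyone decides to share it.
const CORE_MODULES = ['http', 'fs', 'crypto', 'stream'];

function countCoreUsage(source) {
  const counts = {};
  for (const mod of CORE_MODULES) {
    // Match require('http') / require("http") style imports.
    const re = new RegExp(`require\\(['"]${mod}['"]\\)`, 'g');
    const hits = source.match(re);
    if (hits) counts[mod] = (counts[mod] || 0) + hits.length;
  }
  return counts;
}

const example = `
  const http = require('http');
  const fs = require('fs');
  const alsoHttp = require('http');
`;
console.log(countCoreUsage(example)); // { http: 2, fs: 1 }
```

Because the output is just aggregate counts, it is the kind of artifact a legal team can review before anything is shared, which is exactly the point being made above.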
F: You know, as a big one-off, and, you know, it would make our repository hosting team sad, because we would blast the APIs with a ton of calls. Maybe that's not a good way to do it, but that's another approach. My instinct says, for us, probably capturing it on an ongoing basis and storing it in our existing metrics works, but that might not be the best solution for everybody.
F: I really like the ongoing, you know, analysis at build time, because we're already doing analysis for other reasons, and we wouldn't have a one-off, hey, we'll have the full results in a few days. You know, because this would be an ongoing thing we would be capturing, it's an easier thing for us to get into our existing workflow, if we went that direction.
H: I think the challenge for us would just be figuring out how to audit not just the services themselves but their dependency trees. Most of our stuff is pushed down into the modules, and I can see that it'd be very hard to tell at build time which dependencies are actually being used on the execution path. But as long as the tool itself doesn't report out over the network, and we have a file that we can audit before sharing, I would be very surprised if we saw any resistance, I think, for us.
J: Sorry, yeah, this is a really cool conversation, so I'm gonna put my enterprise LinkedIn employee hat on here and just comment on the legal aspect of it. There's a whole slew of apps and tooling and CLI tools that opt you in kind of automatically, when you download them, to report anonymized statistics at runtime. I'd be very curious what the precedent is for those tools, like the language they use in the Terms of Service to make these larger companies okay with it.
J: I know that LinkedIn will probably have concerns about anything leaking out from behind our corporate network, but it already happens, so there's good precedent here for the reporting. So there's stuff that Node.js can probably do on its end to make enterprise users more comfortable, from a legal standpoint, with reporting these things: partially, like, an audit and a guarantee that statistics are anonymized and secure, or, you know...
B: So if we're looking at the dependencies: I know Will mentioned dependencies, talking more about the core API usage within the dependency, but what about the dependencies themselves? For example, we're all familiar with what happened earlier in the week with some of the vulnerabilities that surfaced.
B: So, as a community, we can easily look at npm, look at the dependents, and say: okay, some bigger libraries are being affected. But that doesn't immediately tell you the actual level of impact for businesses, specifically whether you're using that dependency directly, or whether your business operations are running on it. In other words, in our case, we looked at it and it doesn't affect us: good, great success. But we still have a lot of work to do.
B: We still have to upgrade a lot of our dependencies, and somehow clean up a lot of our technical debt around that stuff. So, in terms of the package dependencies out there, you're not likely to get that information from npm, because you're not publishing every single thing to npm. Your applications are mostly living in GitHub, and only your libraries are potentially living in npm, if not in a private registry. Would that type of information also be something your company can share?
A: You get into that security incident area, you know, and that definitely gets a lot dicier. You know, we may want to explore considerations that aren't security vulnerabilities. I think something like ES module adoption would be a happier path for us to, you know, explore with that sort of framing.
B: Let me just articulate that more: I wasn't suggesting sharing whether or not you're vulnerable. What I was suggesting is, would you be able to say (I'm gonna use PayPal, because you guys spoke up first): here's the top list of packages that we use at PayPal. We're not saying where they're being used, but hey, we use request, we use, you know, some stream package.
F: You know, that was a public incident, and it may or may not have been anything related to Node, but it was an open source project, and there was a lot of concern expressed internally from our leadership about it: we wouldn't want to be associated with anything related to something like that. So yeah, I'd definitely hesitate when it comes to sharing anything too detailed about what we use, other than what we've already exposed or what we've decided to say.
B: Yeah, so the reason for that ask is exactly what William said: a lot of the time, your dependence on a certain API might actually be in dependencies, or hidden away in a tree of dependencies. But then the other lens to this is: there's now a working group proposal, I think Michael is leading it, in the context of how the foundation can support package maintainers and actually get some value back to the community through that. And the very first question that probably came up is: well...
B: You can look at npm dependencies, you can look at the numbers of downloads, but that's not exactly an indicator. In our case, all of our downloads are cached, so we're probably registering one download hit on npm for any particular library that we're using, or any particular version of any library we're using, and that's it. So how do you quantify value to the community beyond the open source numbers that we have? That's what we're now asking: can we even ask companies what their dependencies are? Yeah.
F: So the circumstance would be: here are 20 packages, 20 important modules we think are really critical to the ecosystem; what do you think about this list? A company could safely say: yeah, we care about this list, because Express is in our stack, right? And we don't actually have to say whether we're using 19 of the others to say we care, yeah.
B: We actually track all of this all the time. Let me get the bar from Zoom out of the way. Okay: we track all of our library usage, both external and internal. This is an example of the internal libraries that we're using, which is specifically our design system. We track, across all of the packages, so across all of the applications, which package is being used where, and what version of each package is being used.
B: We've also got this divided by our teams, so we can even see, here's the version tracking number we look at, you know, because we also use automated bots for pull requests. We look at the current version of each component, and what version each application is on, and we just drill through it and track them. So we can see there are some things that are lagging behind, some things that are way behind, so we can actually prioritize where we, as, again, the package maintainers, can focus our investments on helping those applications get updated.
B: So, you know, sometimes ugly diagrams, sometimes pretty diagrams, but the point is, we have that internal data, tracking stuff not just for our own components but for all the libraries that we're using, and then we track adoption, across the whole company, of all the libraries and components that we have. We use Data Studio for this, just to filter it out, and we're doing all this manually.
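That kind of cross-application version tracking can be approximated with very little code once each application's package.json is collected. The sketch below is hypothetical: the data shape, the function name, and the design-system example are invented, and real tooling would compare versions with semver rather than a plain string sort.

```javascript
// Given each application's declared dependencies (as read from its
// package.json), report which apps lag behind the newest version seen
// for one shared library. NOTE: the plain string sort only works for
// versions of equal digit width; real tooling would use semver.
function findLaggingApps(apps, library) {
  const usages = apps
    .filter((app) => library in app.dependencies)
    .map((app) => ({ name: app.name, version: app.dependencies[library] }));
  const newest = usages.map((u) => u.version).sort().at(-1);
  return usages.filter((u) => u.version !== newest).map((u) => u.name);
}

// Invented example data in the spirit of the design-system tracking above.
const apps = [
  { name: 'checkout', dependencies: { 'design-system': '2.1.0' } },
  { name: 'marketing', dependencies: { 'design-system': '1.4.0' } },
  { name: 'search', dependencies: { 'design-system': '2.1.0' } },
];
console.log(findLaggingApps(apps, 'design-system')); // [ 'marketing' ]
```

The output is the "some things are lagging behind" view described above, which is what lets maintainers prioritize which applications to help upgrade.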
B: It would be very interesting to associate this with a dollar sign, at least for the value kind of measurement. If I look at the types of work and the types of applications this is being used for, a lot of it might be marketing-type applications or static websites, but there are applications in there that are directly in the path of the customer, in the path of sales, in the path of a checkout workflow, that actually have dollar value to the business. So I can share and show you any of that data.
B: What version of Express is also relevant, and how it's being used is just as relevant, but at the very least, knowing that we have a dependency in that context makes it so that we, as a business, or as a technology team, have a clear mandate to invest back in this tool, in whatever shape or form that may be. So that was just a quick example of some of the data we have collected.
F: And just to piggyback on that: we have a lot of the same statistics, like you were just outlining in your pretty tool. We had a tool like that; we shut it down a few months ago for separate reasons. But what I wanted to point out was, when it comes to those tools, like, you know, the Node security tool, the scanner thing, there's a mandate now, and, you know, we require passing.
J: We have a similar setup back here: we have an internal tool very similar to what you just showed (yours is prettier). But I'd also be interested in the cost to enterprise companies of managing that type of tool themselves, because I know we have poured a lot of dev hours into getting that spun up and keeping it working. And you could make a case for providing a tool that does that type of dependency tracking across multiple repositories.
J: You point it at the repos, its endpoints get all the stats, and then, potentially, a feature could be opt-in anonymized reporting to a central source, to help the ecosystem as a whole. So the enterprise gets the tooling, they don't have to pour money into managing it themselves, and you get to opt in to helping the community by reporting statistics for popular packages, yeah.
B: I think absolutely. Tools such as that, like we said, there's a cost to development teams, and there's a push to build it ourselves anyway, so I think the expectation is that people should be building or using something like that. But the output of such a tool (and you've all said you have similar things) might be useful, in some contexts, to share back to the community. And that's the question I'm asking: what type of information can be shared? We talked about...
B: You know, a potential analysis tool that can give you core API usage: that helps the technical steering committee in making decisions about the core elements of Node.js itself. We talked about libraries, sorry, dependencies, and whether publishing that sort of information is or is not as useful.
B: The only thing I found in our tracking that can't be shared publicly is the external dependencies and their versioning, but due to the reasons we just discussed, that's a bit of a challenge. I don't have more data; I wish I did. But that's why I'm asking you, you know, large-scale enterprise teams: what other data do you collect? Perhaps there are commercial products that you use that have surfaced some valuable data points we can ask about as well.
J: So, at least here, I wouldn't say we collect the data so much as we're able to analyze it as needed. There's a lot of performance metrics that we track, just on how things are running at any given point, that might, I don't know, actually surface new information to Node, other than: hey, here's a hot function in core that's causing delays in this certain scenario.
G: I don't think that would bring out specifics about the hardware, or the environment that you run things on; it's probably more likely that there are things specific to your application that would be affected by the changes, versus the underlying hardware, right? So the more useful thing might be to develop and provide benchmarks which reflect your internal applications, right?
G: It's like, you know, if you had a really simple application, and it was, okay, here's how you can set it up and run it as a benchmark, then that's something that, you know, maybe we could run nightly and then feed that data back. If we basically had, here's, you know, five different benchmarks that reflect different types of applications in real-life businesses, then the data coming out of those would say: hey, is it getting better, is it getting worse? And that would be a useful thing, right.
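A nightly run of that kind only needs to emit one comparable record per Node.js version. The harness below is a minimal sketch of the idea, with an invented JSON round-trip standing in for a real business workload; the function name and report shape are assumptions, not an existing project tool.

```javascript
// Time a representative workload and emit a record that can be compared
// across Node.js builds (e.g. current production vs. a release candidate).
function benchmark(label, fn, iterations = 100000) {
  const start = process.hrtime.bigint();
  for (let i = 0; i < iterations; i++) fn();
  const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
  return {
    label,
    node: process.version, // which Node.js build produced this number
    opsPerSec: Math.round(iterations / (elapsedMs / 1000)),
  };
}

// Stand-in workload: a JSON round-trip, not a real business application.
const result = benchmark('json-roundtrip', () =>
  JSON.parse(JSON.stringify({ user: 'a', items: [1, 2, 3] }))
);
console.log(result);
```

Run once against the current version and once against a release candidate, the two records answer exactly the "is it getting better or worse" question raised above.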
J: We have, and they don't really quite count as benchmarks, though we do care about this as well, but presumably, as well-established companies, we all have pretty robust test suites around all of our applications. If there was a way to get more valuable data out of those, other than just: record start time, record end time, see the delta...
G: Definitely. I mean, we have a couple of benchmarks, but it's always a question of how closely these relate to real-life workloads, and anything we can do to get to the point where it's, yeah, okay, we're pretty confident in these few benchmarks, and if they go down, then that means you're really affecting businesses as well: that would be quite useful.
J: It'd be interesting to get a group of people with a vested interest in Node.js performance committed to having one instance of some significant product spun up locally, where they can report kind of fairly anonymous perf statistics, and be running kind of pre-releases of the latest Node, to make sure that nothing is imploding in the process, right.
G: That's sort of along the lines of the early testing, early canary. Even if you can't make a benchmark that can, you know, act as a proxy, actually having data that says, yeah, we've run it and things only look better, or, hey, wait a second, we've seen these things, and then maybe reproduce a test case. That's also like the enterprise canary crew, yeah, yeah.
B: That touches on a topic we talked about in a user feedback session a while ago. We've talked so far about how enterprises can provide data into the project; let's switch gears and talk about how the project can provide data back to the enterprise. And in this context, I would ask everybody here, representing the different enterprise companies that you're part of: at what point do you go and look at...
B: The release candidates, I mean, if at all. And do you even think, for yourself, about performance impacts or any other kinds of impacts? I'll speak for myself: I don't, unless I'm doing it for myself, as opposed to for my business, or the business I represent. Is that something the project, like Adam suggested, can provide as a structure or a mechanism, saying, you know...
B: Something more clear than: by the way, there's a release candidate somewhere, go download it from GitHub and run it. If there is such a structure, are people here confident in their own business's ability to go and try that out? Do we even have that capacity within your teams? Speaking for my team, I'm trying to create that capacity; it's not there today. Do you have that level of maturity of DevOps practices, or similar, where you can decide to spin off an isolated container somewhere?
F: I think that's really tricky for us, just looking across our ecosystem: very many different use cases, different types of teams, made up of different, you know, personalities, where they care about some things and other folks don't. Product people: sure, they like it if the bottom line is improved, great, but how much do they want to invest in it? How do you quantify that? You know, if they need to invest to be able to put this in place, well, how much of a cost is it?
G: I'm guessing it's because there haven't been any big enough disasters. You know, you haven't gone to upgrade and then, hey, it falls over to half the performance and it costs you a bunch of money, right? So it's been okay. Alright, I'm just guessing that that's usually when it's sort of, well, we think it's a great idea, but we can't justify it: it's often because there hasn't been enough pain yet.
B: Speaking for myself, I think that ultimately comes down to dollar signs, right? At the end of the day, I want to do a whole bunch of things, but I have to justify them with the business and make the business cases for them. And just to kind of bring this home a bit more: again, this doesn't happen today, but say there is an Enterprise Canary group, like Adam called it, and that Enterprise Canary group receives, call it,
B: Monthly emails from the project, describing an upcoming release candidate, and within that kind of notification or alert: here are the actual numbers of the performance gains you will get if you upgrade from this version to this version. Does that trigger at least a dialogue, in terms of: oh, this is coming, this might give me a 10% improvement overall, maybe 80% in some cases, think crypto improvements, or kind of HTTP improvements?
B: Does that number, or something along those lines, help build an internal business case for further investment along the way? And maybe it's a one-off, and maybe sometimes you're gonna say, yeah, duh, but other times, just having that knowledge: how do you kind of build a business case? Because the challenge...
B: It certainly is. I still have Node 6 running, and short of me having the authority and power to tell people thou shalt not use that anymore, people don't upgrade. And I'm not just talking about the latest version of Node.x; I'm talking about an earlier version of 6 with security vulnerabilities.
B: So, to get people to actually press that button to run the build, I have to have the business case. If I have the business case, even for something that's performance-impacting, it becomes a little bit more of: there's an investment, there's a dollar sign associated with it, which drives action, at least in bigger companies, as I've noticed, more so than goodwill and wanting to.
J: Putting on my LinkedIn enterprise hat: I think here we're finally getting to a point where I could make that case and help get the team moving in a direction. We'd be interested in being a part of that type of program, if it existed. We would need the friction of doing this to be very low, though. So, like: here is a cut of the Node binary, and you run it.
H: We'd be interested in exploring that work. So, what quality of data is the project interested in? Is it mostly, like, the synthetic tests run against, maybe, our infrastructure, or is it more along the lines of: what will the impact on actual production services be if we release this, theoretically, today? I guess, turning it back: what are you interested in from our end?
B: I would say anything and everything, obviously, but here's why. Just having you all come here and say, hey, Netflix has had a performance impact of one percent at our production level (I don't need to know where, I don't need to know why), that sets up a type of conversation and a dialogue. But generally, that can surface itself in multiple different ways. It can surface itself to the performance team, saying, oh, we need better benchmarks, because maybe we missed that: our benchmarks showed this was better, but Netflix ran it and it actually made them slower, right?
B: It might surface itself to the kind of core API team, in the way some of those APIs have evolved, saying, wait, we're just doing non-breaking changes in the API, how come performance is impacted? There's no way to predict how it would surface, other than, and this is why we're having this conversation, knowing that this is not just library maintainer X saying my performance dropped by Y, okay, but knowing that this is actually affecting Netflix, even at 1%, even at 0.5%.
B: Then, for Node, we don't need to know, we don't need to talk about, where that reliance is happening, for privacy and security concerns; just knowing that there's an impact there would help surface where the investments need to be within the project itself. Like I said, whether it's, we need better performance, we need better benchmarks, we need better kind of investment in these tools, rather than just keeping shipping things, and keeping making changes, and dealing with the consequences after that.
J: Yeah, I honestly think that even, like I said, just running your test suite against a new Node binary would give us valuable data. Maybe it's not production data, it would possibly show differences, but yeah, it's the canary in the coal mine; we don't need it to be the one with the actual bird.
B: So that leads to the last question of how we actually tie all of these topics together. I don't know; a weekly meeting is probably going to take a lot of time, and even a monthly meeting will take everybody's time and commitment, and I'd rather give more of an asynchronous type of feedback cycle a bit more of a try.
B: You know, we can adjust; that's something we can refine and improve over time, in terms of how to best do these types of engagements. But I think a very good starting point, as we immediately identified, is doing some sort of call to action early on in a release candidate, saying: hey, here are the major impacts. We don't need to list all the changelogs, just major impacts: things around performance, things around breaking changes. Just sending that out, and having people opt in and provide back an answer.
B: Saying, yeah, this sounds painful, maybe we can be involved. And then the actions from that depend on the type of answers. And then the other thing we also identified, just to wrap this up, in terms of providing data out in a continuous fashion, as opposed to a call to action: we need to look at tooling that makes it frictionless, or simple, and that provides a level of information that can be manually inspected and shared, at least as a first start.
H: Right, so one of the things I'd like to share is: before we do another one of these (I'm not sure about others, but it sounds like other people are in a very similar case), we have a lot of new projects going on across the company, a lot of different moving parts. The lead time for this meeting really didn't give me an opportunity to do the due diligence of reaching out to these teams, really representing everything across Netflix, and really feeding that into the dialogue here.
B: And I would also be cautious about the time of year as well: vacations and holidays and snow might prevent a lot of people from being active, at least in extracurricular activities. So I would say, at the very least, we probably should try to set dates for a monthly type of thing, but that's going to be in December, so we'll figure something else out.
B: And that's why I wanted to emphasize the asynchronous nature of doing any sort of call to action, or any sort of data gathering. Even if we don't have this type of forum, we should still be able to communicate asynchronously, and provided there's a very simple ask, and a very simple, clear call to action, then people can opt into that. Great, well, we're at time, so thank you all for joining us. This has been great.