From YouTube: A Chat with the Node.js Technical Steering Committee
Description
Michael Dawson, IBM; Anatoli Papirovski, Postmates; Gabriel Schulhof, Intel; Matteo Collina & Anna Henningsen, NearForm
The Node.js project is a vibrant and fast-moving place, and it's sometimes hard to keep up with everything that's going on. Come listen to Technical Steering Committee members talk about how they keep up, their views on key strategic initiatives, what's up in the project, and what they are most excited about going forward. We'll save time at the end for questions from the audience, so think about what you might want to ask the TSC members; we hope to see you there so we can answer them.
A: Okay, I guess it's nine o'clock, so we might as well get started. I just want to say, before we do, that we will have time about two-thirds of the way through for questions. So as we're talking, think about what you'd like to ask; we'll pass around the mic and, you know, do our best to answer. And at this point, I'll ask Matteo, and then everybody, to introduce themselves.
B: But at this point it's a technology available for everybody. Consider, for example: probably most of you use Visual Studio Code to develop in whatever language, and that's built on Electron and Node. Or other folks might just use serverless, and there's some Node in there as well. So Node is basically just a technology that is, you know, available for developers to use, and everybody can use it. There are others, of course.
A: One of the things I really noticed is that at this conference, as opposed to the first one, where a lot of people were experimenting, now almost everybody I talk to says: yeah, we're using Node, and we're using it in production in some way. So it's really a big change on that front.
D: Yeah, I think the other part that's interesting is there's a whole community of developers now who are using it on the front end, for React and that whole ecosystem, that doesn't even know that everything they do is powered by Node. All of the build tools, everything; it's just kind of in the background, and I think that kind of speaks to the maturity part.
B: Absolutely. I just wanted to add one quick thing: one of the greatest things that happened to me this year was that I went to a design-system training, so I was in a room full of designers, most of the time completely out of my comfort zone. And one of the exercises during this day of training was: well, you need to use this software, and in order to do that, you need to install Node.js. So now we had a whole room full of designers with Node.js installed on their machines. They don't even know what it is, but they just use this tool, which uses Node, or some tool on top of it, and for them it's basically just a runtime, just a dependency. So it has been a fantastic feeling, really.
D: I think it depends. It's interesting, because there are a lot of parts in Node that are moving really fast, really progressing at a good pace and moving forward. I mean, you look at stuff like workers, you look at stuff like modules, and the code quality there is really amazing. But then you go back to some of the stuff that requires backwards compatibility. If you think about streams and HTTP, there are ecosystem modules out there, like Express, that support all the way back to 0.10, I think. So you have this tremendous range of versions they have to support, and it creates a lot of difficulty in terms of maintaining: you fix a bug, or you think you're making something more consistent, and in the process you break everybody that you didn't know you were going to break. And we have tools that help us deal with that.
B: We definitely need more; we always need more people involved, especially in those areas where, most of the time, there are no sparkles, it's just maintaining that code. You know, a friend of mine, a Node.js collaborator, Mathias Buus, says that maintaining Node streams is like playing whack-a-mole: every time you squash a bug, one more pops up, and you're basically trying to keep them all down at the same time. It's hard because of backward-compatibility needs, and we are slowly making things a little bit more coherent, with every release trying to break as little as possible at any single step and hopefully providing incremental improvements. So it's a long process, and we are trying to work with module authors and maintainers to help them evolve their modules and proactively fix them, because with canary testing of the ecosystem we can detect that, for example, we're going to break Express, or something like that. So we can go ahead and say: look, we're going to break this, so I'm sending you a pull request right now, so it will pass when the new Node version comes out. And we've done this several times already.
D: I think one other thing, speaking to whether we could do better: something that has worked for some modules in Node is just code documentation, documenting edge cases and why certain things are done the way they are. In HTTP and streams, that sometimes isn't quite the case. So in general, if people are trying to contribute: if you're browsing through the code and you work out why something is the way it is, and it isn't documented, I think a PR that just adds comments helps.
C: No, no, it's not an issue. I think that really just reflects what Matteo said earlier, and your first question: Node.js has become a lot more mature. I know our contribution rate, and the rate at which we add new collaborators, is maybe, I don't know, 70 or 80 percent of what it was two years ago or so. So it's slowed down a bit, but that's really just a sign that, yeah, Node is mature, and maybe there's less low-hanging fruit.
A: With so much stuff coming in, I agree 100% that it's really just that we're reaching a certain point in our maturity, and we need to continue to focus on making it as easy as possible for people to join; but it is going to get a little harder, and with so much going on, it's not necessarily an issue.
E: Who knows how many times... We have rough figures: the maintainer of leveldown did a great job recently compiling all the native add-ons and breaking them down by which ones have moved to N-API and which ones haven't. That's a great help for us. And in the new year, when Node 8 goes out of maintenance, we have some fairly big modules, like, I think, node-sass, that are sort of poised to move to N-API and have just been waiting for Node 8 to sort of drop off the map. So that's going to add, I suspect, a few more downloads a week to our tally. So, yeah, N-API adoption is going along well, and I've actually heard from Matteo that in one case (yeah, leveldown) they actually had a performance boost by moving from NAN to N-API. So that's great to hear.
A: Because of all the value that the ecosystem is delivering, there does seem to be some discussion about, like, how do we get funding, how do we get support? I think we have a few things we're working on there, through that package maintenance effort, in terms of helping the ecosystem; but it still seems to be pretty healthy, and, you know, it's growing and doing well.
D: Yes: ES modules. I mean, I come from a front-end background, and I don't really do that anymore, but ES modules have been around forever with tools like Babel and TypeScript, and I think a lot of people are familiar with them in some way. But they're now unflagged in Node, and they're a little different than you might have been used to from the previous implementations. So I think it'd be fun if people played around with them and gave their feedback; I know the modules team is always looking for feedback.
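As a quick, hypothetical sketch of the syntax difference: ES modules use `export`/`import` instead of `module.exports`/`require`, and a CommonJS file can still load an ES module via dynamic `import()` (supported in newer Node releases). The inline `data:` URL below is purely illustrative, to keep the example in one file; normally the module would live in its own `.mjs` file or a `"type": "module"` package.

```javascript
// ESM source exports bindings with `export` instead of module.exports.
// It is inlined as a data: URL here only so the sketch is self-contained.
const src = 'export const add = (a, b) => a + b;';

// Dynamic import() works from CommonJS too, and returns a promise
// resolving to the module's namespace object.
import('data:text/javascript,' + encodeURIComponent(src))
  .then((mod) => {
    console.log(mod.add(2, 3)); // 5
  });
```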
C: I don't think it's actually new at this point anymore, but workers are still something that I'm excited about and proud of, to be honest; I put a lot of work into making that happen. And it's only been officially stable since, like, September or August, I think, something like that. But there haven't been many changes up until that point, so it's almost been stable, de facto, for a year or so. And people are starting to use it, and people are actually starting to use it for what we intended people to use it for. When we started working on workers, we were kind of worried that people might use it to basically do the same thing as cluster: put I/O onto multiple threads and try to do multi-threading the classical way, the way you did it before Node.
A: That was definitely one of the ones I'm really excited about, and just to see all the different ways people are going to use it. I think, hopefully, it gives Node some other use cases that people are going to be able to use it for; that's really important on that front. Gabriel, how about you? What's your favorite?
E: It will surprise you greatly: it's N-API. But, I mean, N-API isn't a new feature, right; I think it's been around and stable since last year. This year, though, we've added some stuff to it that actually does a lot to support the new workers, because something really strange has happened because of worker threads: native add-ons, and N-API otherwise, were basically singletons before worker threads.
E: The Node process starts, sooner or later the application loads some native add-ons, and then the application runs quote-unquote forever, and then the process quits. But the native add-ons had absolutely no motivation to clean up, because, you know, the kernel's going to clean up the process anyway. So why clean up?
E: There's not that much reason. I mean, if you have things like database handles going in and out, you might want to clean those up, but there's a lot of static module data that's just not necessary to clean up, because it's like: make it global and static and call it a day. All of that changes with workers, because now you essentially have Node instances going on and off, on and off, on and off.
E: You know, running what is potentially an entire application, with who knows how many native add-ons, and then this thread quits, but the process is still there. So there's nobody cleaning up that memory; the kernel is still waiting for the process to quit, and it doesn't quit. So what had been global singletons had to become essentially self-contained and properly lifecycle-managed objects, and that's a huge shift, because we're talking about like thirty percent of the ecosystem that now, all of a sudden: whoa, Nelly.
E: Now this thing's got static data, and now I all of a sudden have to move it, because it has to be thread-safe and it has to be cleaned up. And all my references, all my wrapped objects: what happens if they don't get garbage-collected before the environment quits? And so a lot of the features for N-API 4 and 5, which we released since last year, are about that.
E: Yes, you have a pointer now, no more static data, and you have a way to free it. But you would still have had to sort of thread it through all your async workers and thread-safe functions and all your bindings and stuff like that; with instance data, you don't need to thread anything, you can just get it very cheaply from the environment, and it's going to be unique to your module, to your module instance, in fact. So if you have three instances of the module, each one gets a different one.
E: So these are some of the features for N-API. And then I mentioned thread-safe functions in passing; that's another one that was really in demand, because people had their own native libraries out there that were doing threading and so forth, and they tried to write bindings for Node.js to expose all of this to people who write JavaScript.
E: And the first, well, no, the second thing they encounter is, you know: I can't call into the engine from another thread. So how do I make that easy? Basically, a thread-safe function bundles together a bunch of threading utilities provided by libuv to give you the abstraction of just making a function call into JavaScript as if it were on the same thread, but now it's okay, because it's thread-safe.
B: Something you probably know already to some extent, so it's not news; it's still not new. It's async iterators. I did a big talk yesterday about that, so, I don't know, maybe you watched it; if not, it's on YouTube. It solves many problems that people are facing with using streams in Node.js for a lot of cases, and you should probably know more about this new primitive that is available in the language. Consider that from January, all active LTS lines of Node.js will have async iterators. So you can actually ship async iterators, use async iterators, on all supported lines, so there's no real reason not to use them everywhere, to some extent. So it's pretty cool.
B: Another thing that has been happening in the last while is that we are making some progress on unhandled-rejection problems. I don't know how many of you are familiar with unhandled rejections; hopefully you are. I'll just summarize the core part of this, because it's a very long topic, a very long discussion: we're talking about a hundred-plus comments on the GitHub issues every single time this thing has been opened, or, you know, an order of magnitude beyond that. Yeah, it's like a thousand comments, a thousand comments on every single GitHub issue.
B: The key part there is: what happens when your promise rejects and there is nobody listening for it, nobody attaching a handler? What should be the default behavior, how should Node behave, and so on and so forth? There has been talk about a flag in Node to introduce a strict mode, which will actually make Node crash, which I personally think is the right thing to do, and a lot of other people do too, but not everybody. So, you know, long story short, all of that has been making some progress. I've recently landed a PR that I'd been working on for six months (you know, when you land a PR you've been working on for six months, you're actually quite happy), and that basically allows you to capture the rejections that happen inside event emitters and have them actually do the right thing: for example, destroy a stream, or close an HTTP request, and so on and so forth. So it might get a little bit safer not to crash on unhandled rejections by default, but you should really know about them.
A: It includes full ICU, so it includes the full ICU data by default. Before, if you used the ICU APIs, by default you would only get English, and you would have to take extra steps to be able to get the data for those other languages. Now it comes bundled in, and all the data for all the languages is there.
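A quick way to see the difference (a sketch; the exact output depends on the ICU data your Node build ships with):

```javascript
// With full ICU data bundled, formatting a date in Spanish yields a
// Spanish month name; with English-only (small-icu) data it silently
// falls back to English.
const fmt = new Intl.DateTimeFormat('es', { month: 'long', timeZone: 'UTC' });
const month = fmt.format(new Date(Date.UTC(2019, 11, 25)));
console.log(month); // "diciembre" with full ICU, "December" without
```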
B: So the thing is, this is the reason there's no such thing as "done" in software, okay? There are always bugs, always new features; the JavaScript language itself is evolving, and it's evolving rapidly, so there are new features being added, new paradigms, and so on and so forth. I think one of the key topics for the next few years will be, to some extent, to try to reconcile Node.js with the larger front-end and web community. There are some quirks in making code that can run on both Node and the browser; for me, that is a challenge, and a lot of focus will go there. And on top of that, there are new things happening in the ecosystem. So, for example, there is QUIC, and HTTP/3 is going to happen; it's happening already, all over the industry.
B: Overall, Node.js is doing well, to some extent, but there's a ton of work. So if you want to get involved: there are plenty of open issues and 300 open pull requests. However, I think with the addition of workers we have probably tackled most of the biggest complaints about Node. There are also lots of things that can be done, for example, to improve performance on serverless environments, reducing cold start and other things that need to be done inside the internals.
C: Yeah, so one thing that we always have to do is keep up with the language. Like Matteo said, it's not just that the language is evolving and we ship new V8 versions and get new features; we also try to keep integrating them with Node.js, for example, async-iterator support in streams, stuff like that. That's going to keep happening. And there are still some open questions, like, for example: what exactly do we do with private properties when we're inspecting objects with console.log?
E: Coming from the hardware side, what I'm seeing is that nowadays, in the whole industry, we have a couple of very well-established algorithms: we have image processing, we have AI, we have compression, encryption, and hashing, these kinds of things. And the CPU is not always the only thing that they run on; increasingly, there are all kinds of specialized hardware. There are basically chips out there that do one thing, and do that one thing well. I mean, the GPU is generic in the sense that it can do a few things, not as many as a CPU, but that's just an example; there are FPGAs out there, there are specific chips just for AI. And especially for these standard algorithms, Node has them all: we have OpenSSL, we have zlib, you know, these things. So why wouldn't you want to integrate that?
E: You know, compression that's five times as fast as what zlib can do right now in Node.js, if there is hardware out there that can do it, or if there is a better implementation out there. So basically, what I'm seeing is that there's this heterogeneous computing environment slowly making its way through different cloud service providers and so forth, and the runtimes, and all the software that's running on them, Node.js being a major one.
E: They could benefit from this, but it takes a lot of integration work, and some of the ways in which these capabilities can be accessed are fundamentally different from just, you know, "you call the function and it does its thing really well." Some of these things are asynchronous by default, so you no longer need to shove them off onto a thread to make them asynchronous, but that's a completely different paradigm.
E: So integration is not always easy, and, you know, figuring out: can I do this, do I have the hardware for it, in this process? You know, did you wake up on a machine that has it, versus a machine that doesn't? Is it architecture-specific, is it platform-specific, what is it? And you've got to do all this at runtime, without making too many checks that decrease your startup performance.
E: This is a little fuzzier than specific features that need to land and so forth, but I think it's a trend, and I'm personally very interested in how this is going to play out, and how we're going to always make the best of the hardware that we run on and find the features that are available. That's how I think about it.
F: Hi guys, I'm Jamie. Michael talked yesterday about the new maintenance process that you're putting into place, or the community's putting into place, and I'm curious: how does that change the TSC? I mean, maybe with all those new chips this isn't true, but if there's some kind of asymptotic drop-off after a while and things stabilize, you have, you know, fewer feature requests and more people saying "help." I would think that kind of shift prioritizes the maintenance process, presumably, and I'm just wondering: would there always be a TSC, and would it always be different from the maintenance people, or does it change over time, or who knows? How do you deal with that maturity in terms of this process?
A: Oh yeah, that's right, I don't have the mic, sorry. The maintenance process he's talking about is the work in the package maintenance group, to try and figure out how we work with the overall community to make things better for maintainers. I personally see that as something with some good synergy with Node, but it's kind of still its own thing; so I don't think, personally, that's going to affect the TSC, necessarily. It's something where I think it's good to have the input, and sort of the attention, of the TSC members to help move it forward, but I don't think there's going to be a direct impact on that front. The TSC is fairly focused on, you know, Node and the features that are related to that. Obviously we think the rest of the ecosystem and those pieces are important, but I don't think everything needs to merge into one area, necessarily. We have strategic initiatives, so it could fit into something like that, where it's one of the areas where we have a champion who pushes it forward; but we have lots of those different things, and, as everybody on the panel mentioned, there's lots going on and lots happening. So, you know, I think we'll always have those different things.
E: And, you know, we work very closely with all the working groups that we have in our organization, and I'm fairly certain that there isn't even one where we don't have at least one TSC member. So between us, we basically try to keep abreast of all the stuff that's going on, and it is of immediate impact to us. Like, personally, even if I don't participate in all the discussions and stuff, I read most of the threads; so it'll set off alarm bells, positive ones or negative ones, if I see anything that catches my eye, and I think that's true for all of us. These working groups are not working in a vacuum; as far as I can tell, it's a fairly cohesive project, and it is in our best interest to keep the ecosystem and the core aligned as much as possible, especially now that we are such a mature project and there is, like, you know, real money (excuse me) riding on us. So, you know, it's in our best interest to do this.