From YouTube: YUI Open RoundTable 02/28/2013
A
Good day, and welcome to the YUI Open Roundtable. It's February 28th, this month is almost over, and this is our last open roundtable for the month. We have some interesting questions on the topics table. Before we get into the topics for discussion, I want to ask the whole group to see if anybody had anything down for...
B
We're seeing it now, okay. So let me think here. The project that I've been working on for the last three weeks or so has been performance benchmarking tools for use within YUI. Each week I've kind of gone through and...
B
Let's see. If we go back to the first week, that was when, using the current tools that we have, I developed benchmarking profiles, or a benchmarking test, for those. My plan was to then also develop an HTTP server, similar to what we have for Grover and some of the other tools. So I demonstrated that, and then the next week I got some feedback from the rest of the team.
B
The feedback was that Yeti integration would be awesome, so I worked with Reid to get that integrated. And then, as opposed to only running your benchmarking tests against the HEAD ref in your yui3 repo, I expanded that to be more configurable, so you can pass in an array of refs to compare against. Then you can see how your performance has increased, decreased, or stayed the same over time. Then, last week...
B
Well, it runs many iterations as part of its process of developing statistical significance in the return value, because the current state of the machine can certainly affect browser performance. Adding iteration support means not relying on only one Benchmark.js result, but on 5 or 20 or 100, or however many. So I showed that off, and then also talked a little bit about developing a yogi plugin, because I feel like a command-line tool was a much easier, shorter-term win than developing the ability to display all of these results within...
B
...YUI Charts. Then there's also supporting Benchmark.js suites, and this is stuff that I'll show now: support for benchmark test suites, as opposed to only one test within Benchmark.js. Most of the performance tests we have throughout the source tree right now use suites, and they have four or ten tests per page.
B
And I also did some work to enhance the YUI Benchmark client. I guess I can show that; let me hop over to its command line. So yeah, you just run yogi benchmark and it will actually generate the results for you. I won't do any live demos of it, because it just takes, I don't know, 30 seconds or five minutes, depending on how many iterations you want. But I've gone through and started running this against different tests that we currently have throughout the system.
B
So, like, here's one running against app. Eric, this is running just the one performance test we have in there, in PhantomJS. There's quite a bit of an improvement in speed from 3.6 to 3.8, which is not much of a surprise considering 3.6.0 was the initial release for that.
B
As you can see up here, YUI Benchmark executed against one of the App Framework examples using PhantomJS, and it ran two tests in there. This is basically what the results look like. So, going back over to the yogi plugin, there is one up here where I have quite a bit more; I believe it'll actually show all the browsers and comparisons. So yeah, here's one, for example. This is also running against the App Framework.
B
So you can see, you can just run yogi benchmark, and basically what happens there is: I directed Safari, Chrome, and Firefox all to the HTTP server that was started up, the Yeti server, and then ran it against 3.6 through HEAD. Now you can actually see the comparison across browsers of the performance for those tests, at least in development.
B
Right now I've opted to display the percentage slowness as opposed to the raw numbers, because the raw number is just operations per second, which isn't really too meaningful if you're casually looking at the numbers and you're not doing really hardcore performance development. So that's part of the development process for supporting the yogi plugin.
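The "percentage slowness" figure described above can be derived directly from raw ops/sec numbers. A minimal sketch, not the tool's actual code (the function and field names here are illustrative):

```javascript
// Convert raw Benchmark.js-style ops/sec results into "percent slower
// than the fastest" figures, which are easier to scan than raw numbers.
function percentSlower(results) {
    var fastest = Math.max.apply(null, results.map(function (r) {
        return r.opsPerSec;
    }));

    return results.map(function (r) {
        return {
            name: r.name,
            // 0% for the fastest entry; half the speed => 50% slower.
            percentSlower: Math.round((1 - r.opsPerSec / fastest) * 100)
        };
    });
}

var report = percentSlower([
    { name: 'v3.6.0', opsPerSec: 500 },
    { name: 'v3.8.0', opsPerSec: 1000 }
]);
// v3.8.0 is the fastest (0% slower); v3.6.0 runs at half its speed (50% slower).
```

This mirrors the trade-off mentioned in the discussion: a relative figure reads at a glance, while raw ops/sec only means something to someone doing focused performance work.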
B
It also required a little bit more work to export the data in somewhat friendlier formats, because basically one process is talking to the YUI Benchmark process that's generating the results, so it's useful to have at least one other process able to talk to it. That needs a little bit more development. So yeah, that's kind of the current state of it, I believe.
B
Last week I ended by saying there are three things we need to target. One was more visually useful functionality, meaning charts and graphics, but I'm punting that to a little later down the road, because it actually requires quite a bit more experience using this type of stuff and figuring out how...
B
...we want to interact with the data. So the immediate to-dos were the yogi plugin as well as CI integration. The yogi plugin is certainly not complete, but it's functional, it works, and it kind of does the basics. So now I believe the next step will be actual CI integration: executing this functionality against all of the other performance tests that we currently have.
B
In the system so far I've done scrollview, base, and app, and they are basically all working and functional. Surprises? Not really, nothing too surprising, though I think maybe the authors of these performance benchmark tests will find some surprises hidden in there. We've been doing all of this work all along, improving performance, but it's been up to the individual developers, using tools like JSPerf and whatever other tools they have, to aid them in that process.
B
But we haven't had any metrics on how much we're improving from release to release to release, at least not stored anywhere. So it helps to go through and look at stuff like this. I think somewhere in here I have base... no, there's more app stuff... yeah, somewhere in here is base. I feel like Satyam might find some of that interesting, because base got quite a bit of an improvement in speed from 3.6 down to its current state. So yeah.
B
Oh yeah, so here was base: basically, from 3.6 to 3.8 there's something like a fifty percent speed improvement. But I mean, there could be some really logical reasons for that.
B
Like, I know we've been breaking some of these modules out into smaller submodules, so maybe that took some of the functionality out, and when you add the submodules back in it adds up. In terms of using it against any other libraries, I haven't yet, though the capability is certainly there. So I guess it's worth showing what the test file actually looks like. So here's the app example that I was showing test results against. It's still using Benchmark.js in the exact same way.
B
The only difference is simply that, instead of doing what I believe before looked something like var suite = new Benchmark.Suite, within this tool we actually have a factory that will go through and create the instance for you. That's because of the way it integrates with Yeti: it essentially does everything for you.
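The factory idea can be sketched in plain JavaScript. This is not the actual YUI Benchmark client; it only illustrates the pattern being described: instead of calling new Benchmark.Suite yourself, you hand a constructor to a factory, which creates the instance and wires up the "complete" listener that reports results. The names createSuiteFactory, StubSuite, and reportToYeti are all hypothetical.

```javascript
// Hypothetical sketch of the factory pattern described above: the test
// author never wires up result reporting; the factory does it for them.
function createSuiteFactory(reportToYeti) {
    return function (SuiteCtor, name) {
        var suite = new SuiteCtor(name);
        // Benchmark.js suites emit a 'complete' event once every test in
        // the suite has finished; relay the results to the Yeti server.
        suite.on('complete', function () {
            reportToYeti({ name: name, results: this.results });
        });
        return suite;
    };
}

// Minimal stand-in for Benchmark.Suite so the sketch is runnable here.
function StubSuite(name) {
    this.name = name;
    this.listeners = {};
    this.results = [];
}
StubSuite.prototype.on = function (evt, fn) {
    (this.listeners[evt] = this.listeners[evt] || []).push(fn);
    return this;
};
StubSuite.prototype.emit = function (evt) {
    (this.listeners[evt] || []).forEach(function (fn) { fn.call(this); }, this);
};

var reported = [];
var makeSuite = createSuiteFactory(function (payload) { reported.push(payload); });

var suite = makeSuite(StubSuite, 'app-benchmarks');
suite.results.push({ test: 'render', opsPerSec: 1200 });
suite.emit('complete');
// reported now holds one payload for the 'app-benchmarks' suite.
```

The point of the design, as described in the discussion, is that reporting back to Yeti happens automatically on completion, so test files stay identical to plain Benchmark.js usage apart from how the suite is created.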
B
If
you
don't
have
to
worry
about
sin
from
within
your
within
your
test,
you
don't
have
to
worry
about
you
coding,
something
than
to
send
those
results
off,
because
all
that's
happening
is
there's
a
wideout
benchmark
or
yeah.
Yui
bench
benchmark
is
listening
in
on
the
complete
event
for
benchmark
Jas
and
then,
when
it
gets
that
completing
event,
it
will
send
it
off
to
send
it
back
up
to
the
Yeti
server.
So
yeah
this
little
bit
right
here,
I've
gone
through
a
number
of
different
iterations
and
ways
to
do
it.
B
I
think
I'm
kind
of
liking,
this
one
more
than
some
of
the
other
ones,
and
then
so
that
is
you
see,
is
a
white
up
benchmark
factory.
If
I
look
into
why
you
a
which
is
here,
we.
B
Cool. Yeah, I'm totally open to feedback on some of this stuff. I actually just recoded this client, or this module, which is essentially the benchmark client that you drop into the page; I recoded it about two hours ago, so this whole factory concept is a little bit new. But basically, this will go through when you pass in the constructor: by passing in a Benchmark.js constructor...
B
...it will attach the complete event, or rather attach a complete event listener, and when that complete event fires, it sends off the results. And then down here is really just how the YUI Benchmark stuff works. Is there anywhere anyone can try it out right now and give feedback? No. Actually, in order to get it, I would have to double check, but in order to publish this as a new project, it will have to go through the, yeah...
B
I'll
just
have
to
get
approval
for
open
sourcing,
which
won't
be
an
issue,
but
just
the
steps
of
going
through
that
process.
I
haven't
done
yet
for
anybody.
Internally,
yeah
it's
on
my
the
github.
My
internal
github
profile
that's
on
there,
so
well
yeah,
china,
they
so
there's
one
v.
B
So much of this is, not redundant exactly, but so much of it is just boilerplate code that we are writing over and over and over, and that's where I feel like a tool like this can really help out. So one of the things we'll have to do is, essentially, instead of your HTML file being a full document, a full self-executing document, just take out the meat of what it is that you're testing.
B
Basically, your benchmark test will be dropped down into the body and replaced in there. There are a number of reasons why it's much, much easier to do it this way, as opposed to the full self-executing document like we currently have with YUI Test. One of those reasons: because we're running against multiple versions of YUI, we have to populate the document with the seed, the YUI seed from different locations, every single time we execute these tests. So basically...
B
...it's a different document that we're testing against each time. Whereas with all of our YUI-based unit tests, nothing changes about the HTML document; it's always the same. But with these, the content of the document is changing slightly, and we want to execute it against multiple versions of YUI. Plus...
B
...instead of a full document, you just have the script tags, and then basically you just know that whatever you write in here works. You can even have markup: the scrollview test, or one of the scrollview tests, has a bunch of HTML up here, and that's totally fine. So yeah, you can put HTML and CSS and everything up there, but this eliminates all of the boilerplate code that you'd otherwise have to have.
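The boilerplate elimination described here amounts to a templating step: the author writes only the fragment (markup plus the benchmark script), and the tool stamps out one full HTML document per YUI seed, which is what lets one fragment run against many versions of the library. A hypothetical sketch; buildTestDocuments and the template shape are assumptions, not the tool's real output:

```javascript
// Wrap a bare test fragment into a full HTML document, once per YUI
// seed URL, so the same fragment runs against multiple YUI versions.
function buildTestDocuments(fragment, seedUrls) {
    return seedUrls.map(function (seedUrl) {
        return [
            '<!DOCTYPE html>',
            '<html><head>',
            '<script src="' + seedUrl + '"><\/script>',
            '</head><body>',
            fragment, // the author's markup + benchmark test goes here
            '</body></html>'
        ].join('\n');
    });
}

var docs = buildTestDocuments(
    '<div id="scrollview"></div>',
    [
        'http://yui.yahooapis.com/3.6.0/build/yui/yui-min.js',
        'http://yui.yahooapis.com/3.8.0/build/yui/yui-min.js'
    ]
);
// Two documents, identical except for the YUI seed each one loads.
```

This captures why a self-executing document does not fit the use case: the document is not fixed, because the seed it loads changes on every run.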
A
Another benefit is if there is a change in the test harness format?
B
Exactly, exactly. Oh, and another big benefit: if we are going to have them all as self-executing full documents, we're going to have to include Benchmark.js in the YUI source tree, or require everybody to pull it from essentially the same spot, and we're going to have to include a new benchmarking component in the YUI source tree. I'm not saying that neither of those belongs in the source tree; I certainly could see the benchmarking component being added.
B
I did actually see that, and a definitely valuable conversation occurred in there, though some of it is longer-term thinking. It's a very tempting conversation, but at least in terms of the things that I'm working on right this second: I feel like a lot of the discussion was around maintaining consistency across...
B
...I don't know, basically, whenever you're gathering those metrics, making sure that you can use them against other, similarly derived metrics in the future, and ensuring that machine states are similar. But honestly, I feel like if we get to the point where we can increase the speed at which we can gather all these tests, by being able to execute the tests against all previous versions of the library...
B
...we can then run all of the tests at their HEAD state against all previous versions of the library, and at that point you don't need to worry about maintaining stability and ensuring consistency in testing environments, because you're doing all the testing at the same time, all within, I don't know, a ten-minute or an hour window of each other.
B
Yeah, I mean, this is all a learning process that we'll go through as we use it, but I feel like that might actually solve that problem. I'm not sure if I totally answered what you were getting at, but that was at least my response to the conversation that occurred on yui-contrib. Okay.
B
So we'll definitely want to do things outside of, I guess, the Benchmark.js things. Yeah, memory consumption would be really useful to have, and that's why I feel like having a separate YUI Benchmark tool will be pretty valuable: we can actually do some of that stuff. Cool.
A
So keep us posted on progress. Okay, the next item up: Tilo brought up the topic of whether our touch and mouse inputs are exclusive. So Tilo, I think you have some background on it? Yeah.
E
Sure. So I have been kind of in touch with some people from Chrome, and basically I just had a question for you guys. There are all these devices now coming onto the market which have both touch input and mouse input, like touchscreen laptops, and right now our code in event basically checks if touch exists, and if it does, it doesn't look at mouse input at all.
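The either/or detection being described, and why it fails on hybrid devices, can be sketched like this. pickInputEvents is illustrative, not YUI's actual code; the window-like object is passed in as a parameter so the logic is easy to exercise:

```javascript
// Either/or input detection of the kind described above: if the device
// is touch-capable, mouse events are never bound at all, which breaks
// touchscreen laptops where the user may use both screen and mouse.
function pickInputEvents(win) {
    if ('ontouchstart' in win) {
        return ['touchstart', 'touchmove', 'touchend'];
    }
    return ['mousedown', 'mousemove', 'mouseup'];
}

// On a touchscreen laptop, touch exists, so mouse input is dropped:
var hybridLaptop = { ontouchstart: null };
pickInputEvents(hybridLaptop); // touch events only; mouse is ignored

// A fix along the lines discussed below would bind both sets of events
// and arbitrate between them per gesture, rather than choosing up front.
```

In a browser, the equivalent check would be made against the real window object; the stub here just stands in for it.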
E
So for some of our gestures, gesturemove or flick or something like that, if you're on one of those touch devices, you can do those gestures with your finger on the screen, but you can't necessarily do them with your mouse. So we were looking into how to solve that problem. In Windows 8, there's the MSPointer API that we can use to wrap both input methods, but that doesn't exist on Chrome yet.
E
So the Chrome guys were wondering what we're doing about this, for example, because there are some apps out there that are breaking, and we weren't sure if it's something that gets fixed at the library level or at the app level. The short-term solution that was being proposed was, within these higher-level synthetic events like the gesture events, to listen for both under the hood, and whichever one we hear first, prevent the other one from occurring.
E
So I just wanted to kind of think through that, in case there's any feedback that you guys have, or any gotchas. I haven't really thought about it too much, but I thought I'd just bring it up for discussion.
E
I guess there would be some sort of global notifier that is triggered when you get into one of these situations: either touchstart or mousedown fires, its callback runs, and that stops the other callback from firing after it, if both events are fired by the browser.
E
I think that seems like... it just feels like when you're synthetically triggering that stuff, it could lead to more edge cases than listening for both events and preventing the duplicate, right.
D
Isn't there the pointer event, like the Pointer Events spec or something? Yeah, it seems like we should be tracking that as well, to see, like you're saying: are they going to solve this the way Microsoft did, or is this a library thing, or do we need to provide some interim fix until Chrome deals with this? Yeah.
E
There were some emails going back and forth between the Chrome people and me; I was kind of cc'd on them, and I don't want to speak for any of them, but it seems like Microsoft is trying to drive that spec, and I think we are keeping tabs on it.
E
For me it was like... I was thinking about, I guess, modules such as tap and flick and move, which would need this fix in them. And I'm wondering about, for example, DD: drag and drop on touch pulls in dd-gestures, which under the hood, I believe, listens to just gesturemovestart, but it seems like that would also need a fix to still listen to mouse events.
D
Yeah, okay, cool. And I guess the Chromebook and the Surface are two environments where this problem exists, although the Chromebook doesn't handle it, but the Surface does, through the MSPointer API, I believe. Well...
E
This is very high level; like I said, very high level, and I haven't figured out the implementation details. But right now it seems like, for a lot of the touch-based event modules, like gesture and tap and dd-gestures and stuff like that, the fix is to listen to mouse events as well and have some way to prevent the other callback from firing once the first callback fires.
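The "whichever fires first wins" behavior proposed here can be sketched as a small arbiter. This is not YUI's implementation, just an illustration of the idea: both touch and mouse callbacks stay bound, but once one input type claims a gesture, the other is ignored until the gesture is reset. createInputArbiter and its method names are hypothetical.

```javascript
// First-wins arbitration between touch and mouse callbacks: both are
// bound, but only the first input type to fire gets handled.
function createInputArbiter() {
    var claimedBy = null;
    return {
        // Returns true if this event should be handled, false if the
        // other input type already claimed the current gesture.
        shouldHandle: function (inputType) {
            if (claimedBy === null) {
                claimedBy = inputType;
            }
            return claimedBy === inputType;
        },
        // Call when the gesture ends so the next one can be claimed anew.
        reset: function () {
            claimedBy = null;
        }
    };
}

var arbiter = createInputArbiter();

// A touchstart arrives first; the browser then synthesizes a mousedown:
arbiter.shouldHandle('touch'); // true: touch claims the gesture
arbiter.shouldHandle('mouse'); // false: duplicate mouse event is ignored

arbiter.reset();
arbiter.shouldHandle('mouse'); // true: the next gesture starts with the mouse
```

In a browser, each gesture module's touchstart and mousedown handlers would consult the arbiter before proceeding, which also covers the hybrid-device case raised later (thumb scrolling on screen while the other hand clicks the mouse), provided each gesture is reset when it completes.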
F
I have noticed that, as I've been using this Pixel, I've actually rested my hand next to the monitor and used my thumb to scroll, but I would still click on something with the mouse, because my other hand was right next to it while my thumb was on the screen. So that is a use case to test against: make sure that the touch event from somebody scrolling doesn't interact with something else.
D
This is something we should look at, to see if the spec defines the behavior for this ordering issue, in terms of which one to listen to, or whether you do both things. If you're trying to drag something with the mouse and your finger at the same time, on the same element, hopefully the spec has enough detail to describe what should happen. And then you're talking...
C
Sure. Sorry, it's a bit of a splurge of stuff, so there may not be much discussion; they're just questions, things that have occurred to me over some time. First: having some sense, beyond the next sprint or two, of what is in mind for YUI would, I think, be quite a good thing to publicize.
C
If it exists. And I guess there are maybe commercial issues about what you can say and what you can't, but from my perspective, on the edges of the space, it'd be quite nice to be able to see a bit further into the future. Like Satyam said in one of the emails, there's "YUI 4?", and, you know, it'd be useful to see as far as we can into the future, even if maybe it's just one sprint ahead.
C
But if we could see further, I think that'd be helpful. The other two items are about involving the community, and whether there's anything that Yahoo could do to support people working together. It may just be space on the forums, or opportunities in open hours or something similar, or it may not be necessary at all.
B
You know, I am certainly a supporter of a more formalized collective roadmap for everything that's coming up, at least what we can broadcast. As I said, with the new approach of doing these hangouts publicly, we've done demos every week; it seems like some of the CSS stuff...
B
...that's coming up, and the benchmarking stuff. So I feel like that's one outlet where everybody can at least get a little bit of a taste of the things that are currently coming up. So for things that are vocalized in the hangout, more transparency certainly will be beneficial, yeah.
D
Like, we said this is a priority, so we have to still do it. For example, some of the performance stuff, right? Some of these things you can kick down the road a little bit, but then eventually it's like: no, we said this is a high enough priority, we need to be doing this. And so there's been some really good work going on with, like, attribute and base performance, for example. But to be able to forecast that and, in a sense, let people know...
A
People will look through the YUI wiki and stuff like that and try to find these things, and then they'll ask, and then they'll find out that the wiki's out of date, or, you know, "oh yeah, we were going to do that, but we had a change of priority." So I think it's tricky, because once you have that out there, it has to stay up to date; otherwise it's, you know, sort of worse than no information. Right, yeah.
B
Yeah, and our priorities change, hopefully not as much as it sometimes seems like, but sometimes we'll be working on one thing and then we'll start working on something else, and if we've committed to something, that makes it a little bit more tricky to do those transitions, I think.
A
We've set up, you know, "unassigned" for bugs that are things we wish we could do but that no one is assigned to. It'd be nice if, beyond setting these priorities, we don't necessarily say there are people behind them, but we say it's a priority for the YUI project; that's a good opportunity for even community members to come in, maybe, and take a lead on something.
D
And of course, for any progress made on that work to be made public, in a way that anyone could essentially pick up from where that person left off. Their personal priorities may change, but still, you know, the project as a whole maintains that thing. But yeah.
D
I think just continuing to have these talks will help, like the thing Tilo just brought up today, which all of a sudden became a priority recently, right? So, just continuing to have these talks, to put these things out on the table and say: hey, this is an important thing.
D
Let's work on it, so that no one's off in their corner working on something, which tends to lead to, like you were saying, you're working on some resize component and so is somebody else, and the two people don't know that they're both doing it. Yeah, we saw the same thing happen with the MVC work, right, where there was a bunch of us all off in our own corners doing something, and then we realized: let's all collaborate in some way, you know, get something in there.
A
That's literally your second question, about supporting collaboration. I know that we have a lot of ad hoc hangouts, but I don't think we really broadcast how we do that. So that might be one thing to do: two people will decide they want to talk, and if they're not in the same room, they'll jump on a hangout. Maybe making it known that other people can just do that might be one way to support collaboration.
C
It might flow from the first: if there's some sense that, in the next six months, these are the things that we're going to be trying to hit, then people pick up on that and say, okay, I'd be interested in that, and that might then flow. It's a kind of gallery of things I'd like to do, rather than a gallery of things I have done.
D
So, one thing that you just said, Andrew: yeah, people who aren't in the same room should have a hangout. But I also think that if you are in the same room, you should maybe think about having one of these hangouts too, if the information that you're talking about is good for everybody to be able to watch later, or to hop on if they're interested. Yeah.
A
Kind of on a related note: one thing I wish about the sort of roundtable that we have is that we always cover what we've done, and we never talk too much about what we could do, or future-type stuff. Maybe this isn't the forum for that, but I would like to have more of those kind of pie-in-the-sky conversations, you know? Yeah, so that can catalyze the collaboration; you might spark an idea having a chat.
A
You know, I have to say, IRC has been one of the places where I've found the collaboration really happens a lot. People will be hanging out in there and they'll start swapping, you know, fiddles and stuff like that, and there's a lot of great collaboration that goes on there.
D
More along this line of coming up with the list of the project's priorities, independent of individual priorities: the idea is that there are some we strive toward, and the ability for someone to essentially get so far with some effort of work and then allow somebody else to take over, or collaborate with them on getting it done. Yeah.
F
See, I didn't realize that you could also mark it so that people could comment and vote on things; there are hidden settings that aren't on by default when you set it public, which allow people to comment or vote on different cards.
A
I wanted to just jump through that; I want to make sure we cover the pull requests and bugs real quick. I don't think the bugs have an issue, but I just want to jump through the pull requests real quick and see if there's anything that's really out of date. Looking at them: 375 is still pending the contributor making a comment or making some additions to it. There's one about skin order, loader enhancements, that is also kind of waiting in the wings.
A
Basically, I'm looking at issues that have not had any kind of activity for more than two weeks, and right now there isn't anything like that. There's one that is on the seven days, which is from Satyam, about adding Spanish language files for those components that use them, but it's not at a point where we need to intervene yet. So I'd just encourage everybody to look it over, from the roundtable topics.
A
So there isn't anything else that's at risk right now. Let me check out the bugs, then, since I haven't killed these bugs yet. One question I had: we assumed that Menu, and with it Focus Manager, are going to be deprecated, but has anybody actually started the official deprecation process for those? Are we waiting for the replacements first?
B
I don't think it hurts to start the deprecation process on them now, because we can leave them as deprecated for as long as we want to, you know. That's at least my thought on it. Does anybody feel like we shouldn't start the deprecation process on these? Well...