From YouTube: Captioning Legislative Websites
Description
This session was held Aug. 6, 2017, at NCSL's 2017 Legislative Summit in Boston.
Increasingly, states are providing captions for the deaf and hearing impaired as part of their live webcasts of legislative proceedings. Learn about the issues concerning accessibility and captioning, including live stream captioning, necessary software, support on mobile devices and browsers and potential impact on remote testimony.
A: Good afternoon, everyone. My name is Jeff Ford; I'm the chief technology officer for Indiana, and this is a discussion on captioning legislative webcasts. I'll be moderating the session today. I do have a couple of brief announcements before we get started, though: our session today is being streamed live on the web with the help of the Ohio Channel.

A: Increasingly, states are providing captions for the deaf and hearing-impaired as part of their live webcasts of legislative proceedings. Today we'll hear about how that's done. We'll look into live stream captioning, archiving of captioned recordings, necessary software, support on mobile devices and browsers, potential impact on remote testimony, using closed captioning for verbatim journals, and many other issues. In Indiana, we started to provide closed captioning services for our live streams in 2015 on a limited basis.

A: Last session, 2017, was the first session where we provided closed captioning support on all of our video streams and all of our archived content. From July 2016 through June of 2017 we provided over 1,159 hours of captioned content. We utilize a third-party service that provides live captioners for all of our streams.

A: They require 48 hours' notice for each meeting, except during conference committees, where we have two captioners ready to provide captioning services at a moment's notice; we basically pay them in advance to be there whether they're captioning or not, so we can pick up those meetings quickly. That was something we actually just added this last year. Once the stream has ended, they provide us with a captioning file. We do some minor editing on that file: we check it for misspelled words and do some basic spell checking, making sure all the names were actually spelled correctly. Then we resync the captioning text with the video and upload it to our website as archived content. So it's a fairly automated process at this point; it doesn't require a whole lot of hands-on effort in Indiana to make this happen, and a lot of that work is handled in the House and Senate chambers by staff.
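As a rough illustration of the kind of post-processing Jeff describes, the sketch below re-times and spot-checks an SRT caption file in Python. It is a minimal, hypothetical example: the file names, the 11-second offset, and the name list are assumptions, not Indiana's actual tooling.

```python
import re
from datetime import timedelta

OFFSET = timedelta(seconds=-11)        # assumed re-sync correction
KNOWN_NAMES = {"Indiana", "Ford"}      # assumed list of proper names to verify against

def shift(ts: str) -> str:
    """Shift one SRT timestamp (HH:MM:SS,mmm) by OFFSET."""
    h, m, rest = ts.split(":")
    s, ms = rest.split(",")
    t = timedelta(hours=int(h), minutes=int(m), seconds=int(s), milliseconds=int(ms)) + OFFSET
    total = max(int(t.total_seconds() * 1000), 0)
    h, rem = divmod(total, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1_000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

text = open("session.srt", encoding="utf-8").read()
# Re-sync: shift every timestamp in the file by the same offset.
text = re.sub(r"\d{2}:\d{2}:\d{2},\d{3}", lambda m: shift(m.group()), text)

# Spot-check: flag capitalized words that are not on the known-names list.
for line in text.splitlines():
    if "-->" in line:
        continue
    for word in re.findall(r"\b[A-Z][a-z]+\b", line):
        if word not in KNOWN_NAMES:
            print("check spelling:", word)

open("session_fixed.srt", "w", encoding="utf-8").write(text)
```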
A: On the other side, we do have a third-party vendor we also work with. It's actually funded by the state of Indiana, and it provides streaming services to the executive, legislative and judicial branches of government. So they provide all of our encoders, our bricks, our streaming servers, and then we just kind of tack everything else on the end of that. We still pay for that on the back end, of course, through our budget, but it is kind of provided for us.
A: Our first speaker today is going to be Paul Peck. Paul is the chief information officer for the Massachusetts legislature and has served in this role for the past five years. Paul previously served as IT operations director for Grand Circle Corporation and as the IT software development manager for Kronos Incorporated. He is responsible for the legislature's technology roadmap and works with the legislature to foster and support their vision and capacity for IT innovation and change. Paul graduated with a BS from UMass Amherst and enjoys running, working with startups, and spending time with his family and friends.
B: Hello and good afternoon, everyone. As Jeff mentioned, my name is Paul Peck; I'm the CIO for the state legislature of Massachusetts. I just want to thank you for coming to this session; I know you have a lot of sessions to choose from, and I want to thank the panelists and the moderator. I'll just be speaking briefly with you today.
B: So today I got asked to open this particular session and sort of go over the bigger picture, specifically as it relates to how Massachusetts is using captioning technology and incorporating it into their legislative webcasts. Before I can do that and answer, sort of, the whys, I think it's important to give a little bit of background as to how this came about, at least in the context of Massachusetts.
B: It was an important agreement that achieved two really important initiatives, or two outcomes. The first of which was a commitment by the legislature to basically adhere to Title II of the Americans with Disabilities Act and, specifically as it relates to me as the head of IT, Section 508, which deals with the websites and the technology related to that. Basically, it allowed them to say, hey, this is a priority for us, and the legislature since then has never looked back.
B: You know, they look at transparency and accessibility as sort of focal points that go hand in hand, and I'll touch upon those a little bit later with an example. The second important outcome of that agreement in 2007 was the establishment of an ADA coordinator position. So here in Massachusetts we're very fortunate to have an actual ADA coordinator; the role of that coordinator is to basically be responsible for all the accessibility initiatives that are available at the State House, and the person that they appointed at that time.
B
So
we
are
again
very
lucky
to
have
that
sort
of
there,
so
so
those
to
the
wise.
We
do.
This
there's
a
very
important
reason,
and
we
really
kind
of
took
it
upon
ourselves
to
make
sure
that
this
is
something
that
we
carry
forward.
So
from
the
past.
Up
to
the
present,
we
basically
have
provided
open
captions
for
all
of
our
live
broadcasts,
as
well
as
on
our
archive
media.
B: So all of our archived video sessions are open captioned, and I'm particularly excited because right at this time, and it sort of coincided with this event, we are actually just about to release something that we're pretty excited about, and that I'll demo to you in a moment. But just to kind of go over the hows and whys of what we're doing right now: as of the moment, live broadcasts are open captioned, and we do that with the help of a CART provider.
B: A lot of you may be familiar with this, but CART stands for Communication Access Real-time Translation and, as the name would suggest, CART enables us to have instant or near-instant translation of the spoken word at an actual event. We work with a CART provider who's been with us for the past ten years, and they basically produce a live caption stream that's made accessible by URL, so via a web address you can actually see that caption stream.
B: So what our video team, our broadcast teams, do is use an encoder to overlay that streaming caption text on top of the video, and that's what you see when you actually watch our live broadcast events. Our archives are available to the public on our site, and they're also available individually as files or as DVDs upon request; when we do provide those as needed, they come with open captions on them.
B: So these are the things that we've actually done in the past up to now, but it was time for a change. A couple of years ago we started this multi-phase initiative to kind of change up what we are doing, and part of it is reusing a little bit of what we already have in place. So, for instance, the CART providers end up providing us with a transcript file, and that's important for us in what we're going to do a little bit later on; that's provided to us.
B: CART for us is typically a service that costs anywhere from 60 to 200 dollars. It's similar to what Jeff mentioned: these are folks who are kind of paid to be ready, and they work with us whenever we have sessions or overruns; these are the folks who provide that service to us. So open captions was something that we had, but closed captions is something that we were trying to aspire to and get to.
B: So a couple of years back we looked at that, and we took the multi-phase approach I was talking about. The first step was really to just look at our website, look at the foundation of it, and see what could be improved before we could even address the captioning aspect. What we came up with came through some feedback forums, SurveyMonkey surveys, hallway conversations with members and staff, and webmaster feedback that was provided via our public site through webmaster email.
B: And, you know, what happens during these actual legislative processes: so, for example, if people were able to search the text of an actual formal session or a committee hearing, they would know who is a champion for or against a particular item, and this was something that we couldn't provide at the time. So we really did it in multiple phases. The first phase was just to revamp the website, and the website basically has a focus around the end user; I can show you quickly what the home page looks like.
B: The focus of the website is literally just to show you; it's sort of like Google, in that the first thing you see, front and center, is a web searchable text box. That search engine is literally there to provide you with a deep search into everything that's on the site. There's also the concept of what we call a mega menu: literally, this menu is there to provide one-click access to pretty much eighty-five, ninety percent of what's on our site. Everything on our site was designed to be just a few clicks away.
B: So with this, in conjunction with the search engine, we thought that we had a pretty good basis for what we wanted to do with captioning going forward. So, fast-forwarding to today, what we've been able to do is actually provide, for the first time, an interactive transcript. Again, this is not something terribly new, but it's new for us.
B: What we've pretty much been able to do is show that if you actually go to our site, and, for example, this is the budget of this year, you'll notice that there are two kinds of archived media. "Archived" is just the regular old archived session that we had in the past, and there's a delineation for an enhanced version; that enhanced version is what I'm going to show you today.
B: So if I were to actually start clicking on this video, one of the things that you'll notice is that there are captions, and the transcript is right below. If I were to go into the actual video itself, it will highlight what they're speaking in blue, so everything I pass will be in blue and everything that's yet to be spoken will follow along in real time. And every one of these words is hyperlinkable.
B
So,
for
example,
if
I
wanted
to
click
on
Braintree
it'll
jump
to
that
portion
of
the
video
and
it'll
allow
that
to
continue
transcripts,
allow
you
to
search
for
different
key
terms.
So,
for
example,
if
I
wanted
to
search
on
a
proposal,
it
will
actually
show
me
time
marks
on
wherever
the
proposal
is.
Actually
the
word
or
term
proposal
is
available.
You
can
search
on
multiple
terms.
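Under the hood, an interactive transcript like the one Paul demos is essentially word-level caption data: a timestamp for every spoken word, which the player uses both for highlighting and for "jump to" links. A toy Python model (the field layout, words, and times are invented for illustration, not the Massachusetts implementation):

```python
# Hypothetical word-timed transcript: one (start_seconds, word) pair per spoken word.
transcript = [
    (12.4, "the"), (12.6, "budget"), (12.9, "proposal"),
    (45.2, "Braintree"), (47.0, "proposal"),
]

def seek_points(term: str):
    """Timestamps where a search term occurs; the player seeks to these offsets."""
    return [start for start, word in transcript if word.lower() == term.lower()]

print(seek_points("proposal"))   # -> [12.9, 47.0]
```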
B: You can put them in quotes to catch phrases. This was a pretty big step forward for us, and it's something that we've developed in conjunction with the IT team. The video is now in high-def for the first time for us, and it will soon be streaming in 1080p. The reason why this is really the next phase for us is because we actually had some... and Jeff alluded to this in the intro.
B: This would potentially replace that. If you were to apply an algorithm to that, you could literally either augment the existing journals or get pretty far into establishing them and automating that for the clerks' offices, so that they can provide that. And all of this is again toward that ultimate, far-reaching goal of transparency: the time to market, the time in which we can get this information out to the public, and the accessibility of it all. Closed captions are finally enabled for us as well.
B: So if I were to full-screen this video, you'll see that we have closed captioning available to us in English. This is actually interesting because you can set different font sizes and you can set the background color. The reason why this is important is, again, for our accessibility users: we want to make sure that this is something that's available to them. And if you don't want the captions, you can turn the captions off, and you can actually see the title graphics and all of the video.
B: It was originally designed to be presented that way. So that's what we are doing currently, and that's what we are striving to continue. For live sessions right now, other than the fact that they will be streamed in HD, the way we're actually rolling this out is as follows.
B: Every future session that happens, we're going to have that, within 24 hours, be an enhanced archive with an interactive transcript, and we're going to slowly work our way back to the very beginning. It's going to take some time, but we're going to work ourselves back to the very beginning. So, in the interest of staying on point and on time for the other panelists, I won't go into great detail as to how we're doing this; we do work with third-party providers.
A: Thanks, Paul. Our next speaker is going to be Bruce Ellsworth. Bruce is the digital media manager of the South Carolina Legislative Services Agency. Bruce began his IT career in the early 90s and for over 20 years worked with Amateur Ace, Inc., a small digital video integration and design-build firm based out of Atlanta, Georgia.

A: In that capacity he worked with a diverse clientele, ranging from individual small business owners to educational institutions and government agencies all over the US, to provide best-of-breed solutions aimed at live video production tools, robotic camera control, cable TV distribution, web streaming and digital communications. In May of 2015 Bruce joined the Legislative Services Agency full-time and now brings his extensive experience with leading-edge multimedia and audio-visual technology to the public sector. Bruce.
D: Excuse me a moment. I'll start with full disclosure: this is not what I do; public speaking is not my forte. You want me to push some buttons or pull some wires, I'm your guy. But I was asked to give this talk today to kind of show you the technical solution that South Carolina has developed, sort of from scratch. We've been a pioneer for some time in web streaming. As part of my research I actually ran across this meme that I found so funny I had to share it.
D: So, like I mentioned, South Carolina has actually been streaming video from our website since the late 90s, 1999, and we've had the House and Senate both available on the website as live streams since 2001. To kind of show you where we are today: you'll notice we had about 11,000 total connections on our peak day, which was April 26th this year, and on an average day we get about 1,500 connections, all served out of our data center. We do not use a content delivery network, and we do not use any third parties to publish our media.
D
This
is
all
served
out
of
our
own
infrastructure
notice.
We
had
about
five
hundred
eighty
two
simultaneous
viewers
on
that
peak
day,
so
this
is
just
kind
of
give
you
an
idea
of
the
scale
that
we're
dealing
with
and
my
focus
today,
even
though
I've
been
asked
to
try
to
keep
this
high-level,
there
are
technical
aspects
to
streaming
and
to
captioning
that
we
have
to
touch
on
just
to
even
be
able
to
discuss
the
subject
intelligently.
D: So if I get out in the weeds, I will try to avoid that and talk about more of the high level. But we did not get there in one day. You can see from these pictures of our data center in 2002 that we had very humble beginnings; my favorite part is the pink chair and the yellow cabinet there. So that was our data center, and today we've actually built out a world-class facility.
D: Everything is racked properly, everything is powered with dual power, we have proper cooling, and we have enough network, storage and compute capacity to handle all of the new demands that we're making on our system with all of the new media that we've been adding. You might notice in the very back some little displays; this is a better image of that, where you can actually see our command and control center for all of our video feeds.
D: We found that this was necessary with the growing number of committees that we were handling simultaneously; we had to have a way to do quality-of-service monitoring for video and audio on all of our feeds. So we're talking today about captioning, but at the basis of captioning, before you can even get there, you have to be able to distribute video to everyone, and there are a lot of different ways that you can do that.
D: The way we've chosen to do it, because we started so early, is we've built a homegrown solution from the ground up; I mean, before anybody was out there trying to do this as an off-the-shelf solution, we were out there actually putting this together. So you can see even as early as 1996 we did have a web presence; again, humble beginnings. It looks kind of like something you'd see from a school website, but again, this is 1996, and this was cutting edge, just being able to watch streaming.
D: This is kind of a timeline of some of the technology milestones, so I'm not going to go into detail about a lot of it. I'll just say that they've won some awards along the way and they've done a lot of things before a lot of other people; we've been kind of a pioneer. So again, 2002: you can see at the top, for the video, you could actually watch the House and Senate, which for the first time allowed people to choose which live feed they wanted to watch.
D
They
were
watching
South
Carolina,
educational
television,
whichever
feed
that
they
they
wanted
to
carry
kind
of
a
later
version
of
the
site,
even
closer
to
what
we
have
today.
Just
improvements
in
search
ability,
stability
and
then,
with
our
most
recent
site,
we
actually
added
on
dashboards
for
the
House
and
Senate,
so
that
we
could
follow
the
chamber
debate.
We
added
social
media
capability,
of
course,
providing
live-streaming
as
we
did
since
2000,
and
then
we
also
added
for
the
first
time
in
2014
the
ability
to
actually
see
captions.
So
we
started
captioning
about
four
years
ago.
D: This is kind of a view of our command and control system. We call this Vencon, our video encoder controller. This was part of the development of the solution that we put together as a custom system. At the very top you can see there's a thumbnail view of the one stream that's going out right now. This is actually tied to our database on our back end, which allows us to schedule jobs for each of the events or meetings that are upcoming, and you can see, on the following days, little thumbnails of the events that are coming up.
D: As a meeting is coming online, the system will automatically figure out when it needs to spool up an encoder. It'll do that about an hour before the broadcast begins so that, if there is a technical issue, there's actually time to either spin up another encoder or figure out what the problem is. But the system is completely automated: it tracks to see if there's a technical issue, it's doing quality-of-service monitoring on its own, and it even generates alerts.
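A highly simplified sketch of the scheduling logic Bruce describes; the one-hour lead time is the figure he quotes, while the meeting data, venue names, and function are invented stand-ins for South Carolina's actual database-driven system:

```python
from datetime import datetime, timedelta

LEAD_TIME = timedelta(hours=1)   # spool an encoder up an hour before the broadcast

# Hypothetical meeting schedule pulled from the back-end database.
meetings = [
    {"venue": "Committee Room 1", "start": datetime(2017, 4, 26, 9, 0)},
    {"venue": "Committee Room 2", "start": datetime(2017, 4, 26, 10, 30)},
]

def needs_encoder(now: datetime):
    """Venues whose encoders should already be running at this moment."""
    return [m["venue"] for m in meetings if m["start"] - LEAD_TIME <= now < m["start"]]

print(needs_encoder(datetime(2017, 4, 26, 8, 30)))   # -> ['Committee Room 1']
```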
D: So it reduces the overall cost of having a solution that can do this many streams at once, because effectively, even though we have 16 different venues where we can have meetings, we really only have about six at max going on concurrently. But we have the capability to do up to 14 at once if we had to; if we stressed the system to the limit, that's how many we could handle with the hardware that we have.
D: If there is a problem with the hardware, the system will automatically detect that, put it in maintenance mode, take it out of the system, and then roll over any existing streams that are still going to the next available encoder. So we're pretty proud of what we've been able to accomplish here, building something completely from scratch and not using third-party developments, and not being tied to and dependent on those third-party developments.
D: They provide a full-blown service for streaming, for captioning, everything. Then there's the Granicus system, which is similar, but possibly not as good. And then you'll notice down at the bottom we have a cloud-based content delivery platform. So if you're starting out and you don't want to have any footprint in your facility at all, you can pretty much start with YouTube and publish out to their platform.
D: You can see it's quite a complex system. So again, we're trying to keep this simple and high level, but there is a lot going on behind the scenes that you do need to be aware of if you're going to try to tackle this. All right, so this is just a top-down description of some of the features of the system that we've designed. We've arrived at this incrementally, so we didn't really start out with this as a plan and try to get there.
D: But we've been adding on features incrementally over time to make the system more robust, with high availability, a lot of automation, and a lot of integration with our existing tools, so that it reduces the impact on our workflow and on our personnel to be able to handle this many feeds going at the same time. So we're getting into more of the technical aspects of the solution now.
D: What I'm showing here is, if you were starting completely from scratch and didn't really want to make a huge investment, you could just go completely third-party and have an external content delivery network provide all of your streams and all of your captions. You can basically start out with just a web encoder, which could be as simple as a laptop with Flash Media Live Encoder on it, to push the feed to them, and then there are other solutions that allow you to add captions to that.
D: So you don't really need any huge investment. The big disadvantage of a system like this is that anybody in your facility who's watching the stream from inside your network is going to cause a lot of loading on your network, because they have to hit this external entity and pull those streams back into your network. So this is why we did not go this route.
D: Our system looks more like this, where everything is on-premise and we have a cluster of machines that handle all of the video traffic, able to handle the number of connections that I showed you on the first slide. You'll notice that the public viewers and the private viewers are all on the same network; however, we have dedicated machines that push things in different directions.
D: The rest of the system is pretty much the same. So, getting into the actual caption ingest, I'm illustrating here one of the methods that you can use to get captions into the Wowza streaming solution. This setup is going directly into Wowza from the caption encoder, so this is using an on-premise encoder that's TV compatible. So if you're also pushing captions to local televisions on a closed-circuit network, or even going out to broadcast, you can do that with a system like this; essentially, the way it works is in your meeting rooms.
D: This one is similar, but it's actually going through the web encoder to provide the caption source, instead of going directly from the caption encoder to the web streaming solution. The video goes out to the caption encoder, the captions get embedded, and then it goes to the web encoder, which then handles video, audio and captions all in one.
D: This requires higher-end encoders, but it's kind of where you'd want to get to if you're enterprise level, where you really want something that's going to work long term with high bandwidth, high availability, things like that. Notice I even put the closed-circuit TVs on there, in case you want to push captions to your closed-circuit system as well. So again, this is all on premise.
D: One of the exciting things that I actually found while doing research for this is something new, just in 2017: the same company that provides the hardware encoders that we use also now has a cloud-based solution. If you're not interested in captions for closed circuit, but you are interested in web captions, you can use a solution like this to keep the captioning portion of the system off premise. This is all low bandwidth.
D: There's really no impact to your network to use a solution like this, and the advantage is that it's completely scalable, because you don't have to buy a new encoder for every channel that you want to add. You literally just tell the software that you want to add another channel, and it does it for you. So all the hardware involved in making this work is handled off-site by the third party, EEG, which is kind of the leader in providing captioning solutions. You can see this from their diagram; this is actually copied from their documentation.
D: So now we get to the why: why we would want to do this, getting away from the technical discussion. This is more about why we should bother to spend the money to accomplish this in the first place. What you'll notice from this table is that, at age 65 and over, almost 30 percent are either hearing impaired or deaf, and this number is going to keep growing as we have more baby boomers aging. This is just illustrating where we're going.
D: The other issue is that captioning is not really just for the hearing impaired and the deaf; there's actually a benefit to everyone. Just like Paul mentioned, the metadata search is a big application. So searchability, convenience, transparency and inclusiveness: those are kind of my buzzwords for why we should be doing this; these are the benefits. There are also a lot of obstacles. Obviously there is the initial cost, the capital investment in equipment, and you do have recurring costs for transcriptionists.
D
You
do
also
need
to
have
generally
more
IT
support
personnel.
They
may
have
to
be
specially
trained
in
the
equipment
that
they're
using
you
may
also.
Instead
of
using
a
third-party
transcription
of
team,
you
may
hire
a
dedicated
personnel
that
handle
the
transcription
for
you
on-site,
so
that's
another
possible
cost
and
then
technology
is
a
big
challenge
because,
as
you
saw
from
the
things
we
saw
earlier,
there
are
a
lot
of
components,
there's
a
lot
of
different
brands
that
have
to
work
together.
Sometimes
that's
a
challenge.
D: Really, why we should do this: empowerment, since doing nothing disenfranchises a sizable population segment, the deaf and hard of hearing, which, as I said, is growing. Compliance, because the FCC, which in the past only regulated programs on broadcast TV, is now also regulating programs that originate on the web only, and that's going to keep increasing as well. And accessibility, because metadata search makes it a lot easier, even for people who are not hard of hearing or deaf, to find the media in multiple mediums.
D: If you can find it in text copy, it's easier to search; if you can find it in video, it's easier to see and hear what's going on. Those are all things that add to openness and transparency. And honestly, the cost, while it can be significant, or seemingly so: for a point of reference, in 2014 wbtw.com noted that just one legislative session day in South Carolina costs taxpayers approximately thirty-four thousand dollars, just to have a legislative session day.
D: So in the grand scheme of things, at that scale, it's not that significant; we definitely should do this. So, in closing, I just wanted to share with you some of the resources that I found. Like I mentioned earlier, EEG is kind of the market leader in captioning; they provide a lot of hardware, a lot of software, and also the iCap transcription service, which allows remote transcriptionists to log in to your captioning system and actually provide captions.
A: Great, thanks, Bruce. Okay, our next speaker today is going to be Sanjeev Menezes. Sanjeev is the CEO and founder of Sliq Media Technologies, which has been serving the legislative, broadcast and entertainment industries with internet-scale cloud streaming solutions since 1999. Sanjeev has been the founder, or on the founding team, of many successful technology companies in the domains of software, telecommunications and fiber optics since 1985. As a well-regarded subject matter expert on media asset management, Sanjeev is regularly called upon to speak at industry conferences and to provide technical and operational expertise to legislative bodies worldwide.
E: Okay, so first I'd like a show of hands of how many people here are technical. Okay, pretty good; that's over half the audience. I've been asked to respond to some very technical issues, and I just don't want to put anyone to sleep here. So, as Bruce mentioned, my company makes a platform for streaming that handles closed captioning and a lot of the technical issues around it, and we solve it in a lot of innovative ways with server-side software, but there are other ways to do it as well.
E: I'm going to describe some of the issues that are commonly faced, and this is not going to be a product pitch; if you have more technical questions, you can come and see me after, or in the Q&A. So, first off, these are the three issues that I'm going to deal with today. First is re-trimming closed captions with video after an event: for example, if it's rescheduled, are you going to trim a little bit? You have a problem re-trimming it after an event. Next is the policy on editing captions for archive files.
E: How do you do this efficiently? And finally, what's the policy on archived webcasts with transcripts: so not just the captions, but the actual transcripts, and possibly verbatim transcripts. I'm going to start by defining the difference between subtitles and closed captions, and it makes a minor difference, especially in the editing. Subtitles are dialog only, who spoke and what they said; closed captions include a little bit of context.
E: So, on to my first subject: re-trimming closed captions within a video archive file. Why is this important, and why is this a problem? What we often see is that some of our customers have built homegrown solutions, or some other commercial solutions use off-the-shelf encoders or off-the-shelf media servers such as Wowza or an Elemental server or whatever, and it creates an archive file of your webcast while you're webcasting; regardless of how you got your captions into it, it's all bundled together.
E
So
the
audio/video
captions
are
all
interleaved
together
into
a
single
and
PEG
for
file,
and
if
you
want
to
lop
off
30
seconds
or
80
seconds
at
the
beginning
and
regenerate
that
file.
Well,
there's
no
simple
way
to
do
it,
or
at
least
no
simple
way
to
do
it
in
volume.
So,
first
off,
let's
talk
about
the
workflow
of
doing
it.
There,
the
caption
generation
it
could
be
manual
or
computer,
assisted
how
the
captions
actually
get
into
the
stream
gets
inserted
into
the
video
stream
in
a
number
of
ways.
E: Finally, there's the capture by the encoder streaming it, and treating the closed captions as metadata, which is quite important as well if you want to do searching after the fact: if you want Google to index the metadata as a searchable video, or if you want to do something innovative like Paul in Massachusetts is doing, which is more or less video karaoke, pardon me, where it follows along.
E: So these are the workflows; you've seen some hints on how they're done from the other speakers, and you'll see some more on what happens. The challenge is that you end up with a single monolithic file, possibly with the captions inserted into it; if you're lucky you get them in a sidecar file, as it's called, so you get two files, with the captions and timestamps in them. If you're really lucky, you've got them into a database where they're searchable by your own website search engine.
E: And you've published them in a format that's Google friendly: a sitemap XML, time-stamped, all that kind of stuff. Most likely, though, depending on your tool chain, you just have this MPEG-4 file, this MP4 file with the closed captions inside of it. The problem with that is that the MP4 file is not time-of-day; it starts at zero, and so every little fragment in there, including your timestamps for the closed captions, is an offset from zero. So if your meeting was at 2:04 and 13 seconds and you took off 11 seconds, there's all kinds of math that has to happen to re-edit that file.
E: So this is complex. Let me first describe to you how we do it; it's quite different from this monolithic file. We store everything when we ingest it, and we have our own encoder that we've written from scratch. We store everything much like an Adobe Premiere or iMovie timeline, where for every fragment of video, every two seconds, we're storing a little slice of it, in multiple resolutions.
E
So
if
you
want
a
very
high
resolution
for
a
long
term
archive
a
streaming
resolution,
you
know
a
couple
of
streaming
resolutions
a
proxy
edit
resolution
so
for
sharing
on
social
media,
multiple
audio
tracks,
multiple
closed
captioning
tracks,
metadata
such
as
who
spoke
bill,
IDs
all
that
kind
of
stuff.
We
store
this
on
a
timeline
separately
with
actual
time
of
day,
not
zero
awesome.
So
what
we
have
is
called
a
ring
buffer.
E: It's a loop recording buffer with multiple tracks, from which, when we assemble an archive, we just cut out the part we want; we make a composition. If you've used a video editor, even just making your vacation video, you put all kinds of things on a timeline and you have a render command; the render command creates the output file for you. We have the equivalent of that. Now, we've invested many, many man-years into this; you probably don't have the equivalent of that.
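A toy model of the composition idea, assuming invented segment times and the two-second slice length Sanjeev mentions (this is not Sliq's actual data layout): if every media fragment is stored against real time of day, producing an archive is just selecting the fragments that fall inside the meeting window.

```python
from datetime import datetime, timedelta

first = datetime(2017, 8, 6, 14, 4, 13)
# Toy ring buffer: time of day -> two-second fragment (a label standing in for media bytes).
ring = {first + timedelta(seconds=2 * i): f"fragment-{i}" for i in range(1800)}

def compose(start: datetime, end: datetime):
    """Assemble an archive by selecting the fragments inside the requested window."""
    return [frag for t, frag in sorted(ring.items()) if start <= t < end]

clip = compose(datetime(2017, 8, 6, 14, 10), datetime(2017, 8, 6, 14, 20))
print(len(clip))   # -> 300 two-second fragments, i.e. a ten-minute composition
```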
E: So, you know, this is how we do it: all captions are timestamped word by word; captions are separate from the media files; and the media files themselves are separated into different video tracks by resolution and different audio tracks. We have customers who have 8 or 10 audio channels; for example, the International Maritime Organization, and a few others, use many translators. And when we generate our output, we're able to generate multiple outputs in multiple formats: SRT files, WebVTT files.
E: Does anyone know what those are? They are text files with the captions and timestamps within them; and we also generate multiple MPEG-4 files, all from the same ring buffer, for the same period of time.
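For anyone who has not seen one, a WebVTT sidecar is plain text with time ranges and caption text; this two-cue sample is made up, not taken from any session:

```
WEBVTT

00:00:05.000 --> 00:00:08.000
The committee will come to order.

00:00:08.500 --> 00:00:12.000
We will begin with testimony on House Bill 1001.
```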
E: So compositions are really easy for us; you're going to have a much harder time doing it yourself. So before I go on to the next question that I've got to address, let me describe to you how you would do it if you had to do it yourself.
E: If your input resolution doesn't match the output resolution, there are certain rules it follows to try not to re-encode, but it's cumbersome. There's a tool called ffmpeg which allows you to demux, or demultiplex, so unwind all of the audio and video tracks separately, then cut them up; and you're going to have to script cutting out the captions with their zero time base. So, for example, if you have a one-hour file that started at two o'clock and you want to take off the first 15 minutes, you're going to have to do some time math.
E
Math
you'll
end
up
with
a
bunch
of
output
files
which
you'll
have
to
re
multiplex
again
together,
so
most
of
that
could
be
done
by
ffmpeg
a
bunch
of
scripting.
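For readers who do have to script this themselves, here is a minimal sketch of the ffmpeg-based workflow Sanjeev outlines, driven from Python. The paths and the 15-minute trim are made-up values, and it assumes the captions are stored as a mov_text track; exact cut points also depend on keyframe placement.

```python
import subprocess

SRC = "session.mp4"    # assumed archive with embedded mov_text captions
TRIM = "900"           # seconds to drop from the head (15 minutes, for the example)

# 1. Pull the caption track out as a sidecar SRT file.
subprocess.run(["ffmpeg", "-i", SRC, "-map", "0:s:0", "captions.srt"], check=True)

# 2. Trim the audio/video without re-encoding (cuts land near keyframes).
subprocess.run(["ffmpeg", "-ss", TRIM, "-i", SRC, "-c", "copy", "-sn", "trimmed.mp4"], check=True)

# 3. After shifting the SRT timestamps back by the trimmed amount (plain text editing,
#    as in the earlier SRT sketch), re-multiplex the captions into the trimmed file.
subprocess.run(["ffmpeg", "-i", "trimmed.mp4", "-i", "captions_shifted.srt",
                "-map", "0:v", "-map", "0:a", "-map", "1",
                "-c", "copy", "-c:s", "mov_text", "trimmed_captioned.mp4"], check=True)
```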
E: So if you have to do a very large volume of this kind of stuff, that's how you do it. It also gives you the opportunity, at that point, to look at the text file, the SRT file, which kind of answers the next part of this presentation: how do you edit it? You have a simple file, a text file, that's time-stamped.
You
could
do
the
edits
in
that.
If
you
had
to
using
this
kind
of
tool,
if
it's
a
very
infrequent
action,
you
could
also
use
Adobe,
Premiere,
Pro
and
possibly
Final
Cut
I'm,
not
100%
sure,
if
Final
Cut
does
it.
So
on
the
subject
of
editing
captions
after
the
fact
there's
two
aspects
of
that
one
is
the
technical
policy.
So
are
you
really
willing
to
invest
in
either
doing
it
manually,
which
is
a
whole
bunch
of
work
and
a
whole
bunch
of
time,
and
you
need
this
edit
station?
E: That's a pretty beefy video workstation, hopefully with some SSD storage on there, all that kind of stuff. So will you be willing to invest in that? And second, will you be willing to invest the time to babysit this slightly fragile workflow? If not, you might want to find an external video provider. There are people who produce videos and spend their entire lifetimes in front of Avid or Final Cut or whatever, and they understand the tool chains to produce this stuff.
E: From time to time, send them the file and say, hey, change these three words; they'll do it for you and send back the files. I can't recommend anyone, but there are many providers who can do that kind of thing. So that's the technical policy. Second is procedural: is it even allowable, in the way that you treat your archives, to change them after the fact? That's up to you; that's nothing that we can dictate. We're just an enabler of technology, so our tools enable both of these scenarios.
E: The technical policy with our platform for edits is really an operator action where our staff does it: the customer puts in a request, and it's executed within a business day, or less if urgent service is required, and it's simply executed on the back end. We have the tools and the tool chain to do this kind of stuff. As for the procedural policy of our customers, we see editing of captions very rarely done. The exception is really when the closed captions are used as the starting basis of verbatim transcripts. Usually what happens is the captions are extracted in real time, not from the streams but from the broadcast signal.
E: That's, in broadcast terms, Line 21, the blinking lines you used to see on analog TV, or what they call CEA-708 on digital formats, which is pure digital. So from the broadcast facilities there are de-embedders which pull out the closed captions, our tools slice them up into five-minute segments, and our customers have some scripting to fix the punctuation and the capitalization, to recognize speaker changes, subject changes, that kind of thing, create a Microsoft Word template document from it, and then they start their transcripts from that.
E: So that's the process for closed caption edits, and we very, very rarely see edits done and then put back into the archive file where there was a problem with the live captions; the exceptions are very, very rare, when something really bad happened, something really bad went out on the air. So, the policy on archived webcasts with transcripts: what we see generally is that in Canada and the Commonwealth it's very common that verbatim transcripts are produced along with the archives and, like I said, in the USA the rate is probably no more than 15%, and at the municipal level it's extremely rare.
A: Thank you. Our final speaker today is going to be Daniel Kerr. Daniel is the director of technology for Swagit Productions, LLC, a leading provider of hands-free streaming and broadcast solutions for municipalities. Daniel's innovative and inventive can-do attitude is typical of his native New Zealand, and he loves a challenge to prove it. Prior to joining Swagit, Daniel designed and built infrastructure for Video AXS, a dot-com provider of streaming video for TV and newspapers. Daniel's multidisciplinary career spans system and network administration, InfoSec and software development, with further roles in leading telecom, payment processing and SaaS companies. Daniel.
F: Just a little intro: Swagit has been in business since 2003, but I think I can safely say that in the last six months we've had more inquiries about closed captioning than we had in that entire 14 years. So I think it's a great topic to be speaking on, and I am happy for the opportunity to do so. Just an overview of the talk: we'll talk about why we caption and the state of the industry.
F: So, as we've touched on, it seems like the main reason for captioning is accessibility for those that are hard of hearing: roughly 36 million Americans, that's 10%, are deaf or hard of hearing. My wife is in that latter group and, in addition to causing arguments, it means I watch a lot of closed-caption television, so I hope I'm an authority on the subject. In addition to accessibility, though, there are ancillary benefits: it allows you to search the spoken word, as we've seen already.
F: It provides you with transcripts that we use for the minutes process or journaling, and it allows you to maybe set discussion alerts, email alerts or social media alerts, for people that are interested in particular discussion topics that have occurred or may be occurring currently. The other main reason that I think has been driving things, the increase over the last six months, is the changes in the law, or really the implementation and enforcement of laws.
F: So you buy one box, but it does everything you need, as long as it fits within your SD or HD arrangement. When it comes to web streaming there's a lot more to it, really, depending on the technology that you're using: each new piece of streaming technology, and they've evolved a long way, from RealMedia, Windows Media, QuickTime, to H.264 now and beyond.
F: Every time we think we kind of have a set standard, it tends to change on us. But each of those has a different method of actually getting captions into the video feed, and so you may have technology that doesn't support it, or used to, but now things have moved beyond it. In that way it's really an evolving landscape, and the reality is there's still work to be done in the market.
F: The standards were evolving but weren't quite set in stone yet, and so you end up with an encoder that you may have spent your five-year budget on that then doesn't necessarily support closed captioning in the manner that's most appropriate for complete accessibility these days. The irony is that with SD captioning it was a lot easier: the captions were carried in the video feed, the dancing white line you would maybe see at the top if your TV overscanned it.
F
So
you
know
even
intermediate
devices
didn't
support
captioning
as
long
as
you're
sorted
and
then
the
TV
on
the
other
end
did
you
were
fine,
but
with
HD
you
know
it's
moved
from
using
visible
captions
or
captures
in
the
video
frame
to
using
carried
in
different
ways
through
the
video
containers,
but
it's
a
lot
more
technically
advanced,
and
it
also
means
that
almost
every
step
along
the
way.
Your
equipment
needs
to
support
that
captioning.
F: So a lot of workarounds were developed. Those include using open captions, really just for the purpose that, as long as your source equipment and your closed caption encoder supported closed captions, they could burn them into the video, so the captions are always there and always carried with the video. Then your streaming encoders didn't need to support it, and onwards to end-user devices: you were confident that they would support it because it was right there in the video frame. It works well for that.
F: But the difficulty then is that you can't turn those captions off, and so they start impacting the wider accessibility of your video for people who maybe don't need the captions and don't want to see them. That's a real issue when you have a presentation or a PowerPoint like this one, and then there are captions overlaying key parts of the text.
F: Some other workarounds included using a little text box that was maybe separate from the video, but the problem is that it still takes up real estate on the webpage for people that may not need it. Maybe you can turn that off, but that doesn't work well on mobile devices: iPhones, by default, will want to take video fullscreen, and then you lose the webpage surrounding it, so you don't really see those captions anymore. So you're striving for accessibility but not able to achieve it on a lot of devices.
F: Similarly, different technologies had different closed captioning implementations and workarounds. Adobe Flash was a big one; it did really well. It was just an overlay that went over the video; you could have open or closed captions, and you could turn them off. But now Adobe Flash, which was the dominant video format for a very long time, is deprecated; Adobe themselves have announced they're going to finally drop support for it.
F: They were relatively late, not as late as, I guess, YouTube or thereabouts, but Facebook really is the standout there; it's surely one of the largest live video sharing platforms these days. Not that long ago they started supporting third-party encoders, so we have customers that we stream to YouTube, or to Facebook, sorry, but Facebook didn't support captioning until very recently.
F: It's a conflict for me that we have customers getting sued for not supporting closed captioning, and then YouTube, who have all the money and frankly make a lot of money from advertising revenue from this, and should have been able to support it more readily, didn't until very recently. It really comes down to, I think, a chicken-and-egg problem: customers were dependent on encoders to support closed captioning, and encoders were then dependent on streaming servers and CDNs that may not have supported closed captioning, or may have supported different varieties of closed captioning. The great thing is that now we have the technology.
F: But the difficulty is that it's very expensive when you start looking at the elements that are involved: a closed caption encoder/decoder that takes in a video feed, strips out the audio and makes it available, perhaps to an off-site captioning company, brings back in the captions that they provide in real time, and injects them back into the video stream. Then that video stream, video feed I should say, comes out of the closed caption encoder and is available for cable television, for CCTV within rooms, and also to go to streaming encoders.
F: But then it's up to the streaming encoders to pull those captions back out of the SDI or other video signal that they're taking in, and then re-embed them into the appropriate containers for whatever they're outputting to, whether that's still Adobe Flash, because that is necessary on some platforms, or HLS, which has a different caption format entirely. So the right equipment, we should say, to do this, the commercial-grade equipment, is expensive.
F
The
closed
caption
encoder
decoder
traditionally
required
a
phone
line
or
two
one
to
provide
audio
to
the
remote
transcriber
one
to
bring
the
captions
back
in
and
that
has
to
be
a
particular
type
of
phone
line.
It
couldn't
just
be
a
PBX
line
that
you
broke
off
of
your
digital
system.
It
generally
had
to
be
an
analogue
phone
line
that
was
delivered
by
the
phone
company
specifically
sometimes
possible,
sometimes
not
more
modern,
closed
caption
encoded,
decoders
use
the
Internet
they
use
IP.
F
The
really
great
ones
EEG
we've
touched
on
is
good
because
it's
very
firewall
friendly.
It's
smart
enough
to
establish
all
those
connections
out
down
to
the
world,
but
other
you
know
otherwise
quite
capable
closed,
caption,
encoded,
decoders
that
support
AP
need.
You
know,
holes
punched
in
your
network,
then
any
port
for
words
made
you're,
then
giving
this
third-party
access
to
a
resource
on
your
network,
and
then
you
know
five
fifteen
thousand
dollars
for
those
those
boxes,
then,
for
production
quality,
video
encoders!
F
You
know
this
is
maybe
a
handful
of
major
market
ones
that
now
all
finally
support.
You
know
the
most
modern
closed
captioning
standards,
but
the
price
of
those
is
fifteen
to
twenty
thousand
dollars,
typically
depending
on
which
one
and
that's
all
per
channel.
You
know
it's
one
thing:
you
know
if
you
just
have
one
channel
or
if
you're
just
doing
you
know
your
house
and
chamber
floors,
perhaps,
but
we
start
looking
at
deployments
where
you
have
10
or
12
committee
rooms.
F
On
top
of
that,
it's
a
much
bigger
number
and
then
you
know
the
equipment,
cost
sort
of
pales
in
comparison,
often
to
the
recurring
expenses
and
that's
just
what
it
takes
to
have
a
third
party.
You
know
whether
it's
someone
that's
on
site
with
you,
whether
it's
a
remote
captioning
company,
to
provide
that
service.
Typically,
we
see
rates
of
between
100
and
150
dollars
per
hour.
You
know
things
are
baked
into
that.
You
know.
If
you
have
long
meetings,
transcribers
have
to
hand
off
at
some
point
through
that
meeting.
F
You
know
they
have
to
go
to
the
bathroom
or
whatever,
because
caption
committee
has
to
make
sure
they're
available
to
do
that.
Typically,
they
absorb
those
costs.
But
the
fact
is,
it's
a
very
highly
specialized
job.
You
know
a
lot
of
training
goes
into
becoming
a
closed,
captioning,
transcriber
and
I
think
that's
reflected
in
in
the
hourly
rate
for
it
and
that's
you
know
the
total
run
time,
including
breaks
if
you're
not
able
to
schedule
them.
So
you
know
you
may
have
a
meeting,
that's
taking
a
break.
F
You
may
have
the
luxury
of
knowing
that
the
break
is
going
to
be
a
set
amount
of
time,
but
there's
no
guarantee
that's
going
to
be
the
case.
If
someone's
likely
to
come
back
early,
you
know
the
risk
is
that
you
may
miss
it
and
you'll
need
someone
there
covering
that.
Likewise,
if
they
come
back
late,
you
know
you've
got
to
transcriber,
that's
on
the
clock
potentially
and
doing
nothing
so
there's
some
challenges.
F: But when you're talking about having to cover a number of committee rooms that may all be meeting simultaneously, you're multiplying the numbers we touched on. In a lot of cases, municipal content and legislature meetings are very long form; I can't think of anything that might run longer, except maybe a paint-drying YouTube channel. And this content really is uniquely different: the breaks are of indeterminable length, and often you just don't know how long the break is going to be during the meeting, yet you need someone sitting there.
F
Tv
stations
by
comparison
typically
have
one
channel
each
and
they
do
three
to
four
hours
of
live
programming
per
day,
whereas
the
municipality,
you
know
might
have
you
know
six
or
eight
meetings
per
month,
and
then
legislature
has
you
know
that
code
a
often
for
the
extent
of
their
legislative
period.
So
instead
of
three
or
four
hours
a
day,
you're
talking,
you
know
thousands
of
hours
perhaps
per
year.
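Putting the two figures Daniel quotes together gives a rough sense of scale (purely illustrative arithmetic, not a quote from any provider): at 100 to 150 dollars per hour, a body that needs even 1,000 hours of live human captioning a year is looking at roughly 100,000 to 150,000 dollars annually in transcription fees alone, before any equipment.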
F: What's also quite unique in this market is the number of homegrown solutions that have evolved, perhaps based on budget requirements and technology refresh schedules, and all of those have dependencies. So you might have integrated your encoders with your scheduling system, so that you know when a meeting is going to start and the encoder is ready to go.
F
It
may
be
tied
in
to
you
sort
of
bill
tracking
system
and
it's
very
difficult
to
replace
those
pieces
of
equipment
piecemeal
and
really
there's
actually
a
lot
of
innovation
that
we
see
happening
in
the
legislative
market,
which
is
great,
but
in
an
area
like
yes,
it
can
hold
us
back
a
little
bit,
but
ultimately
that's
all
because
of
budget.
We
find
you
know
it's
doing
more
with
less
and
we
we
welcome
it
for
sure
so
moving
forward.
There
are
improvements
in
the
works.
F
Outside
of
that
you
know,
if
you
have
too
many
rooms
that
are
only
going
to
the
web,
it
reduces
the
amount
of
equipment
you
need
considerably.
You
have
the
option
of
not
having
a
closed
caption
encoded
decoder,
which
is
typically
more
targeted
towards
broadcast
video.
You
can
instead
inject
the
captions
actually
after
the
streaming
encoder
just
at
the
video
streaming
server
level.
So
you
have
the
web
only
captions
options
that
really
avoids
any
of
the
additional
capital
expenses,
and
a
lot
of
this
again
is
a
problem.
F
When
you
know,
maybe
you
have
fourteen
distinct
meeting
rooms
that
traditionally
have
had
their
own
encoders
in
them.
Realistically,
it's
better.
If
you
can
backhaul
all
of
those
rooms
to
one
location,
because
then
you
can
do
two
things,
you
can
consolidate
them
on
multi-channel
encoders,
so
you
certainly
save
on
the
per
channel
amount.
If
you
have,
you
know
four
or
more
channels
on
a
single
encoder
and
so
bringing
all
of
those
channels
from
the
remote
community
rooms
back
to
one
location
is
great
and
then
additionally
scheduling
wise.
F
You
know
if
you're,
not
using
all
of
those
meeting
rooms
simultaneously,
you
can
aggregate
many
different
channels
or
many
different
rooms
into
a
smaller
number
of
channels
which
ultimately
reduces
your
equipment
costs.
There
are
also
a
number
of
you
know:
cheaper
new
contribution,
encoders
on
the
market.
In
the
last
couple
of
years,
all
of
the
major
kind
of
video
manufacturers
have
their
own
cute
little
box,
video
encoder
they're,
not
necessarily
web-capable
directly.
F
You
can't
stream
out
to
the
world
immediately,
but
if
you
have
an
on-site
wowser
server,
they
can
ingest
those
feeds
and
then
restream
it
to
the
world,
and
you
actually
save
a
lot
and
it
evolves.
Maybe
some
of
the
physical
consolidation
you
might
have
had.
Otherwise
you
know
really
the
the
equipment
cost
I
think
pallet
embarrassin
to
the
ongoing
transcription
costs.
F
It's
been
around
for
a
long
time,
and
it
was
that
way
just
because
it's
hard
to
train
a
computer
system
to
understand
a
lot
of
different
speakers
that
needs
to
often
be
specialized
training
for
that
unique
speaker,
but
with
the
application
of
machine
learning
technology
over
the
last
couple
of
years.
Things
have
improved
in
that
area
considerably,
so
that
there's
now
commercial
solutions
that
can
do
automated
transcription
to
a
suitable
degree.
F
For
you
know
much
lower
cost,
typically,
an
upfront
cost,
often
maybe
not
even
a
recurring
or
a
much
lower
recurring
cost.
You
know
the
target
for
transcription
is
often
97%
accuracy.
That's
the
human
transcription
company
is
self-imposed
target
and
certainly
they
meet
that,
but
I
think
the
threshold
might
be
lower
if
you're
talking
about
meeting
rooms.
That
are
lesson
important.
You
know
it's
just
a
matter
of
you
know
making
sure
that
someone
gets
some
benefit
out
of
it.
F: If you can understand what's going on in the room, to me that's sufficient. Early attempts, like YouTube's automatic closed captioning, were unintelligible and only worth watching because they were hilarious, as recently as two years ago. I think with the introduction of machine learning technology it's now to the point that you can really get the gist of what's being discussed, and perhaps that's enough for an area where the alternative is no closed captioning.
F: Current solutions are now being widely deployed in the market by television stations, and really they make only occasional mistakes, and there's a lot that goes into that. The mistakes that are made are often on unknown or new words, so street names, place names, people's names; realistically, those are probably the more important terms when it comes to meetings. But the technology is vastly improved and continues to improve, with recognition roughly at 95%.
F: Further breakthroughs are being made daily on that; it's a big area of investment for all the big cloud computing companies as well as other independent researchers. This is, I think, one of the big applications for machine learning. Humans make mistakes as well.
F: This is an example of one from New Zealand, where there were some rubberneckers looking at the traffic, but I think the human transcriber was probably using a re-speaking system and it misunderstood "rubberneckers." So there are commercial solutions for TV stations; they're currently replacing human transcribers at a lot of news stations. Those are ideal circumstances: they have very close microphones, the presenters know how to speak into microphones, and those things make a difference.
F
Captioning
companies
that
are
involved
with
this
just
can't
keep
up
with
the
exploding
demand
over
the
last
couple
of
years
and
part
of
that's
driven
by
you
know:
increased
deployment
and
implementation
of
closed
captioning,
but
in
cases
where
you're
replacing
Samanya
already
captioning,
it's
just
a
cost-saving
and
so
I
think
you
know
they're
appropriate
and
we're
applying
these
technologies
in
different
solutions.
Already,
you
know,
live
captions
for
less
important
meetings,
less
important
meeting
rooms.
F
Often what we'll do is take the live transcript and then, within four to eight hours, go and clean it back up and make that the version that's used for on-demand subsequently. But it's still cheaper to have someone do that than have the live, real-time transcription. And then there's also just using it for, you know, alternative technologies, where you're just searching audio — you know, if it sounds like it, it's close enough — and so you can actually use it for, you know, finding points within video where the technology might not otherwise be able to.
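[Editor's note: a minimal sketch of the "searching audio via the transcript" idea, assuming caption cues have already been parsed into start times and text. The cue data here is hypothetical.]

```typescript
// Sketch: use a caption transcript as a crude audio-search index.

interface Cue {
  start: number; // seconds into the video
  text: string;
}

function findMentions(cues: Cue[], term: string): number[] {
  const needle = term.toLowerCase();
  return cues
    .filter((cue) => cue.text.toLowerCase().includes(needle))
    .map((cue) => cue.start);
}

// Example: jump a player to the first mention of "amendment".
const cues: Cue[] = [
  { start: 12.5, text: "The committee will come to order." },
  { start: 94.0, text: "Senator, please present your amendment." },
];
const hits = findMentions(cues, "amendment");
if (hits.length > 0) {
  console.log(`First mention at ${hits[0]} seconds`); // 94 seconds
}
```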
A
Okay, we're now going to move into the question-and-answer part of our session. If you have a question, please use the microphones on either side of the front up here. If you have a question, make sure you tell us your name, affiliation and the state you're from before you ask your question. Also, at this time we'll open it up to any of the panelists — if you have any additional questions you can also chime in in between for somebody else; something you heard today may have sparked an interest. Right, go ahead.
C
Thank you — thank you so much for the information. I'm not the techie type; I'm the person who's trying to get information. I'm Patricia Smith from Baton Rouge, Louisiana, and I'm at the state legislature. So just this past session I was able to pass a tax — I serve on our Louisiana Commission for the Deaf — and that tax takes the reduced amount we were paying on our residential lines and puts it on cell phones, to begin to look at how we can improve access for our deaf community to the legislature.
C
So we did add some money in. We're starting all brand-new — we have no captioning whatsoever; we're using sign interpreters for any meeting where the deaf will appear to testify. So we're starting brand new, and that money that we have in our budget will gain us some ideas on how we can get captioning, at least starting with our committee rooms. But I like what you said: start, and find out where you are and how you can improve your website.
C
Thank you very much, because there's only one other thing I want to say. We also passed a bill on terminology. Our deaf culture doesn't want to be called "hearing impaired," and that's what I saw in the description of this session. So it might be wise to look at the terminology now — it's no longer "hearing impaired"; they feel they're not impaired.
B
So from the Massachusetts standpoint, you know, we were just starting out as well. I think this has been a culmination for us in terms of the kinds of different technologies that we really didn't have any handle on. So it's great to see that, you know, there are so many options available to you if you are just starting out. You know, personally I'm very happy to speak with any person who you'd want us to reach out to from the legislative side, because we're still working through some of the kinks.
B
So, you know, we're definitely in this process. I think part of the things that we're working on, from the Massachusetts aspect, is we're trying to get enough of a process so that we understand what we need to do from a goals perspective, and then we can try to fine-tune that and find ways to either automate it or cut out some of the costs in that process, and find out, you know, whether there are other options and other solutions that are more turnkey. But that's something that I'd be happy to partner and help with.
A
You know, Paul, you mentioned it real quick — you said something about retroactively captioning old content. Bruce and Paul, how are you handling that? What's your kind of timeline on that? Something we're dealing with in Indiana is we've got video content back to 1998, and 2017 was really the first year we captioned everything, including all of our archive. How do we go back and deal with that, and do it in a cost-effective way?
D
Well, the logical way is to do it backwards: you start from the most recent and work your way to the oldest archives. And then a lot of it has to do with the automatic transcription that we can do now with machine learning, like the guys mentioned. There's a company called 3Play Media; they're really big on this subject. They can actually take all of your archives and work through it, and they have very reasonable rates. They actually start with — it's a two-stage process.
D
That's probably the way I would get started. Really, South Carolina is sort of in the nascent phases of this. We still do not provide any captions on our archives, which is something that we're really working on trying to provide. Last year we did install equipment in both the House and Senate chambers that allows us to capture our finalized captions as a transcript.
D
It saves it out in an SRT format, so that it's still usable with timestamps, and our goal is to start stitching that onto the archive files as they're added to our archives. So the first phase would be to add it to the ones you're doing now; the second phase is to start working backwards through your content, to add it onto everything that you've done before. Yeah.
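[Editor's note: a minimal sketch of one way that SRT-to-archive stitching could work for an HTML5 archive page — converting the finalized SRT to WebVTT and attaching it as a track. File names are hypothetical; the format changes shown are the standard SRT/WebVTT differences.]

```typescript
// Sketch: convert a finalized SRT caption file to WebVTT so it can be
// attached to an archived HTML5 video as a <track> element.

import { readFileSync, writeFileSync } from "fs";

function srtToVtt(srt: string): string {
  const body = srt
    .replace(/\r/g, "")
    // 00:01:02,345 --> 00:01:04,000  becomes  00:01:02.345 --> 00:01:04.000
    .replace(/(\d{2}:\d{2}:\d{2}),(\d{3})/g, "$1.$2");
  // WebVTT requires a header line before the cues.
  return "WEBVTT\n\n" + body.trim() + "\n";
}

const srt = readFileSync("session-2017-03-01.srt", "utf8"); // hypothetical file
writeFileSync("session-2017-03-01.vtt", srtToVtt(srt));

// The .vtt file can then be referenced from the archive page:
// <video controls src="session-2017-03-01.mp4">
//   <track kind="captions" src="session-2017-03-01.vtt" srclang="en" default>
// </video>
```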
B
And definitely to dovetail on what Bruce just mentioned: we had the added benefit of actually having the CART services that I talked about earlier. So in my presentation I mentioned that we had, you know, Communication Access Realtime Translation. They actually provide a transcript to us within 24 hours of the close of session.
B
It's not perfect — that CART transcription as a transcript is, to be honest, I'd say about 80, maybe 85 percent at best — but the benefit there is that we also partner, we actually partner, with 3Play Media, which is the company that Bruce mentioned, and it does significantly, significantly reduce the cost. If you were to start transcription from scratch and you just gave, let's say, a two-hour-long video to a service like 3Play Media, it would cost considerably more to fully transcribe that than if you gave them a file that at least gets you started.
B
So, for example, at 80 percent, like Bruce mentioned, they'll run it through the machine, which will try to get it as high as it can — probably about 85 or 90 percent — and then they'll have a human actually bring it up to 97 or 99 percent, and then a final review to actually audit that, you know, for the last version of the product to you. So we did have that added benefit.
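[Editor's note: those accuracy stages are easier to feel as error counts. A rough, hypothetical illustration, assuming floor debate runs around 150 spoken words per minute.]

```typescript
// Back-of-the-envelope: how many wrong words each accuracy level leaves
// in an hour of proceedings. The words-per-hour figure is an assumption.

const wordsPerHour = 150 * 60; // ~9,000 words in an hour of debate

for (const accuracy of [0.85, 0.9, 0.97, 0.99]) {
  const errors = Math.round(wordsPerHour * (1 - accuracy));
  console.log(`${(accuracy * 100).toFixed(0)}% accurate -> ~${errors} wrong words per hour`);
}
// 85% -> ~1350, 90% -> ~900, 97% -> ~270, 99% -> ~90
```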
A
B
No, that's a great question. I think the legislature — based on the sort of context of the whys that I gave — is very much on board with this. You know, the legislature, again, is looking at accessibility and transparency as two of the focal points for them because of the history, what we've been through back in 2007, you know, with potential litigation. They just realized that this is something they wanted to put focus on from a technology perspective, and yeah.
B
They were fully supportive of that. I think anybody in the legislature — at least, you know, in terms of Massachusetts — is really wanting to connect with their constituents, the constituency, and this is just another way that they can do that. And it's historical too, you know, if you do this kind of thing.
B
This is information that's going to be out there in perpetuity, and it allows you to — you know, like we talked about, I think Bruce and Sanjeev mentioned it, actually Daniel mentioned it — you know, we could use this to shortcut some of the end goals and products that we have to deliver in other parts of the legislature. So, you know, we could automate the journals that we have.
B
There were a lot of things that we tried to do, but we just stopped, just to get this sort of beta phase out — a lot of things where, you know, you could turn hyperlinks within the text not just to, you know, a portion of the video or a time slice of the video; you could have it pull out a context popup menu, or you could choose whether you want to go to a different kind of resource or search it on Google.
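[Editor's note: a minimal sketch of the "hyperlink the text to a time slice of the video" idea. Element IDs and cue data are hypothetical; the mechanism is just setting currentTime on the HTML5 video element.]

```typescript
// Sketch: render transcript text as links that jump the archived video
// to the matching moment.

interface Cue {
  start: number; // seconds
  text: string;
}

function renderLinkedTranscript(
  video: HTMLVideoElement,
  container: HTMLElement,
  cues: Cue[],
): void {
  for (const cue of cues) {
    const link = document.createElement("a");
    link.href = `#t=${Math.floor(cue.start)}`; // also usable as a shareable anchor
    link.textContent = cue.text + " ";
    link.addEventListener("click", (event) => {
      event.preventDefault();
      video.currentTime = cue.start; // seek to the cited moment
      video.play();
    });
    container.appendChild(link);
  }
}

// Usage (assumes these elements exist on a hypothetical archive page):
// renderLinkedTranscript(
//   document.querySelector("video")!,
//   document.getElementById("transcript")!,
//   cuesLoadedFromVtt,
// );
```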
A
D
Either one of you guys — a lot of our rooms are already equipped to be able to handle that. In the Senate, in general, all those rooms have inputs for computers to be able to feed Skype or other remote technologies into the room, which carries audio and video, so it's accessible as a source in our production environment, and then it's also accessible as an audio feed into the room itself, so that people trying to interact remotely can do that. It's a little bit more of a challenge in our House rooms.
D
We don't have built-in capability for that, but we do have the capability to wire it into our production workflow and then be able to work from there. Really, the problem is not so much technology; it's more policy — trying to get people to allow remote testimony, because then you have the problem of verifying who they are, where they're coming from, whether it's, you know, a plant — you know, there are all these crazy political shenanigans that come up from questions like that.
B
Yeah, in Massachusetts that's actually the one area where, you know, we're still not quite there yet. Our hearing rooms — you know, we cover all events. So I didn't mention this when I was talking, but we don't cover just formal sessions; we cover anything that is agreed to be streamed or televised, and that may include committee hearings and even special events. But we don't have any sort of remote testimony at this point.
E
G
I'm [name unclear] from California. So for those of you who are doing this and have it out there for the public: if you're getting 90, 97 percent — whatever the percentage accuracy rate you do — are you getting any pushback for, you know, how good it is or how not-good it is, especially with maybe places and names and multiple people talking? Is that coming back, or is it—
D
We haven't seen that, but again, we only provide our captions live; we don't have captions available yet on our archives. I'm sure if we had it on the archive people would catch it more — you really only have one chance to see it if it's wrong, in our situation. But we actually do have a lot of very unique names in the state, so it's sometimes very difficult for the remote transcriptionists to know the spellings of a lot of our state locations and other people there in the legislature. So it is a problem.
B
You know, one thing too is that you could make it sort of like a disclaimer — that, you know, these transcripts are not perfect. You know, even though you go through what at this current moment in time is the best available technology, and you have human eyes actually looking at this, you know it is not 100%. We will try to make that disclaimer.
B
This is so new for us that, you know, we just pretty much aligned that launch with this event, so we don't know yet. I'm sure we will get plenty of feedback, hopefully both positive and negative, on it. Part of the problem stems from what Daniel mentioned, which is, you know, we struggle with incredibly long sessions — and I'm sure all the panelists do, right? You know, at some points we're talking about eight-and-a-half-hour sessions that go throughout the night.
B
So surely, within that context, you're gonna find some issues with the transcript. It'll especially come to light if you search for something and it's just not there, or it's inaccurate, and we are kind of fearful of that. But if you take a step back and look at it as a whole, you know, there's an onus to improve this and make this better. As long as it's a work in progress, hopefully, you know, the folks who use the website and use this technology will understand that.
A
We did get some feedback — this year was the first year we provided all our archived content — so we did actually take a step back and start making our own edits. After we did it for a while, we figured out the common things that were happening, so we were able to fairly quickly get back in and get those things fixed, because people didn't like to see their name spelled incorrectly, and people didn't like — you know, certain words were just wrong.
A
Sometimes the context was a little off, but generally we didn't change anything substantive; I mean, it was just basic grammar, basic changes, just to go in and clean it back up. But we do have lots of disclaimers on our site — you'll see "no legislative intent," lots of phrases like that, that say, hey, you basically can't take this and put any intent on it. It is what it is; it's what we provide. So, thank you.
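[Editor's note: a minimal sketch of the kind of cleanup pass described above — once the common mistakes are known (member names, place names), apply them to the caption text before it is re-synced and archived. The correction list here is hypothetical.]

```typescript
// Sketch: dictionary-based corrections for recurring caption errors.

const corrections: Record<string, string> = {
  "senator smyth": "Senator Smith",   // hypothetical recurring misspelling
  "wabash county": "Wabash County",   // hypothetical capitalization fix
};

function applyCorrections(captionText: string): string {
  let fixed = captionText;
  for (const [wrong, right] of Object.entries(corrections)) {
    // Case-insensitive whole-phrase replacement.
    fixed = fixed.replace(new RegExp(wrong, "gi"), right);
  }
  return fixed;
}

console.log(applyCorrections("senator smyth moved the amendment in wabash county."));
// "Senator Smith moved the amendment in Wabash County."
```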
E
A
Our provider likes to tell us ninety-eight percent. Okay — we don't think it's that good. I don't know that we've got a number, but we know it's not that good, and we're striving to get it better. We just renegotiated a new contract with our vendor, just trying to make things a little bit better, having just gone through that this last session. But, you know, our target is to get as good as we possibly can; I think if we hit 98 percent, that would be pretty good.
D
A
We started with JW Player using Flash and ran into problems, especially with some of the other browsers. We just moved to HTML5, and we're seeing things going better — fewer issues. We're looking into MPEG-DASH as possibly something to move to in the future; a couple of our guys are looking into that. But it's definitely an ongoing issue for us, just like everything else on our website: trying to make everything compatible across every browser that we can, to fit everybody's need — and we do get the calls on that stuff.
A
D
I think that a lot of people have actually switched over to newer browsers. In South Carolina we don't see a lot of people still using Explorer, so that's not really the main issue. The issue that we run into is that, at least for live captions, if you're trying to support multiple platforms — especially mobile devices — you're better off supporting what they call — sorry, I'm forgetting the term — the HLS form of captions, rather than the RTMP or Flash form of captions.
D
So the problem you run into is that a lot of player applications don't support those same HLS captions, so we're actually still stuck with a Flash-based player. Unfortunately, it's the only one that we can find that actually still supports those native captions, so that we can caption in one format and have it be universal across the board, working everywhere — even though we have kind of an effort to get rid of Flash and move away from Flash, especially since Adobe has now deprecated it; they've abandoned it.
B
That will only take the Adobe HDS stream. The better players — the ones that are fully supported in the later browsers for HTML5 — seem to work very well, so we actually have an odd mix. You know, you'll see over time that we have some code that looks at what kind of browser type you're using, and we'll, you know, kind of interject the right player.
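[Editor's note: a minimal sketch of that "interject the right player" idea — feature-detect what the browser can do with an HLS stream, then fall back. The stream URL is hypothetical; Hls refers to the open-source hls.js library mentioned later in the discussion.]

```typescript
// Sketch: choose a playback path for an HLS stream with captions.

import Hls from "hls.js";

function attachStream(video: HTMLVideoElement, hlsUrl: string): void {
  if (video.canPlayType("application/vnd.apple.mpegurl")) {
    // Safari / iOS: native HLS playback, captions handled by the browser.
    video.src = hlsUrl;
  } else if (Hls.isSupported()) {
    // Most other modern browsers: hls.js demuxes the stream in JavaScript
    // and exposes embedded captions as text tracks.
    const hls = new Hls();
    hls.loadSource(hlsUrl);
    hls.attachMedia(video);
  } else {
    // Legacy browsers: hand off to whatever fallback player the site keeps
    // around (e.g. a Flash-based player).
    console.warn("No HLS support in this browser; using fallback player.");
  }
}

// attachStream(document.querySelector("video")!, "https://example.org/live/house.m3u8");
```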
E
The problem with MPEG-DASH is that it will probably never be supported on Apple devices, so you will always have to support HLS. So I think, looking forward, for us: we do support MPEG-DASH and we do support HLS. The beautiful thing that happened last year is the container format — so the file formats of the fragments — they got together and they're using the same file format. So the only difference — not talking about old browsers and old players, but going forward — the only difference is going to be the manifest.
E
So
the
text
file
that
specifies
the
fragments
and
not
the
actual
need
files
themselves.
So
there's
there's
a
light
at
the
end
of
the
tunnel.
There
is
that
mpeg-dash
and
hope
I'm,
not
speaking
a
foreign
language.
They
were
in
here
mpeg-dash
NH
LS
of
what
the
Apple
format
and
the
rest
of
the
world
file
formats
will
be
the
same.
E
The only thing that changes is a small file that shows how to deliver them, and both of them support the captions really nicely, right in your browser. So to answer your question: that light that you'll see somewhere around 2018 is in video.js — the pure JavaScript implementation of HLS decode has a closed caption decoder in it — and it's the same with other libraries. So I believe Dailymotion — it's a big website for video — put out an open-source player that also supports it; I think it's called hls.js.
A
F
The problem we saw: JW Player was the dominant video player for a very long time — still very widely used, it's fantastic. You know, all of the big commercial sites use it. They've got a very aggressive marketing and pricing model, so it's targeted towards, you know, big concerts, big movie studios. But they decided to stop supporting Internet Explorer 7, and IE 7 by itself is a relatively small part of the market — yes, maybe the low single-digit percentages.
F
And in some cases our customers are OK with that, but the difficulty became: if you have IE configured with your website as its homepage — as a lot of your internal users probably do — IE, in its infinite wisdom, even in version 11, decides, "Let's render this page as if we were IE 7," including breaking JW Player. So we have fallbacks to video.js specifically for that, but we're sort of making the move away from JW Player entirely, based on decisions like that and the pricing model around it. Yeah, we have a pretty good solution, we think.
F
We don't hear a lot of complaints. It certainly hits all of the major and minor browsers — anything you'd typically support — but of course it's difficult; getting back to it, "you need Flash" is what it comes down to, and so players that drop support for Flash are difficult as well, if you're also trying to get captions to, you know, IE on Windows 7. You know, IE on Windows 10 is maybe fine; Windows 7 is a different story, same version.