Description
Dave James, Head of Adult Social Care Policy, takes you through the slides for our CitizenLab project, CQC's future regulatory model: where we are and next steps.
Sign up and log-in to share your views https://cqc.citizenlab.co/en-GB/projects/cqc-s-future-regulatory-model-where-we-are-and-next-steps
Hi, I'm Dave James, I'm Head of Adult Social Care Policy at CQC, and I'm going to talk you through where we've got to in our thinking on our future regulatory model: where we've got to, and what the next steps are that we'll be taking. There are quite a few changes that we're looking to make so that we can implement our strategy, so that we can really respond to the feedback that we've had from people, and act on what we've learned over the last few years of running our model. This slide draws out some of the key differences.
At the top there's a description of how we currently run our regulation. We have a number of assessment frameworks: one for inspection of social care, one for inspection of health care, and then the same on the registration side. So we've got four frameworks, and they are quite detailed and quite complex, and we know from experience that there's a lot of duplication in there.
So the first thing that we want to draw out, and I'll describe this in a few moments, is how we can arrive at a single framework: a single set of expectations and a single definition of quality that works for everybody, is in plain English, and supports us to deliver a more structured and transparent regulatory model.
The other kind of change: at the moment we have our monitoring function, where we stay alongside risk and consider whether or not we need to inspect. In our future model we won't be having the monitoring function; we'll be moving into an ongoing assessment model where, as new information becomes available, we can consider what it means and update our judgements and our assessments accordingly.

Related to that, in our old model essentially everything we did was through inspection. The evidence that we gather would be collected at a single point in time, looking across our framework, either across all five key questions or in a focused way, but very much at a fixed point in time, and then we'd go away and consider what we found using our key lines of enquiry and looking at our ratings characteristics, of which there are 70 pages in our current model. So, going back to my point earlier about the complexity and the length and the density, we really want to address that: lining up our judgements against those wretched characteristics to come out with what is predominantly a narrative-based inspection report. We have some structured outputs.
We have our ratings at service and at key question level, of course, but below that there's a lot of narrative. So, stepping into our future approach: I mentioned the single assessment framework, so one articulation, much clearer, much shorter, reducing the duplication, an articulation of what good looks like. Our single assessment framework is pitched at the level of good. And then we have this model, which is an ongoing assessment of quality and risk, so we can respond to information much more readily.
That's both information about risk and information about improvement, and we don't need to feed it into a separate inspection process. We can consider whether or not that information is actionable, and we can update our judgements.
As I mentioned a few moments ago, inspection and site visits will still be an absolutely essential part of our model, but that won't always be the case for every type of evidence or for every service model. And indeed, as we step into making assessments of local authorities and integrated care systems, site visits won't necessarily be the core approach to gathering evidence.
The other big change, then, is that we will use a scoring system. Similar to our ratings, that will be on a four-point scale, and as we look at evidence we will consider whether it's meeting our articulation of good, whether it's exceeding it, whether it's falling slightly short (so it may not be sustainable or fully embedded), or whether it's falling well short and is perhaps more in line with inadequate care, where we might need to take enforcement action.
We're going to be applying those scores at the level of the evidence categories, which I'll go on to explain in a moment; there are six of those. They will then feed up to scores for each of the topics under our five key questions, which in turn will drive the rating. Another area of improvement that we're keen to make is the time it takes to publish our reports.
We know that can sometimes take too long, and we need to be much more efficient in getting those reports out, so we're going to be streamlining that process. The single assessment framework, the scoring mechanism, and moving away from a single point-in-time assessment to more iterative judgements, building up a picture over time, is how we think we can do that.
So this is an overview of our single assessment framework and, as it says here, this is what we'll be using to make assessments of providers across health and social care, and to make assessments of local authorities and integrated care systems. There's a consistent set of themes, and we're applying this from registration through to making those scores and those ratings.
I should say that we won't be applying the whole framework to local authorities and integrated care systems; we'll be looking at what the core subset is that makes most sense to look at at those levels. Just stepping into the structure, then: we're still retaining the five key questions of safe, effective, caring, responsive and well-led. But what we've done, and what is really at the heart of our new approach, is to build the framework around quality statements.
The quality statements have been co-produced with the public by an organization called Think Local Act Personal, which developed a framework called Making it Real a few years ago. That framework really does articulate what good person-centred care looks like, whether that's through regulated provision or through other means, and it also includes statements that articulate the commitments a provider, a commissioner or a system leader needs to make to deliver on that standard of good. Those commitments are expressed as "we" statements. So we've lifted some of the "I" statements from the Think Local Act Personal framework, and the actual standards that we'll be making assessments against are the quality statements, expressed as "we" statements, and, as I say, they are pitched at the level of good.
In our current frameworks we've got around 335 key lines of enquiry and prompts, which is an inordinate number, dizzying really. What we've realized in looking at those is that they are essentially six questions that we ask over and over again, and that is what we have used to formulate our evidence categories. So, of those six: what's people's experience? What feedback have we had from staff and leaders? What feedback have we had from our partners in the system? What have we observed?
What we found is that if we start with the "we" statements and then move into asking those six questions, we're able to have a much more structured, much clearer and shorter framework, as opposed to starting from that point of key lines of enquiry, where we've got to front-load all of the detail.
So at that bottom level, that's where we need to map out the specific evidence requirements. Another core part of our new approach is that we'll be really clear on what the required evidence is, so that the entities we're assessing, whether that's a system or a provider, the people who work there, and our staff are all really clear on how much evidence is enough and what needs to be covered. We're going to be making that as focused and as tight as we can.
So here are just a couple of examples of an "I" statement from that Making it Real framework that I mentioned.
What good person-centred care feels like: "When I move between services, settings or areas, there's a plan for what happens next and who will do what, and all the practical arrangements are in place." And then the commitment that the provider, the commissioner and the system would make, which is what we make an assessment of, is this "we" statement: "We work in partnership with others to establish and maintain safe systems of care, in which people's safety is managed, monitored and assured, especially when they move between different services."
So I've talked about our approach and the emphasis on structure, and how we think that structure, as well as a scoring approach, lends itself to more consistent and transparent practice.
In a moment I'm going to go on to talk about how we will set out clear frequencies and a plan for assessment, and how we can be flexible in the best method for gathering evidence, which of course includes inspection. All of this allows us to have a much more up-to-date view of quality, and to maintain our judgements and keep them live much more than we can at the moment, where we're taking that single point-in-time approach of an inspection across the piece that I described earlier.

This table is just for illustration, so I would urge anybody watching this not to put too much weight on it, but it brings to life what a planned evidence collection timetable might look like. The evidence categories along the top are really crucial in allowing us to chunk up our activity and to step away from thinking that we go on site and look at everything.
How often do we need to understand people's experience? That will again vary with the needs and the characteristics of the people using the service, with what we know about that service, and with what we know about that area. Feedback from staff and from leaders might again vary. So this is the framework that we're looking to apply, to allow us to step away from that one-size-fits-all approach of inspection and actually consider: how often do we need to observe?
How often do we need to understand people's experience, and how quickly do those things go out of date, so that we need to go back and top up our assessments? So this is about our planned activity as the regulator, how often we need to initiate some activity to gather evidence, but it's also very much about how often things change. How often are external data sets updated, for instance, and how frequently can we expect those to be available to us?
How often will providers update policies and procedures and strategies? All of those things need to be considered, as well as risk and an ongoing picture of priorities and themes, to allow us to set clear timetables that will be transparent for providers, for local authorities and for systems.
So this slide just breaks that process down a little and describes the layers that we apply to arrive at something like the timetable I've just shown you. As it says on the slide here, we're driven by national and local priorities and by risk analysis, and they inform our planned work and our responsive work in terms of evidence collection. This is, we believe, going to be a much more dynamic and flexible approach than our current model.
So the starting point really is a national understanding: a national set of profiles that articulate key priorities, risks and issues for the different service models and for the different layers at which we make assessments. That gets us almost to a base camp of, okay, what broadly do we think that means for the evidence we need to be looking at, and how frequently do we need to be looking at those things? And then we overlay that with a greater degree of context and detail.
So, thinking about the context of the service: the particular risks that we might know about from regulating that service, or perhaps what's happening in that area, what is the picture that's building up there? Remember that CQC's future model will be based on multidisciplinary teams, where we're much better able to look at care cutting across systems to understand a broader picture, and less about individual services and less about individual sectors. So that modifies the initial timetable and updates it, giving us the best indication of what evidence we need.
We're really keen to continue to work with our partners and our stakeholders to develop the model further and build down to those lower levels of the pyramid that I talked through earlier. We'll be starting to test this as well, internally and externally with providers, and there'll be lots of opportunities to share your thoughts and to help us with that next level of detail over the coming weeks and months.
What concerns might you have? What questions might you have, or gaps in your understanding, based on what I've talked about today? It would be really helpful for you to be thinking about what the absolutely key pieces of evidence are for your sector, for your service model, for your context. What do you really think we need to be focusing on, and what is perhaps less important, that we should not be looking at at all or should look at less frequently? And how frequently should we be reviewing and updating evidence, and reviewing and updating ratings?
An opportunity we've got here is that we can make much more iterative judgements, but a danger is that we end up chasing our tails and stepping on the toes of providers, local authorities and systems just because we have information available and we're able to update it. So think about what the rhythm is that we need to get into in terms of the different evidence categories, and how frequently it might make sense for us to review what we know, update our assessments, and publish what we know on our website.