From YouTube: OpenActive W3C Community Group Meeting / 2020-07-15
The Dataset Site Specification
Call notes and agenda at: https://w3c.openactive.io/meetings/2020-07-15-dataset-site-specification
A: So hello, welcome all to the W3C call for the 15th of July.
A: The topic for this call is going to be the Dataset Site Specification, so this is looking more at the harvesting, or data-consuming, end of the spectrum: less about what gets published and how, and more about how you find it and how you consume it, which we've tended to focus on less. Reflecting that, I think, I'm just going to make a preemptive apology: the Dataset Site Specification is a little bit shambolic.
A: I flung it together over the last couple of days, based very much on existing practice for our dataset sites and for our data catalogues.
A: So, as I said, the Dataset Site Specification is a specification for how dataset sites are supposed to be structured; that is to say, the kind of splash page that links you to the data feeds themselves.
A: So it's been kind of a de facto standard, simply because there's that one code library which is generating everything, and that's how everything's worked so far; nobody has needed the standard to be explicitly defined. And indeed the specification, as it currently just barely exists, is really just a codification of what that library does right now, along with some pointers towards future functionality.
A: The overall picture of how this works, or what the JSON structure is: it's building on DCAT, on schema.org, and on Google dataset discovery. So it's intended to be compatible with all of those vocabularies, and we're very much helped by the fact that DCAT version 2 has published a mapping from DCAT to schema.org. So essentially, what we're doing is publishing everything in a schema.org structure, which allows the dataset to be discovered and described unambiguously.
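As a concrete illustration of the kind of structure being described (a sketch only: the property values, URLs and feed names here are invented, and the actual required properties are exactly what the specification defines), a dataset site embeds schema.org Dataset JSON-LD, which the DCAT 2 mapping makes readable by DCAT consumers too:

```python
import json

# A minimal sketch of the JSON-LD a dataset site might embed.
# The property names are standard schema.org terms; the specific
# values are invented for illustration.
dataset = {
    "@context": "https://schema.org/",
    "@type": "Dataset",
    "name": "Example Sessions Dataset",
    "description": "Sessions published as OpenActive open data.",
    "license": "https://creativecommons.org/licenses/by/4.0/",
    "publisher": {
        "@type": "Organization",
        "name": "Example Leisure Ltd",
    },
    # Each data feed is exposed as a DataDownload distribution.
    "distribution": [
        {
            "@type": "DataDownload",
            "name": "SessionSeries",
            "encodingFormat": "application/vnd.openactive.rpde+json; version=1",
            "contentUrl": "https://example.org/feeds/session-series",
        }
    ],
}

print(json.dumps(dataset, indent=2))
```

Because the structure stays within schema.org vocabulary, the same block serves Google dataset discovery and, via the DCAT 2 mapping, DCAT-based catalogues.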
A: So I will take you to the Dataset API Discovery 0.1 document. Please note that version number: this is extremely, extremely rough. This is building on a base that Nick wrote, which was, I think it's fair to say, boilerplate and some headers.
A: It's also worth noting that, in addition to the somewhat unfinished, or rather sparse, state of the document, some of the work, in particular for the markup-related Booking API implementation, is using the schema.org WebAPI type. This has not been finalized; it is pending integration into schema.org.
A: The discussion about WebAPI may move on in the future. That said, they're aiming for a release date of schema.org version 10, including a finalized version of WebAPI, in late August, so that window is going to close fairly soon. So everything that's defined in the document right now is a little bit wobbly, depending on schema.org discussions, but it looks like it is likely to go ahead in something very close to its current form.
A: So, issues with the Dataset Site Specification. The first, as I've noted, is just the rough state of the document. It needs a lot of tidying; I'm sure there are typos in there.
A: It could certainly do with a lot of preamble text, because right now there's really just tables of attributes and possible values, so preamble text, guidance and all of those things need to be supplied more fully. If you review the document and you see any of those, please just open a GitHub issue about it. I'll be tidying as I go along over the next couple of weeks, but, as the saying goes: to a sufficient number of eyes, all bugs are shallow.
A: So, yes, please do give any advice you might have, or raise any questions that occur to you, in the GitHub space. But I suppose I'd open up with a more general question, which is, in a way:
A: What is this standard actually aiming to achieve? This is really a scope question. If you look at the end of section 1.1, you've got a little note at the bottom that says: note that, although this is a specification of the OpenActive Community Group, it is designed to apply to any open dataset where an API is available to manipulate it. So that's actually quite a wide scope: that's saying any kind of dataset that is specified and is available over an API.
A: This Dataset Site Specification will stretch to fit, and I wonder how necessary that really is, simply because what we're describing here, like a lot of OpenActive standards, is a kind of specialization of schema.org.
A: But unlike, say, the opportunity specification or the Open Booking API, what we're doing here is a lot closer to the core of what schema.org is trying to do. You know, schema.org constructs like Dataset or DataCatalog or distribution cover the case of dataset publication.
C: It might be helpful to give some context here. The reason, I guess, the original draft included that statement was because the APIs, sorry, the specifications, of OpenActive generally are quite modular by design.
C: So the idea is, you could use RPDE, if you wanted to, with any data structure; it just so happens that the modelling spec is the one that we're using in OpenActive. They're entirely separate, and the modularization is helpful, because it means that the standards themselves can be quite focused on what they're doing, and not so much on what other things are doing with the specification.
C: The idea was that, exactly like the other specs, if this is a more general spec, then it forces us to focus on making sure it only does this and not other things. And I think the discussion probably goes both ways around. I feel like making it more generally applicable is helpful, because there's a lack of anything else doing this, apart from the work of the Web API Community Group, which is kind of doing this in parallel.
C: And that doesn't actually describe what this describes. This describes a dataset and an API being described together, in a way that is compliant with DCAT and with schema.org, and that is generally well tested. That's quite a useful package, something that people might start to use outside of what we're doing. And I guess the idea is, if you make anything that's more generally applicable, other people start picking it up and using it.
C: Obviously sustainability increases, because you've got more eyes involved and more people engaged. The flip side of that is around conformance.
C: So we've got this idea of conformance testing, and something about how we describe what features of, for example, the Open Booking API are implemented by a particular implementer, and that's quite specific to us rather than more general. And I actually have an issue, which I said to Tim I would submit, about representing feature profiles of specifications, so I'll put that in and link it to this.
C: But there's a good question there, because basically we're not even looking at that issue, are we? Is it in scope of this to talk about feature profiles of what's been implemented? And if so, then that's a good argument for making this slightly more specific to our use cases.
C: Unless we come up with a very general way of describing feature profiles, which also could be useful. But if not, I guess I'd argue there's no harm in having it more generally applicable, because there are lots of advantages to that, and maybe we could do conformance in a way that's generally applicable, which, if we think about that in the design, will make it more useful. I mean, if we come up with another API, that's, you know...
C: Yeah, that's right. I will quickly format the thing I was going to submit to GitHub, so you can see it. But yeah, that's exactly it: these are the list of features which, if you're publishing this dataset site, you're almost claiming, you're asserting, to the world.
A: I think I have views on that, but I'd be interested to hear what Tom and Nathan might feel about this, as people more on the publishing end a lot of the time.
B: Yeah, I think, as a consumer, I'm much more likely to use it as a way to find documentation, and a way to find what type of flow, like booking flow, they support or, more importantly, what kind they require. So if it requires an approval flow, then that information is going to be very helpful. Though I guess that's not necessarily everyone's case: we'll normally have pre-existing agreements with these people before we start consuming the feeds, because, of course, we're primarily focused around booking rather than just availability.
C: Sure, and I guess even with booking, that's a good point: when you're looking at the different feeds you might choose to integrate with, actually knowing what flows and features they've implemented is probably going to be a deciding factor on which you integrate with next, or whether you even do.
A: Okay, so that's a useful filter for you, if you had that kind of information available. Right.
E: One moment, sorry, I was kind of half sidetracked doing some other work. From my perspective, I missed the actual point, but I understand that it's kind of around what information we need to make a decision on whether we would integrate with a specific feed. Is that right?
A: Yeah, I guess it's about the specificity of how the data feed is described and then, for booking, the specificity of how booking is described. So, whether or not you want to be able to pull information about which flows are implemented by a particular feed.
A: So it lowers the barrier from a consumer point of view, if you've got that information front and center. Right.
A: And I think that's sort of why I got some alarm bells about that notion of it being a wider or more generic specification. I feel like, if the point is to describe a dataset and an API for manipulating that dataset together, that's a very, very wide use case with a lot of specifics.
A: So I take your point, Nick, about RPDE, where, you know, it's almost a wrapper format; it's very lightweight and very generally applicable, so that makes it useful, conceivably, across a lot of use cases.
A: I don't know if that's true here. Obviously there's a whole range of APIs, ranging from the very generic, RPDE-type kind, to the very, very particular, for manipulating very particular kinds of data. So I get a little bit worried about trying to make a claim of generic applicability, because then I think: well, you know what?
A: Genomics data: are we making a claim that this is going to be a useful standard for that use case? It starts getting a little bit shaky to me about where you draw the line between a generically applicable kind of frame and the more particular details that you implement.
C: So would it be a good halfway, to try and get a bit of the benefits of both, to say words to the effect of: although this specification is targeted at the OpenActive use cases, it is designed with more general applicability in mind? As against something that says we are not doing this. It's not so specific that we're hemming ourselves into a corner with it, because it's, you know, just us; but equally we're not going to, as you say...
C: Realistically, I mean practically anyway, we're not going to go to all the genomics communities; we're not going to get enough use cases from a broad enough community to get consensus on this as a general standard, and nor have we got the appetite to get that consensus from a general point of view. And obviously we're building on WebAPI and some much looser specifications; I think it's fair to say DCAT and WebAPI are very loose. And so I think...
C: I still think it would be useful for someone to know that this is the stuff that Google is happy to consume. And I guess I'm kind of wondering: I know obviously Google's got lots of plans, but if there was a nice document that Google could point to, would they come to implement this themselves? Because obviously they have to implement WebAPI within their interface.
C: At some point. And if we've surfaced this in the schema.org community as, you know, a nice document that explains how DCAT and WebAPI work together, and this is something that fits with all of the specs that they've got as well, then there might be a chance they'll actually use it themselves, to think about this, or maybe even take ideas from it.
C: So is there a way that we can phrase it and frame it and describe it, and not spend three pages of preamble (not that we would) talking about OpenActive requirements, you know what I mean, and going off into detail about why we're doing this to get more people active?
C: We could just make this very clear and pithy: we're doing this to make dataset sites discoverable, with reference to a use case of OpenActive kind of over there, but not as the center and frame of this document. So if someone like Google read it later, they would hopefully be quite quickly into the detail of: oh wow, you guys have really thought in detail about, and solved, the problem we're currently thinking about.
A: I feel like the downside of that latter approach is that it just adds one more layer of confusion for developers trying to get their heads around it. It's tricky, because if we really wanted it to be a generic document, or as generic as possible, you wouldn't have pointers pointing the other way, right? You wouldn't say: for OpenActive, see all of this stuff.
C: So I suppose there's an argument that, if we had it general, it wouldn't actually change the way people approach it; it would just be that current implementations are backed up by something quite concrete and well thought through. But the other way around of looking at it is: we can't really call it conformance to a specification if the detailed conformance information is in guidance.
C: So, for example, if we're mandating a certain set of values, those really should be in the spec, so that we can talk about conformance. So maybe it doesn't go as far as all guidance going in the document, because obviously that's kind of what some of the developer docs are for.
A: Yeah, that's a very handy principle: anything the validator validates. Yes, indeed.
A: Yeah, I think that point, yeah, it points towards a situation where this is specifically about OpenActive. I mean, perhaps the preamble states, you know, this is a useful pattern, and there could even be more specific guidance in particular sections saying, you know, this is relevant only to OpenActive, et cetera. But yeah, I think that's a good point.
A: We do need there to be one source of truth about what you're doing when you create a dataset site, and that should ultimately be this document, with the idea that other guidance flexes and changes.
C: Yeah, and to that end, actually, we probably then want to do things like we do with the modelling spec version 2, where we've started to specify arrays and not-arrays on things, and get a bit more specific than WebAPI, sorry, than schema.org, even is. Having that level of specificity allows someone who's parsing this to have reasonable confidence that the structure is not going to be as crazily loose as schema.org sometimes is.
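That looseness is concrete for consumers: in generic schema.org data, the same property may legally appear as a single value or as an array, so parsers normalize defensively. A small sketch (property name chosen for illustration) of the workaround that pinning cardinality in the spec makes unnecessary:

```python
# Generic schema.org data may give you a property as either a single
# object or a list; a spec that pins cardinality (array vs not-array)
# removes the need for this kind of defensive normalization.
def as_list(value):
    """Normalize a schema.org-style value to a list."""
    if value is None:
        return []
    if isinstance(value, list):
        return value
    return [value]

# Both of these shapes are valid generic schema.org.
loose = {"distribution": {"@type": "DataDownload"}}
strict = {"distribution": [{"@type": "DataDownload"}]}

# With normalization, both shapes read the same.
assert as_list(loose["distribution"]) == as_list(strict["distribution"])
```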
A: Yeah, that's another good point. Yeah, schema.org is nice, but it's wild and woolly. So...
A: I've got a more specific question; sorry, this is mostly a Nick question. I noticed that in the dataset sites (so, as I said, this is really kind of a transposition or transcription of what's on the dataset sites right now, which is to some extent an artifact of the tooling we've got) there's a reference to the Open Graph and Open Data Rights Statement vocabularies in the existing dataset sites.
C: Oh right, okay! So there's a bunch of SEO requirements that, basically... oh.
A
I'm
sorry,
I
didn't
realize
that
okay
yeah,
they
were
not
doing
that
in
this
back.
A
Okay,
well,
let's,
let's
move
on
from
that,
then,
if
that's
got
a
clear
answer,
fantastic!
The
next
point,
I
suppose
is
guidance,
and
I
guess
all
I'm
doing
right
now
is
flagging
up
how
useful
it
would
be
to
get
feedback
on
the
required
recommended
optional
properties.
A
As
I
said,
the
specifications
exists
right
now
just
reflects
what
we've
got
being
output
by
the
libraries
in
place
for
data
set
site
generation,
so
the
guidance
implicitly
is
that
everything
is
required.
This
is,
to
my
mind,
probably
too
strict.
C: I was going to say, because most fields at the moment can be filled out for OpenActive with pretty default-ish values (there's not much that you couldn't), and we probably want to do things like make the discussion URL recommended, because we want to mandate that people have a place where they can raise issues with the dataset. And maybe we don't want to mandate that that's GitHub; maybe that's a recommendation. But, you know, the stuff that's best practice: kind of build that in.
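A sketch of how a required/recommended split might be checked against a dataset site's JSON-LD. The property tiers here are invented placeholders: the real MUST/SHOULD sets are exactly what the spec discussion is deciding.

```python
# Hypothetical property tiers for a dataset site's JSON-LD; the real
# required/recommended split is what the spec discussion is about,
# so treat these lists as placeholders.
REQUIRED = ["name", "description", "license", "publisher"]
RECOMMENDED = ["discussionUrl", "documentation"]

def check_dataset_site(doc):
    """Return (errors, warnings) for missing properties."""
    errors = [p for p in REQUIRED if p not in doc]
    warnings = [p for p in RECOMMENDED if p not in doc]
    return errors, warnings

doc = {
    "@type": "Dataset",
    "name": "Example Sessions Dataset",
    "description": "Sessions published as open data.",
    "license": "https://creativecommons.org/licenses/by/4.0/",
    "publisher": {"@type": "Organization", "name": "Example Leisure Ltd"},
}
errors, warnings = check_dataset_site(doc)
print("missing required:", errors)       # → []
print("missing recommended:", warnings)  # → ['discussionUrl', 'documentation']
```

The point of the recommended tier is what the call describes: surfacing best practice (like a place to raise issues) without making every field a hard failure.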
A
Well,
yeah,
absolutely
I
mean
yeah,
I
think
I
think
implicit
in
what
I
was
saying
was
that
yes,
we
do
want
best
practice
and
we
also
want
wanted
to
be
realistic
for
people
to
achieve.
A
Let's
just
hear
as
many
voices
as
we
as
we
can
about
that,
because
it's
it's
just
not
clear
to
me
where
the
where
that
balance
lies-
and
I
guess
I
suppose
it
opens
up
a
wider
question
of
how
far
this
can
or
should
be
divorced
from
the
tooling
that
currently
exists
anyway,
from
a
sustainability
point
of
view.
My
feeling
is,
this
does
have
to
exist
in
a
more
or
less
self-standing
way
that
somebody
who
doesn't.
C: Yeah, that would certainly be my feeling on it. As you say, from a sustainability standpoint it's good to make sure that this is self-standing, even if you're using, you know, whatever language that isn't supported.
A
Right
yeah,
that's
I
guess
that's
the
primary
use
case
for
this.
Yes,
if
you're,
if
you're,
writing
everything
in
eiffel
or
something
yeah
right,
you'll
be
able
to
re-implement,
although
even
you
know
the
template,
is
there
the
template's?
Really
the
end
point?
Isn't
it
that's
true,
you're
right,
yeah,
that's
true
with.
C: Yeah, and especially as we want to really push people to make their stuff discoverable (I guess that's part of the point of open data), this stuff isn't costly. It's just an extra field; filling in the field that says publisher with the name of your organization, for example, is a one-off task. All of this is a one-off task in most cases, or a one-off configuration task for customers of bigger systems.
A
Yeah,
although
it's
interesting
in
our
open
data,
how
often
that's
missing
but
yeah
yeah,
that's
true,
but
but
making.
C: Making it recommended or required, I suppose, pushes it. It's interesting, because if we look at all the dataset sites that exist right now, I think we've got 100% coverage across these properties. I think everyone's got documentation; everyone's got a discussion URL. That's really been a push, I think, partly because of the ODI guidance, or recommendation, or mandate (I don't know which): this all came from the ODI's guidance originally, years ago.
C
I
guess
and
part
of
that
was
yes
having
a
way
of
discussing
what
the
data
looks
like
is
really
important,
so
I
guess
that's
kind
of
what's
left
out
to
be
there.
A
Okay,
so
I
guess
we
can
be
fairly
stringent
there.
So
does
anyone
have
anyone
else
have
anything
to
add
on
that?
On
that
point,.
A
Okay,
possibly
a
bit
abstruse
and
and
into
down
into
open
active,
tooling
questions.
So
my
next
question-
and
this
is
really
about
what's
useful-
to
to
data
consumers
but
also
to
some
extent,
what's
helpful
for
data
publishers.
A: What does documentation for good human-readable standards look like? It would be possible, on one end of the spectrum, to be extremely vague about this, and to say: you should have the links to your data, and you should give information about the publisher in a human-readable way; that's it. Or even: people reading your site should be able to conclude the following from the text description that you give.
A
I
started
going
down
the
ladder
path
when
I
was
creating
the
documents
the
other
day,
with
a
kind
of
you
know,
css
selector
and
here's
the
data
that
that
selector
should
point
to
I'm
not
sure
that
that's
the
best
way
of
going
about
it,
because,
obviously,
if
you're
a
publisher
that
imposes
a
very
particular
structure
on
your
page,
which
might
be
good
from
the
point
of
view
of
parcelability.
B: I think, insofar as you're defining data, going down the CSS route can be a bit too much. Like you mentioned, on a publisher site it could be terribly complicated to try to sort that out with your existing CSS rules, right? And I feel like a tag-based approach might work a bit better, using the extensions to HTML5.
C: Yes, it's probably worth saying, actually, that I must admit I misread that part of the draft and what those were doing, but the heading is actually very clear, having read it again. I just looked at the CSS selector and thought: oh, that's interesting, in terms of the HTML annotations that DCAT recommends.
C
I
thought
that's
what
that
was
doing
because
d-cap,
actually
it's
rdf
a
or
something
I
think
or
micro
formats
or
there's
a
name
for
it
in
schema,
which
is
basically
when
you
bake
it.
You
don't
use
the
json
format.
You
bake
into
the
html
itself,
all
the
different
metadata
properties.
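For illustration, here is the same kind of dataset metadata baked into the page markup rather than carried in a JSON-LD script block, using schema.org microdata attributes. The HTML fragment and the deliberately naive itemprop scan are illustrative only; real consumers use a proper microdata or RDFa parser.

```python
import re

# The same dataset metadata expressed as schema.org microdata baked
# into the HTML itself, instead of (or alongside) a JSON-LD block.
# The fragment is invented for illustration.
html = """
<div itemscope itemtype="https://schema.org/Dataset">
  <h1 itemprop="name">Example Sessions Dataset</h1>
  <p itemprop="description">Sessions published as open data.</p>
  <a itemprop="license"
     href="https://creativecommons.org/licenses/by/4.0/">Licence</a>
</div>
"""

# A naive scan for itemprop names, just to show the metadata is
# recoverable from the markup alone.
props = re.findall(r'itemprop="([^"]+)"', html)
print(props)  # → ['name', 'description', 'license']
```

This is why a template emitting both forms serves older DCAT-era harvesters (which read the markup) and newer JSON-LD consumers at the same time.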
C: So the current dataset site template does both: it has metadata properties baked into the HTML, to conform with DCAT and with some of the older DCAT parsers that are around (some of the names escape me now). But there's a bunch of open data directories that parse DCAT, and the template would do all of that for you, and so that would mean things like data.gov could read it and pull in the relevant metadata as easily as Google, because it's kind of supporting both.
C: This is before DCAT 2, bear in mind, and before JSON-LD really took off as the preference for schema.org, because if you look at the schema.org docs, originally it was all very much the other formats first across the examples. So those are the names of those formats: microdata, RDFa, and JSON-LD.
C: So, whether we want to... I mean, I think it would be good, maybe as a SHOULD, if it's not a MUST, to recommend that they include a profile of whichever RDFa or microdata properties are most widely recognized by the open data community. I agree with Nathan: I don't think CSS selectors are the way to do that, because that's really quite restrictive, and much more restrictive than the RDFa requirements.
C: So I guess, to separate things out: there's RDFa and microdata on the one side, which I'm suggesting potentially should be a SHOULD, if not a MUST, about making sure that we've got maximum coverage, because the point of this page is discovery, so we might as well make sure it has all the bits for those systems. And then the other side of that is human-readable, and my suggestion there is just to make it maybe not super vague, but something as simple as ensuring that all data described in machine-readable form is available in a human-readable form.
C: On the same page, and that it's clear and well structured (and, I don't know, maybe you want some accessibility things in there too), but, you know, at a high level: these are the things you should do. The template obviously does that for you, but it's just to make sure. Because if you don't have a human-readable aspect to this, then what you could end up doing is publishing one of those really vague pages where everything's in the JSON-LD.
C: And if you look at such a page, it's just the title and the description; you have to right-click and view source to find out what's supposed to be going on. But we obviously don't want everyone in OpenActive to have to go through that user experience. So if the machine-readable matches the human-readable in terms of access to content, and it's clear, maybe that's enough, without going into the specifics of what HTML structure is used to achieve it.
A: Okay, and I guess the action coming from that is on me, actually, to look at the extent to which microformats, RDFa, etc. remain useful in discoverability. So that's actually a third part of the specification, really:
C: Non-JSON-LD machine-readable vocabularies. So yeah, I'd put Open Graph and Open Data Rights in that same bucket. There's basically a bunch of stuff that we were doing with the open data people. I don't know what the ODI's current guidance is on this as well; that would be interesting. There's a bunch of random markup that you're supposed to add to all these things. How much of that is current? I mean, a lot of this information is four years old now.
C: So, like I said, I think the point of this is that the MUSTs and SHOULDs kind of give the ideal outline of what we want everyone to do, and if we've already got a template that does it for them as well, then yeah, I don't see any harm in almost going slightly overboard with specifying Open Graph, RDFa, Open Data Rights. I mean, this...
C: The point of this page is really to make sure that anyone that reads it gets everything. Oh yeah, that's it: if you ever apply for an Open Data Institute certificate, that tool reads the RDFa, I think, or microdata, in the page. It doesn't read the JSON-LD, because it predates that. So it pulls all the stuff out into the certificate for you; that's an example of one of the tools that does this.
C: Yeah, so I guess it's about SHOULDs and MUSTs, isn't it? Can we put it in as a SHOULD, and then: here's a template that does it all for you and includes the SHOULDs and the MUSTs, if you want. Yeah, okay.
C: So, could we (have we even got much time left?), would it be possible to bring up the actual issue on the W3C group? Because it would also be good to get people on the call to contribute to the issue directly, if they've got thoughts. So this is, sorry, to add further context:
C: As Tim's just said, or as I said earlier in the call, we've got a WebAPI proposal in with schema.org, through their own mechanism, to actually add some properties to schema.org, which hopefully then will get adopted by Google and everyone else. And one of the outstanding questions in that proposal...
C: ...is due to be answered very shortly, because they're going to include it in the next version of schema.org's release, which I think is in August. And it's really a very simple semantic question, which hopefully everyone from the technical spectrum on the call will have a clear view on, and then we can contribute those views, ideally, to that discussion, which will help move it forward in terms of schema.org's own processes. That was the thing.
A
That's
it
yeah
there
we
go
yes,
and
I
can.
I
can
include
this
link
in
the
call
notes
as
well
yeah,
so
this
is
just
making
it
more,
as,
as
the
user
summarizes
it,
making
a
more
explicit
separation
between
human,
readable
and
machine
editable,
endpoint
descriptions
yeah,
it
seems
like
confusing
these
two.
C
Would
be
just
incredibly
irritating
so
can
we
make
this
super
tangible?
Sorry,
tim?
Would
you
mind
clicking
on?
Maybe
you
need
to
have
the
the
web
web
api
discovery
link
at
the
top
there,
and
just
and
just
opening
up
from
there
from
the
readme
that
it
will
appear
the
the
in
that
repo.
Sorry
in
that
repo
rf
cs,
then
in
the
readme
there's
a
link
which
is
the
right.
That's
it!
So
that's!
This
is
the
proposal
that
is
being
put
forward
by
the
another
community
group.
C
So
mike's
very
kindly
allowed
me
to
jump
on
his
editor
of
this
document,
so
we
can
move
it
forward
because
he
didn't
have
much
time
to
do
that.
So,
if
you
scroll
down
to
the
documentation,
section
3.2,
sorry
not
scroll
down,
click
on
yeah
3.2,
and
then
yes
right.
So
this
is
really
concretely
what
we're
saying.
So,
if
you
scroll
down
to
that
a
little
bit
of
json
there,
that's
it
so
you've
got.
So.
This
is
what
right.
C
So
this
is
the
current
proposal
from
the
dcat
two
working
group
in
their
mapping
now
context
here,
as
decatur
working
group
were
doing
that
at
a
point
in
time
before
schema.org
had
any
other
features,
so
they
haven't
necessarily
tried
to
do
the
mapping
taking
into
account
all
possible
options.
C
They've
just
done
the
mapping
to
whatever
of
schema.org
happened,
to
have
at
the
time,
and
what
they've
decided
to
suggest
is
that
both
human
and
machine,
readable
documentation
go
into
one
property,
as
this
shows
in
this
json
and
use
it
and
and
then
by
implication,
use
a
different
encoding
format
to
to
specify
which
it
is
so
in
this
you
can
see
that
the
human
readable
documentation
is
text.
C
Html,
that's
the
first
element
in
the
array
and
the
second
and
third
elements
are
machine,
readable
documentation,
and
you
can
tell
that
by
the
encoding
format.
That's
in
there
again,
they're,
not
text,
html,
they're,
a
machine,
readable
format,
and
so
that's
what
they're
proposing
is
mapping
and
as
as
that
issue
discusses
the
the
issue
makes
it
clear
that,
although
that
is
the
case,
that
that
is
what
is
being
suggested
and
proposed
by
gcat.
C
Actually
that
creates
some
ambiguity,
because
if
you
want
to
parse
out
the
links
for
human
machine
readable
separately,
you
need
to
white
list
those
mime
types
to
know
which
is
human
and
which
is
machine,
readable,
and
that
assumes
that
you've
obviously
got
an
idea
about
all
the
possible
mime
types
of
either
and
that
might
be
straightforward,
because
maybe
you
only
want
to
pull
out
text
html
as
the
human,
readable
ones
or
it
might
create
more
complexity,
because
that's
just
that's
just
another
thing
that
you've
got
to
code
in
and
it's
not
entirely
clear.
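A sketch of that single-property shape and the MIME-type filtering it forces on consumers. The URLs and the exact machine-readable encoding formats are invented for illustration; the point is the whitelist the call is describing.

```python
# Single-property shape (per the DCAT 2 mapping proposal discussed):
# human- and machine-readable documentation share one array,
# distinguished only by encodingFormat. Values are illustrative.
documentation = [
    {"encodingFormat": "text/html",
     "url": "https://example.org/docs"},
    {"encodingFormat": "application/vnd.oai.openapi+json",
     "url": "https://example.org/openapi.json"},
    {"encodingFormat": "application/ld+json",
     "url": "https://example.org/api.jsonld"},
]

# Consumers must whitelist MIME types to split the list.
HUMAN_READABLE = {"text/html"}

human = [d for d in documentation
         if d["encodingFormat"] in HUMAN_READABLE]
machine = [d for d in documentation
           if d["encodingFormat"] not in HUMAN_READABLE]

print(len(human), len(machine))  # → 1 2
```

The ambiguity raised on the call is exactly that `HUMAN_READABLE` set: every consumer has to maintain its own idea of which formats count as human-readable.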
C
So
I
I
personally
can
go
either
way
on
this.
Having
done
the
editing
of
the
draft,
I
put
it
in
conforming
to
decat2,
because
that's
what
seemed
easier
and
a
little
the
least
friction.
However,
for
good
reasons,
some
folks
in
the
community
have
said.
Is
that
really
the
right
way
and
and
have
we
thought
about
this?
C
Maybe
we
should
think
about
endpoint
description
instead,
so
the
alternative
proposal
is
instead
of
having
one
property
and
only
differentiating
based
on
the
mime
type,
have
two
properties:
one
property
for
human,
readable,
one
property
for
machine,
readable
and
just
split
those
across
both
and
that's
literally
the
difference,
and
I
can
see
good
arguments
for
both
sides.
So
it
gets
to
be
really
interesting
to
see
what
you
guys.
Think
of
that.
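The two-property alternative might look like this. The property names follow the transcript's mention of documentation and endpointDescription, but treat the exact shape as an assumption; the URLs are invented.

```python
# Two-property shape: human-readable docs and machine-readable
# endpoint descriptions live in separate properties, so no MIME-type
# whitelist is needed to tell them apart. The shape is illustrative,
# echoing the "endpointDescription" idea raised on the call.
web_api = {
    "@type": "WebAPI",
    "documentation": [
        {"encodingFormat": "text/html",
         "url": "https://example.org/docs"},
    ],
    "endpointDescription": [
        {"encodingFormat": "application/vnd.oai.openapi+json",
         "url": "https://example.org/openapi.json"},
    ],
}

# The split is now structural rather than inferred from MIME types.
assert all(d["encodingFormat"] == "text/html"
           for d in web_api["documentation"])
```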
B: I think, when you're parsing a list like that, being able to very quickly tell what's going to be human-readable and what's machine-readable would be very useful.
B
Not
necessarily
afford
to
put
them
in
your
white
listing.
D: To render them, if you don't know how to render Markdown, for instance, you know: are you, by necessity, on fetching this, having something ready to render it in human-readable form? And so you wouldn't have a whitelist, because you've only got the things that you know how to handle.
D
I'm trying to work out, for the human-readable part, whether there's a case where you need to do something to process it that isn't just handing it to an actual thing that's going to be read by a human, at which point that thing can either handle the format or not.
D
I'm slightly informed by the RSS approach, where, you know, you hit it asking for text/html and you get a web page with a blog post in it, and you hit it asking for XML, for instance, or Atom or whatever it happens to be, and you get that same content, but in the machine-readable format. And what Nick showed on the proposal there feels kind of like that.
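The RSS-style approach described above is HTTP content negotiation: one URL, with the representation chosen from the request's Accept header. A minimal sketch of the server-side choice, simplified in that real Accept parsing also weighs quality values and partial wildcards like `text/*`:

```python
# Sketch of content negotiation: pick a representation for one URL
# based on the request's Accept header. Simplified: ignores q-values
# and partial wildcards such as text/*.
def negotiate(accept_header, representations):
    """Return (mime_type, body) for the first acceptable representation."""
    accepted = [part.split(";")[0].strip() for part in accept_header.split(",")]
    for mime in accepted:
        if mime in representations:
            return mime, representations[mime]
    if "*/*" in accepted and representations:
        # A bare wildcard gives no real clue what the client wants,
        # so fall back to an arbitrary default (the first registered form).
        mime = next(iter(representations))
        return mime, representations[mime]
    return None, None
```

The `*/*` branch is the browser case raised later in the discussion: when the client accepts anything, the server has to guess.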
C
So you're saying, I guess, in the case that we wanted to, for example, render this to a web page: you know that browsers can only handle a set number of types of human-readable forms. So in order to put a link in the browser that went to view more documentation, for example, just thinking practically what it would look like, you'd probably want to put a 'view documentation' button in the browser, and to render that button you'd need to know which links it would be appropriate for.
C
I guess you would just take what you understand of browsers: they can probably deal with HTML, maybe XHTML, maybe whatever else they might deal with, and then infer from that. So, Chris, you think the whitelisting approach actually could work?
D
I'm certainly fishing for an anti-case to say that it couldn't work, if you see what I mean, rather than accepting, well, it was a problem. And it's interesting, because browsers are an interesting thing: they can represent a lot of different formats, and so if they say 'Accept: */*' when asking for one of these things, then you haven't really got any clue as to what to give them.
C
A
Yeah, I mean, I remember on an earlier call we were talking about sort of subspecies of markdown we would like to support, so I guess there are always going to be edge cases there, yeah.
D
So I'm actually wandering back towards Nathan's idea that possibly there is a thing that says: this is intended as a human-readable format. At least it gives you, if one of the things you've been given back is flagged as human readable, even if you don't know for sure whether you can support it, the option to throw the browser at it and hope that it can cope with whatever it is being asked to render.
A
And I think it's acceptable for things to be borked if it's marked human readable: you pull it in and it looks weird to you, and nothing particularly goes wrong there. I mean, it's not intelligible to you and, of course, you're not too sure how to serve it on to subsequent clients. But if it's marked machine readable, there's a sort of strong presumption of how usable and automatable that's going to be, so it's useful to have that distinction asserted right up front, I would have thought.
A
The fault tolerance, I suppose, of those two kinds of guidance is very different.
C
Great, well, this is really great feedback, because I think, when that point in the proposal was added, this consideration about how easy it would be in practice maybe wasn't as clear.
C
So I wonder if it's possible, if you guys are able to, to just post the thoughts that you have on this particular issue, just so that that's on record. And, you know, we don't have to all agree with each other; it's a good discussion to have.
C
It's not me trying to rally consensus from another group; it's more just good technical people, with some thoughts, who will have a vested interest in the problem. So I've just posted the issue on the Zoom group chat, so yeah, if you're able to just quickly post a thought on there about that discussion, that would be really useful to progress that chat.
A
Which leads into the final point on the slides, which is just scheduling. So, as I mentioned, it looks like schema version 10 will hopefully be shipping late August, which means that we can make firm statements in this dataset site specification after that point, so in the very near future, before September.
A
So yeah, gathering consensus early would be very helpful in order to make sure that the existing WebAPI proposal gets integrated into schema.org as soon as possible, and then I will also work on integrating actions from this call, and also just general proofreading and any other issues raised by that point.
A
With two minutes left on the call, are there any comments people would like to make, either on the scheduling or, more generally, on any other business arising?
A
Okay, I'll just thank you all for being on the call then, and yeah, we should be able to progress this into something that's not a zero point something in the reasonably near future.
C
So there are a couple of typos I noticed when skimming the doc, which I can mention separately, so I won't spend time on that now. But there was one thing I noticed that was missing, which was the booking service. Was it intentional to miss that? Just to get an understanding: if it's not, then I can just add it as an issue. Yeah, just raise that; as you'd expect, it was not intentional.
C
That's not perfect, okay, great. And then the other question I had was just around this issue of the conformance stuff, so I've posted it up now and I'll amend it after the call: the conformance certification (not the covert certification), but whether or not to link to conformance certifications within feature profiles. But I realize that's probably not a one-minute topic, so I'm just flagging that as something that might be worth coming back to or thinking about.
A
We can add it to the agenda for future calls. I'm not too sure; I think it might be kind of a hypothetical discussion for the present, but yeah, we can revisit it in the near future. Yeah.
C
Yeah, that sounds good. Yeah, I mean, mainly to solve the problem we spoke about earlier: how do we get these feature profiles, which I know Nathan at the top of the call mentioned, into the spec? I think there are two ways of doing that: one involves conformance certification, one doesn't. Maybe we were thinking more of the conformance route before, but I just want to make sure that's not lost. Sure, yeah, okay.
A
Okay,
if
that's
everything
I'll
thank
you
again
at
ending
with
laser
sharp
precision
at
the
end
of
the
hour,
thanks
very
much
everyone
all
right.