From YouTube: IETF-SCIM-20220920-1500
Description
SCIM meeting session at IETF
2022/09/20 1500
https://datatracker.ietf.org/meeting//proceedings/
D
Sure thing. Good morning, everybody. It would be great if someone could volunteer to take notes; there is a little note-taking tool inside this meeting software. You should find it with a little pencil icon. Can we get at least one volunteer?
D
All you have to do is roughly take notes on what people are presenting (yes, thanks, Nancy, for the link), and, even more important than the presentation content, capture any of the replies in the discussion that happens here.
D
You should be able to just click the little edit link in that.
D
So, thank you, Joel, for that. A quick intro to the agenda: you should be able to find the agenda link in the software as well (it's in the notes document, actually), and we have a couple of drafts to cover today, so hopefully we'll hear from everybody who has updates on their drafts, starting with Pamela. We have Pamela first on the agenda, but I don't see her in the participant list here, so we may have to bump her to later. Hopefully she shows up shortly.
D
So I only got one set of slides emailed to me ahead of time, but again, sorry for the short notice of sending out the request for them. If you have slides that you want to present, you can share your screen through this, that's fine, or, if you want, you could also email them to me right now and I can upload them.
D
To get there: on the left column there's the little list of all the people, and there's a separate tab next to that with little chat bubbles; that'll switch the participant list into the chat view.
A
I just joined using the link and logged in with the Datatracker credentials; went through the front door, I think. I'll just look at the notes, and if there's anything important that happens in chat, I'm hoping to see some questions that people have and answer them, if they don't want to say them out loud, or can't because they don't have a mic or something like that. Yeah.
D
I'll be sure to repeat anything that comes in the chat so that everyone can hear it. There is one other way to get to the chat: it's called Zulip, at zulip.ietf.org, and they've now integrated that with this meeting tool, so there is a SCIM room there and it's the same as the one for this meeting.
D
It's a little bit hard to find, but hopefully you can get in that way too. Okay, Pamela is here. Fantastic.
E
Can you guys hear me okay? Yes? Oh good, I was about to panic. Okay, so, use cases. I think I have 25 minutes, right? But I'm happy to keep it shorter to make sure we're still on agenda here.
E
Right, just cut me off. All right, let's share my screen. Well, I guess we're going to find out. Window... yeah, all right, that works. So the quick update here on the use cases is that I have been going through the specs and looking for the terms that are heavily used in RFC 7643 and 7644.
E
Not
I
just
thought
I'd
give
a
quick,
oh
okay,
quick
update,
so
I've
been
going
through
to
try
to
find
the
terms
because
I,
you
know
in
some
sense,
if
we
can
use
the
terms
in
the
spec
within
the
use
cases
I
think
that's
probably
the
best
way
to
get
people
who
are
reading
the
use
cases
sort
of
into
the
spirit
of
using
these
these
terms
in
context,
so
that
when
they
read
this
back,
they
can
understand
what's
happening,
and
so
what
I
ended
up
doing
is
going
down
a
rat
hole
of
interesting
proportions.
E
So it'll only let me share the deck that's in the queue, the cursor-based pagination deck.
E
I'm going to try the entire screen, just to see if that's going to work.
E
So what the specifications define is a provisioning domain, and that provisioning domain is affiliated with the client. If you look through the specs, the provisioning client either pushes out of the provisioning domain or pulls into the provisioning domain. Now, what isn't clear in the specs, as far as I can interpret them, is exactly what happens in the case of different clients in different provisioning domains.
E
So
the
client,
the
external
ID,
is
defined
as
being
relative
right
and
defined
only
to
the
scope
of
a
given
provisioning
domain,
and
so
the
question
that
I
have
for
you
that
maybe
you
guys
can
weigh
in
on
will
I
share.
This
with
Danny
is
in
the
case
where
the
provisioning
client
is
pushing
a
new
resource
to
the
skim
server
right.
E
They have a chance to specify the externalId, and as long as the server agrees that that externalId is not a collision, then, generally speaking, the SCIM server will accept it, but it may return a 201 with a differentiating externalId. So that piece is well defined in the spec.
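That create-time negotiation can be sketched as follows. This is a toy model of the flow described above, not any specific implementation; the store, function names, and id values are all illustrative.

```python
# Sketch of externalId negotiation on resource creation (per RFC 7644 §3.3):
# the client pushes a new User carrying its own externalId; the server
# assigns the permanent "id" and, absent a collision, accepts the externalId.

def create_user(store, payload):
    """Toy SCIM server: accept a create, reject externalId collisions."""
    ext = payload.get("externalId")
    if ext is not None and any(u.get("externalId") == ext for u in store.values()):
        return 409, None  # collision: uniqueness conflict
    scim_id = f"user-{len(store) + 1}"  # server-assigned, permanent
    resource = dict(payload, id=scim_id)
    store[scim_id] = resource
    return 201, resource

store = {}
status, created = create_user(store, {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
    "userName": "bjensen",
    "externalId": "hr-00042",   # client-scoped identifier
})
assert status == 201 and created["externalId"] == "hr-00042"

# A second create reusing the same externalId is a collision.
status2, _ = create_user(store, {"userName": "other", "externalId": "hr-00042"})
assert status2 == 409
```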
E
What is not well defined is the case where you now have a client in a separate provisioning domain that wants to access that same resource. We know we have an externalId that is scoped to the provisioning domain, but this is perhaps the first time the client in the new provisioning domain has ever accessed the resource.
H
Yeah, it's Phil Hunt here. This goes back to one of the issues you have in writing standards: the primary focus, always, when you write a spec, is what the interop issue is, while giving implementers the flexibility to do things how they want. So there wasn't a consensus on describing the entire procedure for this. That's why it's not all there, because some people will want to do something sophisticated and some people will want to do this trivially.
H
So
what
the
contract
is
really
saying
is
that
the
client
that
sets
an
external
ID
entity
is
can
keep
using
that
external
ID
to
reference
that
object
and
that
that
opens
takes
some
pressure
off
because
then
the
client
doesn't
have
to
remember
all
the
skim
identifiers
which
are
permanent,
so
the
contract
is
between
the
client
and
the
server.
So
what
a
server
implementer
could
do,
then
is
bind
the
client,
credential
or
client
subject
to
that
external
ID,
so
that
client
aim
asks
the
question.
Client
B
can
use
a
different
identifier
if
it
so
chooses.
H
Now
that
is
not
exposed.
You
can't
list
the
multiple
external
identifiers.
Each
client
then
sees
its
own
another
implementer
can
say
there
can
be
one
and
one
only
it's
up
to
the
implementer
or
the
design
of
their
system,
underneath
as
to
what
the
overall
capability
is,
and
so
the
spec
by
intention
was
silent
on
that
matter.
E
Okay, so for those of you who have implementations: Danny, I did share this slide with you, so you can put it up, just so that we have some kind of reference for the conversation. But here's the question. If I am a new client accessing a resource I didn't create, so I never negotiated that externalId interaction, or the binding, essentially, between the SCIM server and the client, then do I get no externalId back?
E
Finally,
thank
you
so
much
Danny
I'm!
So
sorry
you
all
so!
If
you're
going
to
create
the
resource,
then
the
negotiation
of
the
external
ID
right
with
provisioning
domain
a
here
on
the
left
hand.
Side
is
very
straightforward
right.
So
the
question
is:
if
you're
going
to
query
the
resource
on
the
right
hand,
side
then,
are
you:
are
you
going
to
get
an
external
ID
back
at
all?
H
It's only specified that if the same client asks for it back, it would get back the externalId. It's up to the implementer of the service to decide whether they're binding it to the client specifically or to the entire object. The spec is silent on that matter, so you can't count on it if you're doing a multi-client system.
H
That's
the
downside.
What
you
can
count
on
is
that
the
URI
for
a
skim
object,
never
ever
changes
and
you
can
calculate
the
URI,
because
the
identifier
also
never
changes.
So
you
can
share
that
identifier
between
multiple
clients.
That's
always
guaranteed
to
be
stable.
So
it's
unlike
a
graph
API
skim.
This
skim
identifiers
are
permanent
and
this
is
why
the
whole
we're
into
the
delete
Resurrection
discussion,
because
identifiers
are
permanent.
We
want
them
to
survive,
deletion
and
Resurrection
as
well.
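A small illustration of that guarantee (the base URL is hypothetical; the id is the example value from RFC 7643): since the server-assigned `id` never changes, any client holding it can reconstruct the same resource URI.

```python
# The SCIM resource URI is derivable from the permanent "id" attribute,
# so multiple independent clients can share the id and compute the same URI,
# regardless of which client originally created the resource.

def user_uri(base_url, scim_id):
    return f"{base_url}/Users/{scim_id}"

shared_id = "2819c223-7f76-453a-919d-413861904646"  # example id from RFC 7643
uri_a = user_uri("https://example.com/scim/v2", shared_id)  # client A
uri_b = user_uri("https://example.com/scim/v2", shared_id)  # client B
assert uri_a == uri_b == "https://example.com/scim/v2/Users/2819c223-7f76-453a-919d-413861904646"
```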
H
The identifier is actually better, in my opinion, because even over time the domain name for the server might change more often than the SCIM identifier does. So the identifier is the permanent identifier that you can always count on.
H
Still, there were people who said, "I can't store anything in my client; it's a limited-capability client," or whatever. It just wants to use a key that it's got, which might be a user ID or something. I don't know. This is totally unscientific, but my experience is that, for all the reasons you described, externalId is not actually used that often, and that was no surprise to me.
E
Okay. To me it's a very confusing concept, and I think we should try to communicate it in the use cases and concepts piece so that people at least understand that piece.
E
Is
there
ever
a
case
where
an
external
ID
can
be
negotiated
for
a
client
that
did
not
create
the
resource,
because
that
seems
like
the
only
possibility
here
and
I've
also
never
heard
of
a
skim
server
managing
different
external
ideas
per
client,
but
that
seems
to
be
the
that's
the
implication
of
the
standard.
H
Well,
I
think,
as
Matt
said,
and
in
my
experience,
external
ID
is
really
a
Fail-Safe
connector.
It's
not
what
clients
use
they
use
the
permanent
URI.
That's
that's
the
way
it's
done
and
and
part
of
the
simplification
is
to
do
that.
I
I
want
to
sort
of
alert
you
to
there's
another
parallel.
That
group
that
went
through
this
and
that's
the
open,
ID
foundation
and
shared
signals.
One
of
the
problems
they
went.
H
Is
they
rolled
out
with
open
ID
without
specifying
in
the
contract
how
to
reference
objects
over
time
so
that
you
create
a
single
sign.
You
create
a
sign
on
assertion
and
that
oauth
that
jot
token
flows
through
the
system
to
the
downstream
client
web
server.
That
then
establishes
its
own
session,
but
the
session
of
the
client
web
server
is
completely
independent
of
the
the
open
ID
provider,
and
one
of
the
things
they
forgot
to
say
is:
if
I
talk
about
this
object
ever
in
the
future.
Here's
the
common
identifier
between
the
two
so.
H
Google
talked
about
this
publicly,
so
I'll
comment,
I
mean
they
said.
You
know
they
have
a
hundred
thousand
at
the
time.
So
this
was
eight
years
ago,
nine
years
ago
they
have
over
a
hundred
thousand
active
clients
against
the
Google
IDP,
not
one
of
them.
There's
no
standards
can
do
it.
So
if
Google
said
this,
this
token
is
revoked
or
this
session
is
revoked.
The
clients
are
tracking
in
a
completely
different
way,
so
once
they
so
all
that
to
say
there
was
a
strong
need
to
have
agreement
on
identifiers.
H
What
started
then
was
in
the
shared
signal
Community.
They
started
the
subject,
identifier
drafts,
so
that,
in
addition
to
specifying
a
subject,
you
could
specify
a
subject
in
different
ways.
It
became
very
quickly
and
I
don't
mean
to
be
derogatory.
It
became
spaghetti
on
the
wall
where
I'll
throw
out
every
identifier.
I
know
for
the
subject,
and
hopefully
you
understand
one
of
them,
and
so
they
created
a
spec.
So
you
could
do
that.
H
That
was
one
of
the
ways,
one
of
the
use
cases
behind
the
subject,
identifier
draft
and
that's
useful,
but
it's
also
scary.
It's
because
you
didn't
establish
the
standard
at
the
outset.
F
Hey, yeah. In my experience, I guess there are two parts there, although I originally raised my hand for just one. To the question of different clients negotiating different externalIds: I've never seen that in the major implementations out there. I've actually seen externalId be leveraged fairly frequently for larger enterprise applications using SCIM, where there's an expectation that the SCIM manager is probably an organization's connected IdP, and specifically for scenarios where maybe a migration from one IdP to another is happening, IdP being an identity provider; I don't want to cause confusion with any acronyms.
F
If
you're
moving
from
Identity
provider
A
to
B,
it
may
be
that
your
different
systems
have
different.
You
know
we'll
call
them
friendly
names
here.
Things
like
email
address,
a
username
type
thing
where
you
don't
have
the
right
information
to
match
on
that,
but
that
external
ID
may
be
something
that
you
can
leverage
and
it's
being
pre-populated.
F
You
know
knowing
that
you're.
You
know
for
your
first
IDP
gave
it
such
and
such
internal
identifier,
and
you
can
then
Port
that
over
and
leverage
that
as
sort
of
that
I
I
do
agree.
It's
a
Fail-Safe
mechanism
for
identifying
or
matching
you
know
an
object,
so
yeah
I've
never
seen
like
different
clients,
return
different
extra
IDs
as
something
that
anybody
has
implemented.
F
I've
always
seen
it
as
just
sort
of
a
a
straight
string,
but
I
I
do
see
value
in
it
and
I
I
think
I
see
it
implemented,
probably
more
than
some
of
the
other
folks.
E
Okay,
I
mean
I
feel
like
we
could
do
a
revisionist
approach
to
external
ID
to
state
that
there's
two
use
cases
where
it's
huge
I
mean
if
there's
ever
a
document
to
put
that
in
right.
We
can
talk
about
the
Enterprise
IDP
use
case
in
the
migration
use
case
and
say
this.
You
know
external
ideas
where
this
is
valuable
and
it
does
seem
to
be
about
really.
E
Like
you
said,
it's
a
fail,
safe
during
initial
creation
of
the
objects
right,
whether
that's
creation
of
the
object
for
migration
or
creation
of
a
very
long-term,
very
authoritative
relationship
between
an
Enterprise
IDP,
and
you
know,
and
a
data
store.
Does
anyone
have
heartburn
if
we
at
least
try
that
in
draft
format.
C
This
case
is
where
at
least
not
necessarily
your
skin
content,
but
we
see
cases
where
a
user
profile
gets
populated
from
multiple
data
sources
and
each
data
source
actually
identifies
that
particular
identity
to
use
different
external
IDs.
In
another
pretty
case,
the
identity,
like
external
ID,
is
not
stored,
necessarily
in
the
central
scheme
profile,
but
it's
kept
from
other
Pro
data
source
basis
and
that
mapping
exists
to
allow
allocate
the
client
a
different
like
a
clients
from
team
data
sources
to
it
to
essentially
identify
the
users
in
their
system
is
easier.
So.
G
We implemented this in an internal organization, and basically the organization did always have a unique ID, so it didn't matter how many SCIM clients we had; the externalId would always be the same. Then we had an IdP involved as well, as well as HR systems, so they all shared that one ID.
E
Okay, yeah. The dilemma I have right now is how we characterize this in the way that people are actually using it. It sounds to me like there's a sort of a split: it sounds like Okta is using externalId more faithfully, I would say, to at least how it's explained in the spec than others, because the others just haven't had the need for it.
E
Okay,
I
know,
I'm
going
to
get
caned
off
here,
so
I
will
I,
will
try
and
summarize
this
in
and
mail
it
to
the
list
so
that
others
can
weigh
in
and
I
may
take
a
survey.
If
that's
okay
Nancy
on
the
list.
D
Thank you. Next up we have Danny and Matt. Which of you is going to present, or both? And the slides are already loaded.
A
All right, thank you. Yeah, so I guess the first thing to mention is that currently the only draft that's published is the first revision, and there's a second revision underway that addresses an ambiguity that was brought up on the list; I'll get to that in a minute. Unfortunately, prior to this meeting we did not have a chance to upload the latest draft, but it's very similar to the first one. It just has a few wording changes and addresses the ambiguity.
A
Let
me
just
go
through
really
quickly
and
and
and
talk
about
the
reason
for
the
draft.
This
year
the
draft
was
born
out
of
multiple
implementations,
service
provider,
implementations
in
front
of
many
different
application,
apis
and
databases,
and
what
we
found
is
we
implemented
skim
service
providers
is
that
the
pagination
strategy
was
already
defined
in
existing
code
bases
that
we
were
writing
scheme
implementations
on
implementations.
On
top
of,
if
we
were
riding
on
top
of
an
API
that
already
supported
index
offset
pagination,
then
it
was
a
very
easy
implementation
for
us.
A
But
if
our
underlying
database
or
our
underlying
API
supported
only
cursor-based
pagination,
it
was
really
difficult
to
implement
indexed
pagination.
On
top
of
that,
so
you
know
it's
a
really
easy
diagram
to
visualize.
This
is
that
you
know
you've
got
you've,
got
your
your
typical
applications.
Kind
of
on
the
right
hand,
side
of
this
slide
where
there's
a
web
API,
usually
or
a
database,
that's
sitting
on
top
of
either
a
directory
API
or
a
database
API,
and
then
you're
building
your
skim
service
provider
either
directly
to
the
web.
A
Api
that's
already
existing
or
you're
reaching
underneath
and
talking
to
the
database
layer
and
pagination
strategy
is
already
established.
You
don't
really
have
the
opportunity,
as
a
skim
service
provider,
to
go
back
and
change
any
code
in
the
application
to
make
it
easier
for
you
to
implement
index
pagination,
and
so
we've
needed
to
have
a
way
of
implementing
cursor
patch
Nation,
and
that's
really
what
the
the
spec
is
about
or
the
the
draft.
A
It's
just
a
proposal
for
cursor-based
pagination
and
when
we,
you
know
kind
of
got
together
as
an
interest
group
as
prior
to
recharging
the
work
group,
Phil
and
I
put
our
heads
together,
and
we
we
just
you
know
we
tried
to
think
about
what
are
the
scenarios
where,
where
pagination
is
important-
and
you
know
it's
obviously,
when
there's
going
to
be
large,
results
sets,
and
it
turns
out.
A
It turns out that most of the time when we were retrieving large data sets, it was to get the initial load of users or groups onto the client, and then the client would periodically make queries to the service provider to keep this local cache up to date. This is a very typical scenario for identity management systems: to have a local representation of all of the identities, including all of those that come from SCIM service providers, so that identity management tasks can be accomplished. So we proposed a question: if all we really need is the ability to keep a client and a server up to date, in sync with each other, do we need pagination, or can we just get away with having some kind of change notification, which I think Phil will talk about?
A
And
we
propose
that
to
the
interest
group
and
the
response
back
as
I?
Remember
it
I,
don't
have
the
notes
to
prove
this,
but
I
remember
distinctly
many
people
being
having
continued
interest
in
pagination,
in
spite
of
also
having
the
ability
to
do
change
notification,
and
so
what
Danny
and
I
will
will
be
saying
in
submitting
this
draft
is
that
if
we
need
pagination
and
skim,
we
need
to
have
some
interoperability
for
cursor
pagination.
A
There's a next cursor and a previous cursor, and a response which ends up looking a little bit like this: you've gone ahead and asked for a resource, specified that you want cursor pagination, and you get a next cursor in your response, which you can include in the subsequent request to get the next page.
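A sketch of what such an exchange might look like, modeled on the nextCursor/previousCursor attributes described above. The attribute names follow the cursor-pagination draft's description, but the cursor values, counts, and paths here are illustrative, not taken from the deck.

```python
# Shape of a cursor-paginated SCIM query response: the client asks for a
# page, and the server returns an opaque cursor to pass back on the next
# request. The cursor value itself is meaningful only to the server.

first_page = {
    "schemas": ["urn:ietf:params:scim:api:messages:2.0:ListResponse"],
    "totalResults": 5000,
    "itemsPerPage": 100,
    "nextCursor": "VZUTiyhEQJ94IR",          # opaque; send back to get page 2
    "Resources": [{"userName": "bjensen"}],  # ...100 results in practice
}

# The follow-up request simply echoes the cursor as a query parameter.
next_request = f"/Users?cursor={first_page['nextCursor']}&count=100"
assert next_request == "/Users?cursor=VZUTiyhEQJ94IR&count=100"
```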
A
So
I
don't
want
to
go
through.
You
know
all
of
the
draft
in
slide
form,
but
it's
a
very
simple
proposal
and,
like
I
mentioned
earlier,
it's
very
similar
to
the
original
proposal.
What
we'll
have
in
the
next
three
revision
is
just
a
few
wording
changes
and
some
some
grammar
punctuations
that
were
in
the
original
draft
pick
some
of
that
stuff,
and
then
there
was
an
ambiguity
that
was
discovered
and
pointed
out
on
the
list
about.
A
You
know
what
do
I
do
as
a
service
provider
if
I
support
both
cursor
and
index
pagination,
if
the
cursor
parameter
is
optional,
which
is
what
the
original
draft
specified
and
there
is
an
ambiguity
there
and
we
resolve
that
by
just
simply
making
the
cursor
query
parameter
mandatory.
And
if
you
want
the
first
page,
you
just
Supply
an
empty
cursor,
and
that
was
the
way
that
WS2
ended
up
implementing.
It
worked
great
for
them,
so
that
solved
ambiguity
and
we
just
written
the
draft.
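That resolution, cursor always present and an empty cursor meaning "first page", can be sketched as a client-side sync loop. The `fetch_page` stand-in, the cursor strings, and the toy data below are all hypothetical, standing in for real HTTP calls.

```python
# Client loop under the revised rule: the cursor parameter is always sent,
# and an empty cursor requests the first page. The loop ends when the
# server returns no next cursor.

def fetch_page(pages, cursor):
    """Stand-in for GET /Users?cursor=...; returns (results, next_cursor)."""
    return pages[cursor]

# Toy server data: cursor -> (results, next_cursor); "" selects page one.
pages = {
    "":   (["user1", "user2"], "c2"),
    "c2": (["user3", "user4"], "c3"),
    "c3": (["user5"], None),            # no next cursor: last page
}

def sync_all(pages):
    cursor, out = "", []                # empty cursor = first page
    while cursor is not None:
        results, cursor = fetch_page(pages, cursor)
        out.extend(results)
    return out

assert sync_all(pages) == ["user1", "user2", "user3", "user4", "user5"]
```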
G
I
just
want
to
mentioned
that
one
of
the
other
issues
is
even
you.
If
you
do
change
notification,
you
may
find
that
you
need
with
the
paint
set
and
do
some
sort
of
cursor
or
count
notification.
G
The
if
you
are
doing
the
change
set
notification,
the
the
change
set
may
be
large,
so
you
may
need
to
do
cursor
or
space
term.
10
sets.
H
Thank
you
I'm
a
little
confused
in
the
event
notification
events
are
always
per
object.
There's
no
set
mode.
H
The only place where it may get complex is a group, which is why we have a minimal profile, where you might not want to send all of the change data in a group, or you might not want to send an event for every change to a group.
H
So
you
may
say
the
group
has
changed
and
the
client
has
to
go.
Pull
that
group
to
get
its
current
state.
All
right,
larger
group
gets
and
you're
often
it
changes.
So
it's
the
opposite.
You'll
get
you'll,
get
reduced,
numbers
of
changes
or
the
or
the
the
asserting
party
May
May
aggregate
changes
to
a
single
object,
but
you'll
never
get
multiple
objects
in
a
single
event.
That's
that's
not
even
allowed.
By
set.
A
Okay,
so
so
Danny's,
you
know,
concern
isn't
an
issue,
then
it's
basically
what
you're
saying:
no,
no
okay,
yeah,
I,
I'm
I.
Think
that
one
of
the
concerns,
although
it
wasn't
spoken
when
we
when
we
had
the
meeting
of
interested
people,
was
that
what,
if
you
know.
D
A
The
light
of
a
of
an
upcoming
change
notification
mechanism.
It
would
if
what,
if
a
client
didn't
want
to
implement
that,
and
they
want
to
just
kind
of
go
old
school
and
just
submit
a
query,
can
I
please
have
pagination
I
feel
like
there
was
still
some
interest
in
pagination.
In
spite
of
having
changed
notifications.
H
Yeah
yeah
and
I
still
confused,
because
so
one
of
the
trigger
issues
is
the
Max
results
limit,
which
is
not
a
mandatory
thing.
It's
just
that
servers
are
allowed
to
assert
it.
But
if
you
are
paginating
that
wouldn't
solve
your
Max
results,
because
if
you're
only
allowed
a
thousand
results
in
a
query,
the
fact
that
you
pulled
it
back
in
10
pages
doesn't
let
you
break
through
Max
results.
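The interaction Phil describes can be shown with simple arithmetic, using the thousand-result figure from his example (the 50,000-row query total is a made-up illustration):

```python
# Paging does not bypass a server-asserted maxResults cap: whether the
# client asks for one page of 1000 or ten pages of 100, the server never
# returns more than maxResults items for a single query.

MAX_RESULTS = 1000

def serve(query_total, page_size):
    remaining = min(query_total, MAX_RESULTS)  # cap applies to the whole query
    pages = []
    while remaining > 0:
        take = min(page_size, remaining)
        pages.append(take)
        remaining -= take
    return pages

assert sum(serve(50_000, 100)) == 1000  # ten pages of 100, then cut off
assert len(serve(50_000, 100)) == 10
```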
H
So
while
it
works,
if
you
try
to
lock
up
the
entire
set
you're
asking
the
server
to
store
all
that
data
in
virtual
memory
and
if
you
look
at
stack,
Overflow
you'll
see
thousands
of
reports
of
thrashing
that
starts
occurring
on
SQL
serving
and
that
that's
why
I
say
it
opens
the
door
for
for
for
denial
of
service
attack,
because
you
only
need
one
or
two
clients
to
make
that
call
and
if
they're
allowed
to
make
that
call,
you
run
out
of
memory
on
your
server
and
you
could
take
out
an
entire
cluster.
H
The
other
problem
is
that
something
can
you
help
they're
backed
by
a
directory
by
a
database,
but
if
you
have
a
highly
sharded
system
or
an
API
that
is
not
backed
by
a
database
but
by
some
intermediary
system,
implementing
cursors
may
be
impossible.
So
if
people
are
doing
this
to
support
coordinated
provisioning
and
synchronization
you're,
getting
it
something
that
may
only
address
half
the
possible
given
implementations
out
there,
so
I
I
think
you
can
argue.
Cursor
Bank
paging
is
useful
for
other
reasons,
and
I
could
support
that.
H
My
concern
is
that
a
lot
of
systems
are
not
doing
monolithic
architectures,
where
we're
talking
about
a
skim
directory
sitting
on
top
of
a
database.
That's
not
even
skims
Windows
skim
is
a
provisioning
API.
Only
it's
not
meant
to
act
like
a
database
to
do
this
kind
of
functionality.
D
Point
about
whether
the
idea
is
that
the
cursors
are
in
fact
tied
to
the
database
mechanism
or
if
it's
possible,
to
implement
it
in
other
non-non-relational
database
systems
or
without
making
a
copy
in
memory.
Does
anybody
have
any
experience
there
to
share.
A
Yeah
I'll
just
mention
that
you
know
in
in
our
case,
in
implementing
multiple
service
providers.
We
just
simply
reuse
the
cursor
from
the
underlying
API
or
database.
A
If,
if
it
wasn't,
you
know
we
didn't
and-
and
there
always
is
a
mechanism
by
which
you
can
page
results
and
we
did
not
have
to
implement
any
kind
of
cursor
implementation
within
the
Sim
service
provider
itself
and
I,
don't
know
if
our
experience
in
in
35,
you
know
skim
service
providers
is
is
perfect,
but
it
obviously
isn't,
but
it
it
did
end
up
being
a
very
useful
thing
for
us
not
to
have
to
do
translation
from
cursor
based
to
index,
based
because
in
trying
to
translate
from
cursor-based
index
base,
which,
by
the
way
is
the
only
pagination
style
that
you
can
use
by
by
the
current
spec.
A
Then
you
do
have
to
keep
pull
result
sets
in
the
service
provider
which
brings
up
the
problem.
That
Phil
was
mentioning
is
that
even
implementing
index
pagination
on
top
of
a
cursor-based
pagination
code
base,
you
can
deplete
the
memory
very
quickly
in
a
service
provider.
So
that
was
our
our
main
reason
in
needing
to
do.
Cursors
is
to
match
the
underlying
code
base
so
that
we
didn't
have
to
have
a
lot
of
memory
in
the
service
provider,
foreign.
G
Yeah, I'd like to say that I agree with Matt on that: just pulling everyone into memory just to serve it up can be a real problem. Say you have a hundred thousand employees.
G
You're
gonna
have
a
problem
getting
all
the
data
that
you
need
into
memory
bit
that
you
can
then
serve
up.
I
wouldn't
like
to
try
it,
but
you
know
you
know
we
shouldn't
be
dependent
on
on
the
translation
between
server
or
indexing
and
how
things
operate
behind
the
scenes.
I
think
that's
sort
of
off
topic.
Really.
D
Okay,
Danny's
owner
did
you
want
to
address
this
too.
F
So
yeah
on
this
topic,
I've
had
a
chance
with,
like
our
engineering
teams
here
at
Microsoft.
This
was
requested
during
the
July,
like
itf14
skim
session
and
in
a
you
know,
hypothetical
world
where
we
did
build
a
some
service
provider.
It's
currently,
we
only
have
a
client.
If
we
did
build
a
service
provider,
we
would
you
know
with
pretty.
It
would
be
more
likely
than
not
that
we
would
prefer
to
use
and
actually
would
want
to
require
cursor
based
pagination,
which
is
a
a
separate
problem.
F
Actually,
hey,
that's
the
other
thing.
I
originally
raised
my
hand
for
which
is
but
I
guess
a
long
term.
I
think
Matt
and
I
would
like
to
find
a
way
for
a
service
writer
to
let's
say
only
support,
cursor-based
pagination
for
precisely
that,
like
denial
of
service
type
reason
that
was
originally
brought
up
surrounding
index-based
imagination,
but
so
to
get
back
to
the
original
question
that
we're
still
on.
If
Microsoft
massive,
you
know,
Azure
trajectory
big
big
directory.
F
If
we
were
to
implement
a
skim
server,
it
seems
today
based
off
of
the
research
that
we've
done,
that
we
would
want
to
go
with
I.
Like
cursor
based
Foundation
model
and
actually
require
it
to
make
sure
that
we
never
had
to
serve
up
results,
Beyond
a
certain
amount.
D
Okay,
thanks
Pamela
did
you
have
something.
E
Yes,
and
on
the
same
topic
so
I
mean
I
think
we
have
to
keep
in
mind
that
we
are.
You
know
we
are
creating
architecting,
an
interface,
not
an
implementation
right,
so
I
think
you
can
go
either
side
and
say
that
there
are
ways
to
DDOS
both
sides.
There
are
ways
to
run
out
of
memory
on
both
sides
right,
but
at
least
for
now.
Well,
we
have.
We
have
a
default
method
and
we're
all
we're
trying
to
do.
I
think
is
add
a
second
option
that
could
a
lot.
A
Right. I mean, we just needed to have a way to have an interoperable implementation that we could even expose.
A
So
that's
why
we
proposed
the
draft
was
so
that
we
could
have
some.
You
know
we
would
at
least
tell
the
world
here's
what
we
did
right
and
we
feel
like
it.
It
might
be
useful
for
others
and
I.
Don't
know
how
many
people
have
implemented
the
draft,
but
it's
been
more
than
just
us,
so
there's
been
other
interests
for
it.
A
Well, I know that WSO2 implemented it, because they were the last ones to comment on the mailing list that they had implemented it, and I've had enough questions on the mailing list to make me think that others have implemented it too. I just don't have their email domains on hand right now to know which companies they came from. There's been interest, though.
G
With a thousand users it wasn't as bad, but looking ahead to trying to deal with a hundred thousand users, this is a big chore: trying to figure out what the best strategy is.
D
A quick time check: we are ahead of schedule. We have two more topics left, so I do want to make sure we get to those, but if there is any more discussion here, we can spend another five minutes or so on this topic.
A
If,
if
not,
you
know,
we
will
will
submit
the
draft
the
new
revision,
it's
very
similar
to
the
old
one
and
just
watch
for
people
to
comment
on
on
the
mailing
list,
just
happy
to
have
any
feedback
that
we
can
get.
That
would
result
in
in
more
interoperability.
A
And
you
know
just
I
guess
in
closing,
if
you
know-
and
this
is
also
to
to
the
discussion
that
Phil
and
I
have
had
in
the
past,
I-
can
understand
an
argument.
Possibly
that
would
say
we
don't
need
pagination
at
all
in
skim
and
that
we
can
rely
on
Max
results
and
and
other
mechanisms
by
which
you
know
pagination
just
isn't
even
necessary
you
just
if
you,
if
you
have
a
query
that
results
in
a
large
result
set
you
get
all
of
the
results
up
to
Max
result
and
there's
just
never
any
pagination.
A
The
concern
that
I
had
was
that
if
we,
if
index
pagination,
is
required
which
it
is
in
the
current
spec
that
makes
it
difficult
for
people
to
create
skim
implementations-
and
you
know
just
the
adoption
of
skin
might
be
affected
by
that.
So
I
guess
it
really
boils
down
to
me-
is
if
we're
going
to
have
pagination
in
skim,
I
think
we
need
to
address
cursor
and
index.
If
we're
not
going
to
have
pagination
at
all,
then
we
need
to
make
that
change
in
the
spec.
H
It also supports bootstrap cases and all the other cases you want to cover. Cursor-based paging means I'm going to take the data I need, and I need you to hold that cursor open. I ran into this with LDAP a lot of times, where the clients we had the most trouble with were the ones that wanted state maintained for three or four days, because their processor was too slow, and that led to a number of losses.
H
If
you
have
a
network
problem
as
time
goes
by
that
soon,
as
you
have
a
network
problem
or
other
reason,
you
lose
that
state
and
then
the
client
never
actually
completes
a
synchronization.
So
a
number
of
problems
open
up.
But
if
your
use
cases
I
want
cursor-based
pagination,
because
it's
better
for
the
server
for
a
thousand
results
or
yes,
then
I
would
support
the
spec.
But
if
you're
saying
I
want
to
be
able
to
download
the
entire
data
set,
I,
don't
think
this
is
the
correct
tool.
H
You
don't
you
don't
download
the
entire
data
set
with
events
you
can't,
but
there
is
a
need
now
and
then
for
people
to
be
able
to
do
an
export
and
import
on
mass
and
that's
more.
The
scenario
I
think
you're
coming
after
cursor.
If,
as
a
consultant,
paging
seems
to
be
the
simplest
way
given
no
other
options
available.
But
if
we're
here
at
the
standards
Community
we're
talking
about
not
an
open
source
approach
which
those
who
want
to
participate
can
participate.
H
There
is
a
bit
of
an
implication,
even
though
it'll
always
be
optional,
that
that
cursor-based
paging
be
supported.
If,
if
we
we
say
so,
there'll
be
a
lot
of
pressure
to
support
it
and
that
really
changes
how
people
are
going
to
implement
the
data
systems
underneath
skim
I,
think
it's
not
it's.
It's
just
difficult
for
everybody.
I
do
think
one
important
concession
that
was
helpful
to
me
that
you
made
was
index-based
paging
versus
cursor-based
paging.
H
Those observations make sense to me, because it's the index-based paging and holding state on that entire index that's the problem. I would agree cursor-based paging only will make it more doable, but then I keep coming back to it.
A
Yeah, I know we're out of time, but I think in our implementations it's more the fact that index-based pagination is not optional in the spec.
A
So if I want to implement pagination at all, I have to implement some form of pagination in order to be interoperable, and if I am going to implement pagination in order to be interoperable, I need to be able to follow the pagination strategy of the existing code base, like Danny was mentioning.
A
If you're going to put a SCIM interface in front of Azure Active Directory and you talk to the engineering team, they're like, "we would prefer cursors." That conversation is being had all over, any time anybody's thinking about "do I create a SCIM implementation?" These questions come up for the engineers and they have a really strong preference, and that was our case, so we either.
A
Well, no, that's why we tried to author the draft the way that we did: we did not want to create a breaking change that would say, "look, you don't have to implement pagination at all." That's our choice. We tried to add something that would allow interoperable cursor pagination without affecting index pagination, and that's the way the draft reads. The proposal that Danny mentioned earlier is having a server only be able to support cursor pagination.
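The cursor flow being debated here can be sketched as a simple client loop. This is a minimal sketch: the parameter names (`cursor`, `nextCursor`) are assumptions based on the cursor-pagination draft's general shape, not quotes from it, and `fetch_page` stands in for a real HTTP GET against a SCIM `/Users` endpoint.

```python
# Hypothetical sketch of cursor-based SCIM paging. "cursor"/"nextCursor"
# are assumed names; fetch_page is a stand-in for an HTTP GET.
def fetch_page(pages, cursor):
    """Stand-in for the service provider: returns one ListResponse page."""
    return pages[cursor]

def list_all(pages):
    """Walk pages by following nextCursor until the server omits it,
    never asking the server to hold index state across the whole set."""
    resources, cursor = [], None
    while True:
        page = fetch_page(pages, cursor)
        resources.extend(page["Resources"])
        cursor = page.get("nextCursor")
        if cursor is None:  # last page reached
            return resources
```

The point of the shape is that the server only needs to resolve one opaque token at a time, rather than maintaining a stable index over the whole result set.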
D
I want to keep us moving on the agenda. Do we have any action items on this topic before we wrap it up?
A
The only action item that I think we propose is that we would resubmit the draft with the revisions I mentioned earlier that eliminate the ambiguity. That will put a draft in our library that's actually valid; the old one is several years old, and this updates it and, I think, puts it on the docket for the discussion.
D
Okay, great. I'm looking at the agenda again. I made a mistake, this is my fault: I skipped over one of them, or misread one of them. So we are now a little bit behind schedule. Sorry, Philip! Can we give you 18 minutes instead of 20, and then we can give Danny the rest of the time?
H
Fine, I don't need that much time at all, so I think we'll be on schedule shortly; we'll be way ahead. There hasn't been much.
H
Yeah, so there's been an update to the draft. Not much has changed. I removed a lot of the discussion about Kafka and message bus systems, but I think there's enough still left in there as an example. In reality, a service provider domain and a client domain will each have their own event- or bus-based systems, or you're going to have something like a master-slave replication hierarchy.
H
The spec doesn't say how you do that, but it's sort of acknowledging the reality that you may have hundreds of servers out there in one domain, and then in a client domain you again have an unknown set of infrastructures. So the objective with SCIM Events is to say: across those administrative boundaries, we need to be able to share change events from SCIM and security events from SCIM.
H
So there is a mechanism that's been defined by the security event specs for doing that physical transfer, either by push or by polling/long polling, that allows events to be delivered in real time. The assumption, then, is that essentially there's a single gateway between domains communicating those events, and once an event crosses into the receiving domain, some kind of system processes that event and decides whether it gets sent to a provisioning manager or some other system that reconciles what to do inside the domain.
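The token that crosses that gateway is a Security Event Token. As a rough illustration only, the sketch below assembles an unsigned (`alg=none`) SET-style JWT carrying a SCIM-flavored create event; the event type URI and payload layout here are invented for the example, not taken from the draft, and a real deployment would sign the token.

```python
import base64
import json
import time

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as used in JWT segments."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

def build_scim_set(issuer: str, audience: str, resource_uri: str) -> str:
    """Assemble an unsigned SET carrying a hypothetical SCIM create event.
    The event URI below is an illustrative assumption."""
    header = {"typ": "secevent+jwt", "alg": "none"}
    claims = {
        "iss": issuer,
        "aud": audience,
        "iat": int(time.time()),
        "jti": "3d0c3cf797584bd193bd0fb1bd4e7d30",
        # One event per SET; the key is the (assumed) event type URI.
        "events": {"urn:example:scim:event:create": {"ref": resource_uri}},
    }
    return ".".join([b64url(json.dumps(header).encode()),
                     b64url(json.dumps(claims).encode()),
                     ""])  # empty signature segment for alg=none

def read_events(token: str) -> dict:
    """Decode the claims segment and return the events map."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload))["events"]
```

The receiving gateway would validate `iss`/`aud` and the signature before handing the event to whatever internal system reconciles it.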
H
So, for example, if we had Salesforce issuing events back to Microsoft Azure, Salesforce doesn't need to know anything about what's going on in Azure at all. It's up to Azure, inside of its domain (this is all hypothetical), to reconcile that change and decide what to do about it; it can go back to Salesforce for more information and then decide the parallel request there. So all that to set up: Danny sent out an email saying there's no recovery mechanism in SCIM events.
H
Well, that's partly by design: there shouldn't need to be one. The recovery mechanism that is in the SET transfer methods is short-term recovery to deal with failures in the delivery mechanism, so that data integrity is maintained. What is supposed to happen is that the event receiver, if they need a recovery mechanism in the long term, can maintain that themselves, using whatever architecture they choose.
H
So if you were using Kafka as a message bus and you drop those events on it, once you drop that event successfully and it's confirmed received or placed on the bus, then you can acknowledge back in SET event transfer that "I have successfully received the event." So now recovery is not an issue; that bus, locally on the receiver side, can then be used to bootstrap new servers.
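The receiver-side pattern described here (persist locally first, acknowledge second) can be sketched as follows. The class and method names are illustrative, not from any SCIM draft, and the list stands in for Kafka or another durable local bus.

```python
from dataclasses import dataclass, field

@dataclass
class Receiver:
    """Sketch: the event is durably handed to a local bus/log first, and
    only then acknowledged, so the transmitter can forget it."""
    local_log: list = field(default_factory=list)

    def publish_locally(self, event: dict) -> bool:
        # Stand-in for producing to Kafka or another local bus.
        self.local_log.append(event)
        return True

    def handle_delivery(self, event: dict) -> dict:
        """Return an ack only once the event is safely on the local bus."""
        if self.publish_locally(event):
            return {"ack": event["jti"]}
        return {"err": "retry"}  # transmitter would redeliver

    def bootstrap_replica(self) -> list:
        # A new server can replay the locally retained change log.
        return list(self.local_log)
```

This keeps long-term recovery entirely on the receiver's side of the boundary, which is the design point being argued.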
H
You can go back any time, so the change log becomes yours. That was the design for recovery: the recovery is under the complete control of the client, and they can do whatever they want to do. They can make this simple, or they can make this based on a huge infrastructure; it can go either way.
H
The other reason for doing this is that once you move from 50,000 entries up to half a million or 500 million entries, having a centralized changelog-style mechanism becomes harder and harder to do, just the same way that doing a cursor across that many entities becomes harder and harder to do. So the idea is that now you have hundreds of servers, each accepting and processing changes. What are they going to do with those changes?
H
So this is what's envisioned by the shared signals event framework put out by OpenID: not just for security events, but we can also use it for provisioning events, and that's how this is all supposed to come together. When we went through this on the security event side, we realized that maintaining a change log on the service provider side, or an aggregated recovery mechanism for every possible client, would blow up quickly, because you end up with a permutation-and-combination problem: you're maintaining speculative copies for each security entity that may need them.
H
So not only do you have 500 million entries in your database, you have every change event for all those 500 million, times every single client that you've got. It won't work. So the idea is that the recovery mechanism for events is up to the receiver to decide to implement. If the receiver needs a recovery mechanism that's long-term, they can build that with whatever mechanism and architecture they choose to use; they could do their own LDAP-style change log on their side.
H
There's nothing saying you can't do that, and you can fit in your own master replication or whatever strategy you have; you can do whatever you want. So the idea is that SCIM Events fits in with the shared signals framework, so that it's not layering on any additional things, and a SCIM event is then just a schema definition. We're not inventing any new protocol here.
H
There are implementations out there. So that's really all the update that I have, and we've talked a lot about that.
H
In my responses to Matt's question, one of the things that I still envision is that you're still going to have use cases that neither paging addresses nor SCIM Events addresses, which is: how do I bootstrap a new SCIM replica, or a new copy? How do I handle the situations where I want a full copy of the entire data set? Is that something the SCIM working group should even work on or not? But I would argue.
H
Neither draft solves this problem, nor should it. So there's that. The other thing is that this may seem complex, but it was the decision of the SCIM working group to move this out to the security events group, because all of these different standards groups were developing signals based on JWT tokens, and we wanted to have common processing, common abilities to transfer them, and to do things all the same way.
H
It gives the appearance that it's a big, huge thing that you have to implement, but really it's a shared system, a shared set of specs, which means there are already implementations available for you to use. So the work I'm doing right now, in reality, is to implement this using the OpenID shared signals framework, particularly the one Cisco Duo developed. That's going on, and I've had some feedback for the shared signals group because there are some unnecessary restrictions, but that's really just about it.
C
These are different, and correct me if I'm wrong about how it translates to profiles. In terms of the security event scope, does it only cover profiles for the identities, or does it also cover account management, login sessions, and other events? I'm not sure if the scope is just the user profiles.
C
So that's probably my first question, just to understand the scope of the events itself. The second question, yeah.
H
Well, just to answer that: it's not for different resource types. SCIM Events matches the SCIM protocol, essentially. So if you do a create in SCIM, there's an analogous create event in the event architecture, so it is not specific to any kind of resource type at all. And then you can decide which types of objects you're issuing events for; that would be up to the parties to negotiate.
H
That would depend on the event definition itself. If you look at the original RFC, I think it's 8417, for a security event token.
H
There are SCIM events that match the SCIM protocol: create, patch, put, delete, and so forth. But the security event for a password reset event is simply saying "Phil Hunt reset this object," which is Phil Hunt reset his password. And in the example it shows that you could also have, down the way, or if you're in a local relationship, you could add extended data that says he's now reset his password four times, because that might be additional information.
H
There are events like a create event in one version where you're sharing all the raw data, which is presumably being used by security partners that are tightly related and are essentially doing replication. And then there's another one which shares no data at all; it just simply says "this object changed," and then it's up to the receiver to go back and pull that object so that it can reconcile the changes that occurred. It also allows access controls to work, and things like that.
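The "notice-only" variant just described can be sketched as follows: the event carries only a reference, and the receiver reads the object back itself, so the provider's normal access controls govern what the receiver actually sees. The function names and event shape here are illustrative assumptions.

```python
def fetch_resource(store: dict, ref: str) -> dict:
    """Stand-in for an authenticated SCIM GET on the referenced object."""
    return store[ref]

def reconcile(notice: dict, store: dict, local: dict) -> dict:
    """Pull the changed object and update the local copy from the read,
    rather than trusting any data embedded in the event itself."""
    ref = notice["ref"]
    local[ref] = fetch_resource(store, ref)
    return local[ref]
```

The full-data variant would instead apply attributes carried in the event directly, which only makes sense between tightly coupled replication partners.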
H
And if you look at the security events, which are higher-level events, it shows you how you can pass standardized claims as well as additional claims. You might say, "well, in our deployment we need to know the password reset count, so we're going to add this additional data," and you could devise any other information you want to exchange. There's no limit there; it's just a matter of how much you want standardized. It's similar in the sense that in SCIM there is the user object.
D
Okay, if you don't mind, Phil, I'd like to close this topic and move on to the last two. Sounds good, great. Thank you.
D
So, Danny, you've got the rest of the 15 minutes of the session for your two drafts. Sorry for cutting them a little bit short.
F
No worries, I think I can still run under time, probably. Great, so yeah, the roles and entitlements draft: back at IETF 114 I wrote a revision to the draft, I emailed about it to the working group mailing list, and I've gotten a bit of feedback.
F
I guess I'll start off with a reminder: anybody who might potentially be interested in a way to retrieve roles, like the list of acceptable values, please go review it and provide feedback. I'm going to work with the chairs to put this up for a call for adoption at some point in the next couple of weeks, probably. I guess I'll start there. I can list off the updates, but I think I also did that via email.
F
Does anybody have any thoughts on the draft? Any comments, concerns, suggestions?
C
This is the entitlements draft, right?
C
In particular, I have a question in terms of the scope of the problem we're trying to address, because usually a user's entitlements or roles are probably limited; different downstream systems might have different roles or entitlements and permission hierarchies. So I just wondered, in this particular case, given that the directory is usually centralized, and in some cases centralized with a lot of downstream systems.
C
So I just wondered: a single attribute might not be the same or applicable to all the downstream systems that it's actually integrated with. I just wondered whether, in this particular case, the spec is trying to address that particular problem, or whether it has a different assumption here.
F
So I didn't really think about, I guess, systems where the SCIM API is built on top of multiple, you know, distributed and somewhat disconnected systems.
F
The problem that I was trying to address with the draft is that, with both the roles and entitlements attributes on a user resource, typically there is a finite set of acceptable values. The values are already predetermined somewhere, based on what roles or entitlements exist in the system, so you can't just provide a value for a role of, you know, any possible character combination.
F
It would usually get rejected in a lot of, at least, the SaaS SCIM implementations I've seen. And so in that case, I would classify it as an interoperability problem where, if a SCIM client and a SCIM service provider are becoming acquainted for the first time and trying to work with each other, the SCIM client wants to sort of increase its chances of making successful requests; that is, you know, formulating the body of a request to create or update a user and to provide them with roles or entitlements in that external system.
F
Currently, there's no way inside of the standard for anything acting as a SCIM client to retrieve a list of the acceptable values for roles or entitlements, and almost any implementer is going to document this somewhere. So whoever develops the SCIM client could in turn go and do manual work, like read a piece of documentation and then write code to add them to a list somewhere, but that's obviously not a scalable solution.
F
So the goal here was entirely to allow, without human intervention, you know, connecting a SCIM client to a hundred different SCIM service provider implementations that have implemented some varying list of roles and/or entitlements, and allow the SCIM client to know, via the standard, what the possible acceptable values for these attributes are.
F
That way, requests are not built with disallowed values. And the spec has a caveat in it where the SCIM service provider can sort of partially accept a request from a SCIM client, and any attributes that it chooses, the SCIM service provider can just drop. So conceivably that's actually a thing the SCIM service provider could do: just, "oh hey, you gave me a wrong..."
F
...you know, a role that I don't have, so I'm going to ignore it. In practice, in the SaaS world, there are a lot of service provider implementations where, if you provide a value that is not part of the predefined list of values for that application, the request will get rejected, which is, you know, sort of wasted computational and network energy and all that, trying to make requests that fail. And so that is the goal.
F
To answer your question about all the downstream systems: that's probably on the SCIM service provider implementer to build something, whether it's manually writing a list or an internal thing that queries all the different sources to be able to show that. Because if the SCIM service provider is only going to accept a finite set of values, those values have to be defined somewhere already, so this really just surfaces the values that you're already restricting on somewhere else.
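The client-side half of the discovery flow being described might look like the sketch below. The discovery response shape here is hypothetical (the draft's actual mechanism may differ); the point is that the client filters against advertised values before building the request, instead of sending one that will be rejected or silently dropped.

```python
# Hypothetical discovery response a service provider might return for
# the acceptable values of the "roles" attribute.
ALLOWED_ROLES_RESPONSE = {
    "totalResults": 3,
    "Resources": [{"value": "admin"}, {"value": "editor"}, {"value": "viewer"}],
}

def allowed_role_values(response: dict) -> set:
    """Collect the advertised acceptable values for the roles attribute."""
    return {r["value"] for r in response["Resources"]}

def build_user_payload(user_name: str, requested_roles: list, allowed: set) -> dict:
    """Keep only roles the provider advertises, so the create/update
    request is not built with disallowed values."""
    usable = [r for r in requested_roles if r in allowed]
    return {
        "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
        "userName": user_name,
        "roles": [{"value": r} for r in usable],
    }
```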
C
So essentially, on the scope of the problem it tries to address: I would think this is more like coarse-grained roles and entitlements, so it's not going to address the permissions or entitlements for granular resources within a particular SCIM service provider.
F
Yeah, so this is, I think, sort of a fundamental question, and it's an area that's maybe a little ill-defined, at least in practice, inside of the standard: what are good use cases behind the roles and entitlements attributes? Typically, what I've seen, anecdotally from my experience, is that roles tend to be those high-level, coarse sets of permissions rather than fine-grained entitlements.
F
Entitlements I haven't seen implemented as much as roles; some of the few times I've seen it, it's typically been used more in the vein of license assignment or feature assignments inside of an application. I think conceivably entitlements could also be used in that same way for a finer-grained set of permissions.
F
There may also be justification behind a third attribute covering roughly the same purpose, you know, permissions versus entitlements versus roles, but the two today are not, in my experience, particularly well defined or well understood. So I think that's more a problem for the spec rather than for specifically my draft, which is more about discoverability of the values that are allowed.
F
Any other questions? That question actually surfaced a whole lot of good thoughts, thank you. If not, I'll move on to the referential values thing, which shouldn't take that long. Okay, so yeah, I'm not actually sure if I ever emailed this one to the mailing list; if not, I'll go check and make sure that I do mail it. So I wrote this draft, you know, back during IETF 114 as well.
F
The idea had been floating around for a while. So, in a nutshell, it's trying to solve that same set of problems that I was just talking about with the roles and entitlements thing: how can a SCIM client be well informed enough to sort of increase its rate of successful requests, you know, knowing all of the intricacies of what is and is not allowed with the SCIM service provider's implementation. So this is referential value location, not to be confused with reference attributes; it has to do with...
F
Programmatically, there's no way to determine that, and the problem is that there are other attributes besides manager that people will implement, and, you know, they may be custom attributes, their own SCIM extensions. It may be, I don't know, things like job title, or potentially role would even work here as well, actually. But there may be a predefined list somewhere of, "for this attribute..."
F
"...we will only accept this set of values." So a cost center; I think the Enterprise User cost center is probably a really good example as well.
F
So the problem is that if there's a finite set of acceptable values for an attribute and those are defined somewhere, it's really helpful for the SCIM client if it can go and discover this somewhere. So, the way this draft is written, essentially it adds a couple of new sub-attributes to the actual schema definition, so, like, the properties that make up the schema definition of an attribute. Those are to say, yes or no, true or false...
F
...this attribute's values are restricted based on, you know, a list that is defined somewhere else. And then there are two sub-attributes that tell you the resource type that houses those values, and the attribute, like the schema URI for the attribute, where those values will be present. So, using the manager as an example for users, it would say, well, for the manager attribute, this value...
F
...is, you know, sort of constrained by a value on the user resource, and it would just give the URI for the core schema "id" value, which is actually a common attribute. Whatever it is, we get into weird edge-case stuff, but I've seen ample use for this in some of the SCIM implementations that I've had to work with, and I think from a discoverability standpoint this would be really helpful, sort of at scale, to help solve problems and be more efficient.
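The manager example just walked through can be sketched roughly like this. The sub-attribute names below (`referentialValues`, `resourceType`, `attribute`) are hypothetical stand-ins, not the draft's exact names; they only illustrate the shape: a schema attribute definition points at the resource type and attribute URI where its acceptable values live.

```python
# Hypothetical extension to a schema attribute definition: "manager"
# values must come from the "id" attribute of existing User resources.
MANAGER_ATTR_DEF = {
    "name": "manager",
    "type": "complex",
    "referentialValues": {  # hypothetical sub-attributes
        "resourceType": "User",
        "attribute": "urn:ietf:params:scim:schemas:core:2.0:User:id",
    },
}

def resolve_acceptable_values(attr_def: dict, resources_by_type: dict) -> set:
    """Given an attribute definition and the provider's resources, collect
    the set of values the referenced attribute currently holds."""
    ref = attr_def.get("referentialValues")
    if ref is None:
        return set()  # attribute is unconstrained
    path = ref["attribute"].rsplit(":", 1)[-1]  # e.g. "id"
    return {r[path] for r in resources_by_type.get(ref["resourceType"], [])}
```

A client could resolve this once at discovery time and validate manager references locally before issuing a create or patch.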
D
Great, thanks. We only have a couple minutes left, so let's, I guess, wrap this up. For everybody who's got an in-progress draft, we would love to get those posted, and then we will continue the discussion on the mailing list. I did already submit the request for a session for IETF 115, which is in London, and there will be remote options, of course, available to join remotely.
D
If you can't make it there, that'll be the next time we get together to talk in person. And yeah, thanks, Joel, for taking fantastic notes from the session. So, yeah, any closing thoughts from anybody else?
D
Great. In that case... oh, Nancy's here.
D
All right, in that case, we will give you all the 90 seconds back, and thanks, everybody, for joining.