From YouTube: IETF115-SCIM-20221107-1300
Description
SCIM meeting session at IETF115
2022/11/07 1300
https://datatracker.ietf.org/meeting/115/proceedings/
B
Awesome, thanks. I have a second person, Rakesh, but he's totally new, so I told him to just sit in. Oh, so somebody already started copying.
B
Finished? All right, good afternoon, everyone. You are in the SCIM working group, so welcome to the first day. I guess this is the second session of the first day, and this is SCIM. So, if you're not prepared to listen and discuss the System for Cross-domain Identity Management, you are in the wrong room. And can those on remote hear us? Okay.
B
Okay, meeting tips. I think the only reminder that I will put here: everyone's wearing masks, so if for some reason your mask breaks or you need a new one, we have some more here up at the chairs' table. For those that are remote, just make sure your audio and video are off unless you are presenting or you want to be in the queue. Headsets are strongly recommended. Next slide.
B
Yeah, that's to say, use the Meetecho queue to be called on. Okay, so just be courteous and professional and follow our IETF code of conduct practices. Next slide.
Okay, so I will thank in advance Peter for being our note taker. I don't see [inaudible], so can we get a second note taker, please? You are free to take notes, as Peter will do, separately, or you can use the HedgeDoc live notes for the second note taker. So can I get a second volunteer to take notes?
B
You'll do that? Okay, thanks! Okay, so we've got the links. The minutes we'll post after the IETF meeting ends, and I think we're ready to go into the agenda bash. So we have one request.
B
So you've got the agenda posted here. We have one request, for our last presenter, Eliot, to actually be the first presenter.
E
Yeah, apologies. I've had to cover another presentation, because a colleague has come down with COVID. If people can indulge me and allow me to go first, I'd appreciate it.
B
So, are there any objections? Going once, going twice? Okay, so we will let Eliot go first. [inaudible] but I was just going to do one last check: any other changes to the agenda, or additions?
E
I don't mind. My name is Eliot Lear, I work at Cisco. I'm presenting on behalf of myself, Muhammad Shahzad, who's hopefully participating remotely, and Hassan Akbar. Muhammad and Hassan are at NCSU, North Carolina State University, and we want to talk a little bit about SCIM for devices.
E
So, next slide, please. Okay, so here are some of the questions that we have, and that we think administrators will have, and actually some of the business-to-business community will have. How do I provision a new device, or many new devices, into enterprise infrastructure? And I might be an individual within the enterprise, or a partner to that enterprise.
E
How do I establish bootstrapping credentials for that device? What are bootstrapping credentials? In this case, a means by which the device can be introduced into the enterprise or deployment environment in some notion of a secure way, and I'll talk a little bit about what I mean by that in a little bit.
E
How can I provide ancillary information about the device that's being introduced into the environment? One example of ancillary information might be a software bill of materials for that device. Another piece of ancillary information might be, in fact, what the device is.
E
You know, let's say a lightbulb, a luminaire, a refrigerator. It's a good question. A thing: I always talk in terms of things, and to me things are everything but general-purpose computers that have displays and keyboards, right? We know how to add those onto networks. We have great difficulty adding non-things onto networks (sort of weird saying "non-things", but there you have it). So there's a class of device that will require data-plane communications.
E
Those are going to be devices that are only L2-enabled, and so they might want information, API-level information, on how to communicate with those L2 devices. So these are the questions that we've come across. Next slide, please.
E
So I sort of introduced this concept, I think it was in the spring meeting, that we were thinking about SCIM for this. Well, now we're really thinking about SCIM; we're thinking about it such that a few of us have actually done some prototyping and a little bit of coding around this, and why not, right? What we want is something where SCIM, largely speaking, provides a normalized set of schemas, starting with a very small base.
E
We don't want to really introduce any sort of technology religion around which onboarding technologies are in use, but allow for all of them to be used; allow for all the bootstrapping technologies to be explored through extension schemas. In the draft we provide a list of examples, and one of them is Device Provisioning Protocol. We called it the Wi-Fi schema, but we probably should just rename it the DPP schema, and I'll talk about some of these things we need to change.
E
We did something with BLE, Bluetooth Low Energy, and with Zigbee, and we've tested out these technologies such that we can make sure that they can interact with the various components, such that devices can onboard onto a network. We're sort of looking for others: what would we do for private 5G? How might this work for Matter, and things like that?
E
The table stakes for creating such a schema is to provide the deployment enough information to get the device to trust it, and so that it knows to trust the device, to have that mutual trust established. Next slide, please.
E
So you might think there have got to be other things we could have used. Sort of the obvious one that people start talking about is: well, couldn't you have created a YANG model for all this, and couldn't you have used NETCONF or RESTCONF? Well, in fact, what we're talking about here is something that isn't provisioning or touching the actual device, but rather something that is describing that device, and specifically describing how to enable communication with that device.
E
So it's a little bit different in terms of the purpose of NETCONF and YANG. No device config is exchanged, and there's no reusability that we'd be benefiting from if we had gone to NETCONF or RESTCONF. So it's actually really right up the alley of SCIM in terms of doing provisioning operations, as we said when this working group was formed: CRUD, right? Create, read, update, and delete operations.
E
Now, the other choice might have been SNMP, and this was just in a nightmare I had: somebody said, come and do this with SNMP, and my only response to that was no. So we're not going to do SNMP. So that's why not other stuff. Next slide, please. Okay, so we have a couple of basic elements that are in a core schema. We tried to, roughly speaking, stay aligned with how SCIM was operating, right?
E
You have a very small core, and then you expand using extensions, and so we have a couple of issues that we have to resolve around how to do that.
E
None of us who are doing this work claim to be SCIM experts, so id and meta look a lot like what they should be from the original core schema. adminState is a little information about the device. schemas is from core. connectivity refers to how to connect to the device.
E
What sort of connectivity the device supports; a displayName, what the thing might be, right; and a mudUrl. And you notice those last two are optional, because perhaps you don't even have, you know, much more information on that.
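As a rough illustration of the core elements just listed, a device resource might look something like the sketch below. The attribute names follow the discussion (id, meta, adminState, schemas, connectivity, displayName, mudUrl), but the URNs, value shapes, and the DPP extension contents are invented for illustration and are not the draft's normative definitions.

```python
import json

# Illustrative sketch of a SCIM "Device" resource built from the core
# elements discussed above. The URNs, attribute shapes, and values are
# assumptions for illustration, not the draft's normative schema.
device = {
    "schemas": [
        "urn:example:params:scim:schemas:core:2.0:Device",        # assumed URN
        "urn:example:params:scim:schemas:extension:dpp:2.0:Device",
    ],
    "id": "e9e30dba-f08f-4109-8486-d5c6a331660a",
    "meta": {"resourceType": "Device", "created": "2022-11-07T13:00:00Z"},
    "adminState": "pending",                  # device not yet onboarded
    "connectivity": ["wifi"],                 # how to connect to the device
    "displayName": "Conference room lightbulb",      # optional
    "mudUrl": "https://example.com/lightbulb.json",  # optional
    # A bootstrapping extension (here, a hypothetical DPP one) carries
    # the material needed to establish mutual trust with the deployment.
    "urn:example:params:scim:schemas:extension:dpp:2.0:Device": {
        "dppUri": "DPP:C:81/1;K:MDkw...;;",   # illustrative bootstrapping URI
    },
}

# Serializing shows the small-core-plus-extension layout.
payload = json.dumps(device)
```

The point of the shape is the one made above: a very small core, with each onboarding technology (DPP, BLE, Zigbee, and so on) confined to its own extension schema.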
E
These are sort of openers for us, as we begin this process with the working group and we begin the dialogue to say: well, maybe these are the things that need to be in the core; maybe these things shouldn't be in the device core; we should pull back and even make the device core even smaller.
E
These are possibilities; we're really just getting going, but this is what we started with. And, embarrassingly, the -00 of the draft forgot the core, which was sort of, you know, bad, but that was corrected today. If you look at the -01, it's got all these core objects in it. Apologies for that. Next slide.
B
So, Eliot, if I can channel: there's a question on the chat from Massimiliano Pala. He says you mentioned it's not NETCONF or RESTCONF for the device, and he agrees. But wouldn't this be an infrastructure configuration, and as such solvable using NETCONF or RESTCONF for the infrastructure?
E
Hi Max. Maybe you could claim that, but the issue would be that this is really high up the stack, and NETCONF and RESTCONF really work at a much lower point in the stack. You could think of it more as an analog, almost like a RADIUS or Diameter analog, if anything.
E
So that's our thinking, at least. And again, we get no model reuse from NETCONF or RESTCONF in this case. So from our perspective, I think we're more comfortable in the SCIM world for this. It's exactly, you know, sort of the CRUD operations that you would expect for provisioning, and we're not trying to do things like describe interfaces or IP addresses or any of that stuff. In fact, the information we're communicating is entirely devoid of deployment information.
E
Does that answer your question, Max? Thanks. So is that Leif? Hi. Yes.
F
Hi, how are you? I'm good, you good? There was significant work in the [inaudible] community towards a Wi-Fi schema, or what you might call a Wi-Fi schema. I mean, they have a lot of experience there about the challenges, and there are a lot of dragons buried and skeletons buried with respect to the platform providers in that space. I mean, if you're talking about deployability here, it's going to be really, really difficult. But I would encourage you to go talk to them.
F
Get a little bit of sort of, you know, background information on the problems they ran into.
F
Yeah, absolutely, drop me an email, I'll put it on the top of my stack. But yeah, that's a very, very useful conversation to have. The other comment I was going to make: I think the Wi-Fi schema is about sort of describing, you know, what you need in order to connect to the device, right?
B
Okay, I need to pause here for a minute. For the gentleman: we are following the IETF procedures, and a mask is required. Thank you. Okay, so, Eliot, I don't know how much more time you need, but you're beyond the 10 minutes.
E
There are many unanswered questions right now that we have to sort through. I'll just finish up relatively briefly.
E
So, first, we know Phil and a few others have already provided comments; we want to incorporate those. We want to make sure we're using the right schema description method. Right now we're using JSON Schema to describe this stuff; some people would prefer OpenAPI.
E
In fact, there's a benefit of using OpenAPI, in that you can fold comments, or descriptions, onto two lines or three lines or however many lines you want, and, as we all know, JSON doesn't do a good job of that. Now, this might be the stupidest reason in the world to do this, but that makes it rather hard to simply incorporate something into an Internet-Draft, for instance, whereas OpenAPI is a lot easier.
E
We want to use a formal schema language, though, so that we can always make sure that we're formally verifying correctly, right? Not having a formal schema language would be very bad, we think. What needs to be normalized into and out of the device core is something that I think we need to pay a lot of attention to, and we already know that at least the Wi-Fi schema that we list, and this goes to Leif's point, is not really a Wi-Fi schema.
E
In fact, we've talked to the FIDO people about that. And again, as Matter matures, I would expect that somebody will want to come forward with a Matter proposal. Right now, Matter, at least as published today, doesn't suit the enterprise. Maybe it will, and when it does, then probably we'll want to go down that route too. Again, no technology religion. You know, maybe there's something to do with RFC 8366 vouchers; happy to go there, right?
B
We said we were going to honor the Meetecho queue, so, Danny, go ahead.
D
Hi Eliot, I've got a... well, I've written the... [audio unclear].
E
Excuse me, Danny, maybe slow down, step a little bit away, for you're over-modulating a little bit. It's a little hard to understand you. Oh, try again. Testing? Yep, we can hear you; just a little slower, a little softer.
D
Yeah. So, to start, I'd be happy to help you with representing the schema in the SCIM schema language. It's not entirely straightforward, but I've written a few drafts already and I've had to sort of crack that egg already, so I can connect with you offline just to give you some pointers on that one. Secondly, I think we'd already discussed this a little over email, but I'll mention it here.
D
Just for the record, myself and a few other people have, sort of in parallel, been looking at representing non-physical device or machine identities: any sort of workload identity, or a concept like a service principal, which might be a little, you know, Azure or Microsoft, but you get the idea; a non-tangible.
D
You cannot pick it up or turn it on like a light bulb, but it's a thing that has access to an API. So I think we could potentially write those as two, you know, completely separate drafts, or sort of figure out a way to just make it an extension schema, because you're building this in that sort of dynamic schema model.
D
I thought I had a third one, but I have forgotten, so maybe those are the only two things that I'll talk about.
E
I'll briefly comment now, and then we'll take it offline. To number one: absolutely, welcome your help. My preference would be for that to be non-normative, whereas the formal language would be normative, and we can discuss that. On the second point that you made, about Microsoft Azure workloads or other things like that, I think that's worthy of discussion, to make sure that we're normalizing correctly, and happy to take that forward with you as well.
B
Okay, with that, thank you, Eliot. So I think you've gotten some comments, and for those who have listened, please provide feedback. The person who was in the queue left, so I presume there was no question left.
B
So just continue to provide feedback and comments. It looks like the authors will continue to evolve the draft, and when they're ready we can take a pulse then. Okay, so next up: Pam, we have you on the docket, but we didn't get a response from you; didn't know if you could give us an update on the use cases draft. Yeah.
H
Thought so. All right, I just wanted to double-check. So, in doing my research, I ran into an area in the specification where I felt like the implementation was different from what the specification stated, and so I wanted to ask this group about external IDs and provisioning domains. It feels like there's an echo on your side. Can you hear me all right?
B
We can hear you just fine. You might be hearing some of the issues because people are coming in and out and the door is slamming.
H
Oh, that could be it. Okay, so, provisioning domain is defined in RFC 7643, in section 1.2, so you can all go take a look at it if you need to. But basically it is defined as an administrative domain external to the domain of a service provider, and the example they give is a different legal entity, right? So they have defined provisioning domain as basically external, you know, external domains, which may have one or more clients, I presume.
H
So that's how they define a provisioning domain, and then in section 3.1, when they define an external ID, they define that externalId as being relative to the provisioning domain, and that there's a local mapping between the identifier of the provisioning domain and the SCIM server.
H
So the implication there is that there should be a separate externalId for every provisioning domain that a SCIM server serves, except that I don't think any of our implementations do that. So if, for example, especially in downstream provisioning, where the SCIM server could be in AWS, for example, right, in Amazon, I don't believe that there would be a separate externalId managed for every client that wants to interact with a given object or resource.
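To make the reading of RFC 7643 being described here concrete, the sketch below models a server that scopes externalId per provisioning domain, keying it by a (resource, domain) pair. This is an illustration of the interpretation under discussion, not how existing implementations behave; as noted, most store a single externalId per resource.

```python
# Sketch of externalId scoped per provisioning domain, as the RFC 7643
# text quoted above seems to imply. The mapping is keyed by
# (resource id, provisioning domain), so two domains can name the same
# resource differently without colliding. Domain names and IDs here are
# illustrative placeholders.
per_domain = {}  # (resource_id, provisioning_domain) -> externalId

def set_external_id(resource_id, domain, external_id):
    """Record the externalId a given provisioning domain uses for a resource."""
    per_domain[(resource_id, domain)] = external_id

def get_external_id(resource_id, domain):
    """Resolve the externalId relative to one provisioning domain, if any."""
    return per_domain.get((resource_id, domain))

# Two different provisioning domains mapping the same SCIM resource:
set_external_id("2819c223", "hr.example.com", "emp-0042")
set_external_id("2819c223", "partner.example.net", "P-17")
```

In contrast, the single-externalId behavior the speaker observes in practice would collapse this table to one entry per resource, losing the per-domain scoping.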
H
Okay, thank you for all of your... I mean, not exactly participation, but at least I know there's not a lot of concern about different usages. So thank you.
B
Okay, with that, we're into SCIM roles and entitlements. Danny?
D
Thank you, Nancy. Let me know if the audio is problematic again. Yeah.
B
I think some of the Meetecho folks did notify the AV team. Part of the issue is you're coming in fairly loud, which may be causing it. Okay, give that a try, Danny.
D
Okay. So, actually, a bit out of order: I think the roles and entitlements slides are a bit further on, so we can either go... yeah, okay. So, yeah, I've talked about this draft a little more, I guess, informally at previous IETF sessions; just once more, to sort of go over it. It's also been sent out to the mailing list previously.
D
So the draft that's on the screen being shared right now is a draft intended to add new /Roles and /Entitlements endpoints and associated schemas, so that SCIM clients can discover the available values that can be populated into the user resource's roles and entitlements attributes. The goal here is that a SCIM client interacting with a SCIM server that uses roles and/or entitlements can proactively determine what the acceptable values are and what they aren't; that way it can be more efficient and avoid sending requests that will obviously fail. Roles and entitlements are both attributes that are very frequently validated, where they only accept a finite set of values, as opposed to just being an open string that you can put whatever value you'd like into. This draft has just recently been put into a call for adoption.
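A rough sketch of the client-side discovery this draft enables. The ListResponse wrapper is standard SCIM (RFC 7644), but the contents of a /Roles response and the Role resource shape here are assumptions for illustration, not the draft's final schema.

```python
# Sketch: a SCIM client discovering assignable role values from the
# proposed /Roles endpoint before patching a user. The ListResponse
# envelope is standard SCIM; the Role resource shape is an assumption
# based on the discussion above, not the draft's normative definition.
roles_list_response = {
    "schemas": ["urn:ietf:params:scim:api:messages:2.0:ListResponse"],
    "totalResults": 2,
    "Resources": [
        {"value": "admin", "display": "Administrator"},
        {"value": "auditor", "display": "Read-only auditor"},
    ],
}

def assignable_role_values(list_response):
    """Collect the role values the server will accept for user.roles."""
    return {r["value"] for r in list_response["Resources"]}

allowed = assignable_role_values(roles_list_response)

# Client-side pre-validation: drop values that would obviously fail,
# instead of sending a request the server is certain to reject.
requested = ["admin", "banana"]
rejected = [r for r in requested if r not in allowed]
```

This is exactly the "avoid sending requests that will obviously fail" efficiency argument: the client filters locally against the discovered set.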
D
This slide was just a sort of reminder and a call for assistance: if you have implemented SCIM and you use roles and/or entitlements, or you think you might in the future, please review this draft and provide feedback on, you know, I guess, the substance of it, whether you think it's implementable, useful, etc. And then, finally, there are a few other opportunities that have come from feedback on the mailing list.
D
And feedback from other meetings that I've had, that I figured I'd cover here. I've gotten feedback that there may be some interest in expanding these roles and entitlements resources to have a members attribute, similar to a group, to essentially create almost like a two-mode system of managing roles.
D
So if it's one user and you're trying to add, you know, 20 new roles to them, you can patch user.roles; whereas if there's one new role, for instance, and you want to go add a large number of users as members of it, you could add them directly into the roles.members attribute, obfuscating away the fact that you'd have to go target the correct role inside of that. And I've seen similar thoughts recently around some other things as well, like users.groups as a suggestion: you know, people who have implemented SCIM clients or SCIM servers wanting the ability, if you're going to go add a user to 50 new groups, right now you have to make 50 new PATCH requests or, you know, do a bulk request.
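The two update modes being discussed might look roughly like the SCIM PATCH bodies below. The first is standard today; the second assumes the speculative group-like members attribute on a Role resource that is only being floated here, so treat its path as hypothetical.

```python
# Mode 1: add many roles to one user (PATCH /Users/{id}).
# This is standard SCIM PATCH usage today.
patch_user_roles = {
    "schemas": ["urn:ietf:params:scim:api:messages:2.0:PatchOp"],
    "Operations": [
        {"op": "add", "path": "roles",
         "value": [{"value": "admin"}, {"value": "auditor"}]},
    ],
}

# Mode 2 (speculative): add many users to one role (PATCH /Roles/{id})
# via a group-like "members" attribute, as floated in the discussion.
# The "members" path is an assumption, not an existing spec attribute.
patch_role_members = {
    "schemas": ["urn:ietf:params:scim:api:messages:2.0:PatchOp"],
    "Operations": [
        {"op": "add", "path": "members",
         "value": [{"value": "user-1"}, {"value": "user-2"}]},
    ],
}

def op_count(patch):
    """Number of operations a PATCH body carries."""
    return len(patch["Operations"])
```

Either direction lands the same assignments; the second mode just moves the fan-out from many per-user requests to one per-role request.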
D
So it's an interesting, I guess, dynamic, not only for roles and entitlements, but potentially something for us to consider for a future version of the SCIM standard as well.
D
If we were to expand the capabilities inside of the core schema specification, I think there's probably also a need, whether in this draft or in a revision of the core schema RFC, to provide a little bit clearer guidance on usage of the sub-attributes inside of the roles and entitlements attributes, with specific focus on "type", although I think "primary" could use it a little bit as well. Right now, I don't think it's very clear to implementers what the type sub-attribute is meant to represent, and there are some implementations I've seen where what they've landed on makes sense, but it would be nice to actually have explicit guidance. Right now, most of the complex attributes in the SCIM core schema RFC (I don't want to say all of them) have the same set of sub-attributes, you know: type, primary, value, display. But across emails, addresses... actually, no, addresses doesn't fall into that.
D
So, you know, roles, entitlements, all the others: I think there are potentially different uses for each of them. And then, I guess, one final thing that also came up: sort of representing, with those roles and entitlements resources, things that are either prerequisites, so, you know, in order to have role A you must have already assigned this user role B, or assign it at the same time; and then sort of the opposite of that: this role cannot be granted.
D
So I'll probably include some or all of those other opportunities in the next version of the draft, which I guess may come after the call for adoption finishes. But yeah, if anybody has any questions or feedback, I'd love to hear them, either here, on the mailing list, or privately.
B
Yeah, I mean, just as a reminder: we did put out a call for adoption, a call for interest, for this draft. So the idea is for you to provide comments on the current draft and to assess readiness for adoption, and I think I put the end for the call for adoption towards the end of the month, trying to give us an extra week, given that we're here.
D
Yeah, that's all I've got for roles and entitlements. I forget the order of things, but, oh...
B
Next one is the reference value and location.
D
All right, so if you can just scroll up one... cool, yeah. So this draft is a bit rougher still. I previously faced a challenge, and this is actually very similar in intent to the roles and entitlements draft, where attributes that are present on a resource may only accept values from a predefined list. Just as an example, the Enterprise User attribute costCenter: well, you know, businesses generally predefine their cost centers, and I shouldn't be able to go type in, you know, "banana" or "purple" and put that as the value for cost center. It's not going to be helpful for whatever downstream system is consuming that data. So this draft adds a few new attributes to the schema definitions.
D
That's the urn:ietf:params:scim:schemas:core:2.0:Schema, so the properties that define any given attribute. Those properties essentially allow you to communicate and say: this attribute only accepts a limited set of values, versus just, you know, accepting anything, and those values are searchable based on resource XYZ's attribute A or B or C or whatever. So, you know, the user resource's manager.value sub-attribute only accepts values of the user resource's id.
I
Hi
Dean
sacks
from
Amazon
Danny
I
noticed
there
was
no
way
to
filter
like
in
your
manager.
Example
like
on
a
parameter,
is
a
manager
or
is
in
a
certain
role.
I
Is
that
something
you
see
you
want
to
add
to
the
spec
at
some
point,
because
I
see
a
need
in
a
large
system
where
I
may
not
want
every
user
to
come
back
as
a
potential
manager
in
this
referential
value?
I
want
to
get
a
subset
of
those
users,
so
is
that
something
you're
looking
at
or
is
that
something
we
would
add
somewhere
else.
D
So I think that would probably be something that you could add somewhere else, perhaps as a new attribute, either in, you know, the core, or, I guess, the Enterprise schema extension, or just a new extension in itself. It could be a Boolean like isManager. I think this draft is aiming to be a little more agnostic; manager is just the super easy example, based on what exists in the spec today. Off the top of my head, I don't know how we would incorporate that into just these few attributes or schema properties that are there, but I think it's a very valid problem to go solve, probably just with a Boolean or something.
I
Yeah, Danny, I guess if you took a step back and, instead of looking at an isManager attribute, looked at just a general filtering capability, so that when you are looking for those referential values you are able to filter to a certain subset of all values, as opposed to, in the case of managers, looking at all the users: instead of returning all of them, return users who meet a certain set of characteristics.
I
And then, so, the question, I guess, is: where does it get pulled in? Does it get pulled in here? Does it get pulled in somewhere else? Happy to work with Danny offline on this.
D
Yeah. And then, just to sort of finish this topic off: actually, out of a conversation that I was having last week with Dean and a few other people, there's sort of a broader question, and I'll pose this out to the mailing list as well, but are there other new schema properties that are needed?
D
We have the ability to say whether something is single-valued or multi-valued, but for multi-valued attributes we do not have a way to express how many values they can accept. You can accept, you know, two different values for emails, for instance; it's a multi-valued attribute, but maybe you don't want to accept infinite values for emails. And even beyond cardinality, I think there are probably other things that are worth looking into trying to represent, but I think we then also are going to get into the problem of, like, over-complicating things, perhaps. So it's a fine line to walk, or whatever the expression is.
D
The way that this draft is written, it only allows for describing that referential value if it is represented on the same SCIM endpoint, either on the same resource type or another resource type. I hadn't actually considered the problem of representing the data somewhere else, or I sort of had at least noted it's a possibility.
D
But I didn't attempt to solve that, or really cover it at all, inside of this draft, because it was more complicated than I had the energy or brain power to attempt to solve when I was writing this.
D
So, yeah, I guess the sort of lingering question, which I will again, you know, post to the working group, would be: is there a need for a more descriptive set of properties in the schema's attribute-properties definition, and maybe should this draft directly, or, you know, a new draft, start up to try to solve that broader problem, if there's actually agreement that it is a problem? And yeah, that's all I have on this topic.
D
Yep, we are. I do not see Matt Peterson on the call, unfortunately.
D
Yes, I will attempt to represent this one myself. So if you could... sorry.
D
Okay, so, status update on cursor-based pagination: we have a version -01 draft, which is published. The version -00 was published back in, I think, 2017, and just recently, within the past six to twelve months or so, it has become a little more popular; it's sort of become the talk of the town, if you just look at how much time we've spent on it in the last two SCIM meetings. So version -01 added some syntax fixes in the examples, where certain things that should be there were missing, like, what is it, the totalResults counter, that sort of thing; just very baseline, sort of boring fixes. And then a few new changes were made based on feedback on the mailing list.
D
A previousCursor attribute, or object parameter, was added, with which, essentially, you can now bi-directionally go back and forth, if the implementation, you know, decides to implement previousCursor rather than only moving forward. And then we've made some changes to the ServiceProviderConfig entries, to better describe: does this SCIM server support index-based pagination, cursor-based pagination, or both, or, I guess, potentially neither, although that one seems really messy.
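A toy sketch of the cursor flow just described, including the new previousCursor. The parameter names follow the discussion, but the exact wire format (cursor values, response fields) is an assumption here, not the draft's normative definition.

```python
# Toy cursor-paged listing. The client passes an opaque cursor; the
# server returns a page plus nextCursor and, optionally, previousCursor
# for bi-directional paging. Field names follow the discussion above;
# the cursor encoding (a plain offset here) is purely illustrative.
DATA = [f"user-{i}" for i in range(7)]

def list_users(cursor=None, count=3):
    start = int(cursor) if cursor else 0
    page = DATA[start:start + count]
    resp = {"totalResults": len(DATA), "Resources": page}
    if start + count < len(DATA):
        resp["nextCursor"] = str(start + count)      # more pages remain
    if start > 0:
        resp["previousCursor"] = str(max(0, start - count))
    return resp

def fetch_all():
    """Client loop: follow nextCursor until the server stops returning one."""
    out, cursor = [], None
    while True:
        resp = list_users(cursor)
        out.extend(resp["Resources"])
        cursor = resp.get("nextCursor")
        if cursor is None:
            return out
```

Real servers would make the cursor opaque (not a guessable offset) so clients cannot fabricate positions; the loop shape on the client side stays the same.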
D
Matt and I sort of want... so this draft specifically outlines cursor-based pagination, but we also wanted it to have some flavor of just being an expansion of pagination in general. So the new ServiceProviderConfig information also lets you detail, you know, things like: you may only actually support index-based pagination, but even just being able to outline that and make it discoverable we think is helpful for the big picture, and discoverability, and all of that. And starting with version -01, I've joined as a co-author on this, alongside Matt Peterson. We are currently working on a version -02 that I think will come out shortly.
D
We have spent a substantial amount of time debating sort of the necessity of an expansion to pagination: whether or not there is overlap with the currently adopted SCIM Events draft, which profiles Security Event Tokens with SCIM sort of nested as a payload inside. And then I've also thrown in, because it's become more relevant, and it's always been part of this conversation as well, the concept of any sort of change log or delta query mechanic, which there is no draft for, but there is intention to write one at some point in the future. So, concerns have previously been expressed about the coexistence of these drafts; specifically pagination versus SCIM Events is what we'll focus on right now.
D
So this would be the other side of the argument, not the one that I support: that pagination, or, you know, a larger cursor-based pagination model, is unnecessary, because if a large set of results is present, you can actually just feed them all, sort of piecemeal, through an event stream, as outlined in SCIM Events.
D
So, with cursor-based pagination (I should probably not have just called the top one "pagination", that's a little vague): it's very helpful for efficient retrieval of results, either a subset of results, you know, based on a query, or all results. It is needed for clients to be able to efficiently get that initial set of data. So let's say you're a SCIM client and you're getting your data for the first time.
D
How do you do that? It's just not the right model, in Matt's and my opinion, to try to feed that all through a SCIM event hub. It also requires substantially more infrastructure investment: you need inbound connectivity rules, an inbound connectivity flow, a server running, and all that
D
as a SCIM client, which today can operate entirely with outbound connectivity. Whether you're trying to seed that initial set of data, or you're running an application or a service and you've had an issue (who hasn't had a bug or a regression or something like that?) and your data is no longer trustworthy and you need to pull it again, being able to just do a GET /
D
Users and paginate them is much more efficient and reliable, and honestly simpler, than hooking up an event stream and trying to flow it all through.
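The pull model described here can be sketched as a client draining a cursor-paginated collection. The parameter and attribute names (`cursor`, `count`, `nextCursor`) follow the cursor-based pagination draft under discussion, but the exact wire format is illustrative, and `fetch_page` is a stand-in for the real HTTP call to GET /Users.

```python
# Sketch of a SCIM client draining a cursor-paginated collection.
# fetch_page stands in for GET /Users?cursor=...&count=... ; treat the
# exact parameter names and token format as illustrative, not normative.

def fetch_page(resources, cursor=None, count=2):
    """Return one page of a ListResponse-shaped body."""
    start = 0 if cursor is None else int(cursor)
    page = resources[start:start + count]
    next_cursor = str(start + count) if start + count < len(resources) else None
    return {
        "schemas": ["urn:ietf:params:scim:api:messages:2.0:ListResponse"],
        "totalResults": len(resources),
        "Resources": page,
        "nextCursor": next_cursor,  # None once the final page is reached
    }

def drain(resources):
    """Follow nextCursor until the server stops returning one."""
    users, cursor = [], None
    while True:
        body = fetch_page(resources, cursor)
        users.extend(body["Resources"])
        cursor = body.get("nextCursor")
        if cursor is None:
            return users

all_users = [{"userName": f"user{i}"} for i in range(5)]
print(len(drain(all_users)))  # drains all 5 users across 3 pages
```

The client holds no state beyond the opaque cursor, which is the property that makes this cheaper to operate than an inbound event receiver.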
D
That being said, we do actually see the value in SCIM Events; it's probably just in a narrower use case than has been argued in some of our previous meetings.
D
So if a connection is set up for a sort of backflow of events, from what would normally be considered the SCIM server or service provider back down to the party that would normally be the client, that allows the SCIM service provider to reach out to the client and tell them "hey, this thing has happened." It's configurable as to what level of sensitivity you want, but the super useful use case that we see,
D
the useful use case that we see, is around urgent, high-priority changes. Let's say that an account needs to be disabled, in the context of provisioning data from some sort of human resources system, or a human-resources-connected system, into really anywhere else downstream.
D
If HR wants to send a signal that says "hey, this user has been terminated," that's an urgent thing. Sometimes terminations are hostile; look at the layoffs that have happened at so many companies recently. Being able to have the server side notify the client matters: they can either explicitly say the account has been terminated, or they can just say
D
"this object has had an update and you should check it again." Either way it solves that problem, and you can expand that out to things like password changes, which could in turn trigger invalidating existing tokens in a service, or expand it out further to maybe just somebody's display name or whatever.
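The urgent-signal idea can be made concrete with a sketch of a security-event-token-style claims set carrying an "account disabled" notification. The event URI, claim names, and endpoints below are hypothetical illustrations, not copied from the SCIM Events draft; only the overall shape (standard SET claims plus an `events` map) reflects how SETs are generally structured.

```python
import json
import time

# Hypothetical SET claims set for an urgent "account disabled" signal.
# The event URI and payload layout are illustrative assumptions.
def disable_event(user_id, issuer, audience):
    return {
        "iss": issuer,                 # the notifying service provider
        "aud": audience,               # the receiving client
        "iat": int(time.time()),
        "events": {
            "urn:example:scim:event:accountDisabled": {  # illustrative URI
                "ref": f"https://hr.example.com/scim/v2/Users/{user_id}",
                "attributes": ["active"],  # which attribute changed
            }
        },
    }

evt = disable_event("2819c223", "https://hr.example.com", "https://idp.example.com")
print(json.dumps(evt, indent=2))
```

A leaner variant of the same mechanism would omit `attributes` entirely and only say "this resource changed, fetch it again," which is the second option described above.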
D
You can use that mechanism to notify of changes ahead of time. Some of the feedback that I've managed to solicit pretty much says this is not universally implementable: the inbound connectivity requirements are off-putting and a deal-breaker for some implementations. Some clients do not want to have
D
to open up inbound connectivity. There are costs, both from a development standpoint and from an infrastructure and operational standpoint, of running the things that would receive these events and store them and all of that. It's making things more complicated, and that increased complexity is concerning. I'll stop there; I see Darrel has raised his hand in the queue.
G
Hi, Darrel Miller, Microsoft. It's just worth noting: we do pretty much this at scale with the Microsoft Graph API, across all kinds of different resources, and we actually combine events and delta query. One of the challenges with events is that you're sending data over the wire, so (a) you need to be secure about where you're sending that data, and (b) a customer who's receiving it could drop it on the floor, and then they've lost that event. Whereas if you send just events saying "this has changed,"
G
you can then have the client come back with a delta query and say, "oh, you said something has changed; what has changed since I last queried?" So it might be worth looking at those two things more in combination. At least, that's a pattern that we've found works well where people want a very reliable and efficient way of getting notifications, as long as the scale doesn't get too high, because you do have to receive the event and then make the secondary query.
G
D
We should talk more. Yes, we should, and I'm hearing you volunteer to help author a SCIM delta query draft.
B
D
Yeah. A few key points on the "why" of a delta query: it's needed for synchronization use cases, we'll call them, to help improve efficiency. Take the use case of an identity provider (Azure AD, Okta, whichever) connected to a human resources provider, where the HR provider supplies all of the user information that in turn is used to make decisions about whether an account gets created, updated, or deleted,
D
any of those. When you start hitting really large-scale companies, with hundreds of thousands or millions of users in their HR system between contractors and whatnot,
D
you run into issues where, honestly, even SCIM events are going to be like a fire hose, versus purely doing, say, a GET /Users, paginated, if you're getting the raw set of data.
D
If you're doing GET /Users and there's a million users, that clearly doesn't work, so you need the pagination. But even then, if you're trying to paginate a set of a million users and you really only want the ones that have changed, you need the ability to just say "give me the ones that have changed since the last time I asked for this." Between the various concerns about SCIM events and, really, I think problems of scale,
D
SCIM events in place of a delta query isn't something that Matt and I see as being feasible, and I've solicited other feedback that also aligns there. So that covers where our minds are at. As far as solving the delta query problem, I don't know that either Matt or, especially, me are the right people, or at least the only people; I think we need a larger set of people involved to solve this.
D
The current front-runner of ideas to solve it seems to be some sort of watermark-based system: an opaque token from the service provider that means "return all changes since X." That takes favor over just going off of, say, meta.lastModified because, especially in distributed cloud systems, you can run into problems where small amounts of time drift create inconsistent results.
D
Essentially, time in most of these systems is not, what's the word, monotonically increasing. Because of that it's better, instead of using an actual date-time, to use an opaque token. It may ultimately still represent a date-time internally, but it gives the implementer a little more flexibility to provide the token, which in turn can, behind the scenes, lead to whatever the most reliable results are.
D
Hopefully that was helpful. This one doesn't really have a direct request for feedback, although if anybody has feedback or thoughts, I'm happy to hear them. This conversation has also mostly been covered on the mailing list in a long series of emails. And, as I think I already said for pagination, within the next few weeks or a month we may try to put this up for a call for adoption as well.
D
I forgot about that slide, thank you. [laughs] So, just as a high-level overview: I've been trying to help make sure that we're making progress on a lot of the various items that exist in the working group's charter.
D
Some of the previous drafts cover pieces of it; some of them are a little bit of a stretch, keeping in mind the extended schemas, and go from there to solve problems. But there is some upcoming work that I at least intend to focus on and try to generate drafts for, and I'm
D
being public about this here because, if anybody is interested in any of these, please, for the love of your deity of choice, reach out and we can work together. First: a Human Resources schema, essentially a generalized human resources worker or employee representation. Intentionally, at least based on what I know and have learned,
D
this should not just be an extension to the user schema, because a worker or an employee, or whatever your label is, may be different from a user. A human being in an HR system does not mean that one account is created for them: it could be zero accounts, it could be one, it could be fifty. Providing that employee data and then letting the connected systems decide how to react to it is
D
the goal there. And the overarching goal for the human resources schema is to actually standardize how HR providers can represent their data, which will in turn allow any HR provider, whether it's a really big one or a new startup, to avoid having to go design their own API or their own API schema or anything. But then they also get the benefit of
D
Second, there's something we've previously discussed, and who knows who will actually pick up the pen: some sort of security best current practices document. This is essentially in lieu of modifying things directly in the SCIM schema or protocol RFCs right now, because when we crack those open it's a bit of a mess when it comes to whether
D
we increase the version of SCIM to 2.0.1 or 2.1 or whichever. For now we just want to look at writing a BCP to advise on things like dropping username/password authentication, and not using the password attribute in non-legacy, internet-facing, or SaaS scenarios, just to give folks a nudge towards aligning with more modern security practices. And then we could call the last one
D
the reference attribute URL authorization problem, the photo problem. Across internet-connected SCIM service providers and client implementations, thinking of profile pictures for instance, the photos attribute has a really, really low adoption rate, and my take on why is this: the SCIM spec defines reference URLs (not to be confused with the referential values draft from earlier), a reference URL or URI.
D
So essentially you can say this user's profile picture is at "http://" whatever the URL is. Nothing in the SCIM spec talks about authorization for those URLs. So in internet-connected systems,
D
if your cloud IdP is trying to sync to a SaaS app and provide pictures, you can provide the URLs, but there's no standardized way to say "and here's the token, or whichever, that you use to go access these URLs and get these pictures." That is a problem that I would like to solve.
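To make the gap concrete: the photos attribute in the SCIM core schema (RFC 7643) carries a URL and a type, with no standard place for authorization material. The bearer-token header in the sketch below is a purely hypothetical out-of-band convention, exactly the kind of thing the specs do not define today.

```python
# A photos value as the SCIM core schema shapes it: just a URL, no
# standard slot for credentials to fetch it with.
photos = [{"value": "https://photos.example.com/u/2819c223/F",
           "type": "photo"}]

def fetch_photo(url, token=None):
    """Build a hypothetical fetch of the referenced picture. Nothing in
    the SCIM specs says what token to send or how; the Authorization
    header here is an assumption, not a standardized mechanism."""
    headers = {}
    if token:
        headers["Authorization"] = f"Bearer {token}"
    return ("GET", url, headers)

method, url, headers = fetch_photo(photos[0]["value"], token="opaque-token")
```

Any real deployment today has to agree on this convention bilaterally, which is plausibly why adoption of the attribute is so low.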
A
There are a couple of comments in the chat. Darrel, are you here in the room? Do you want to repeat that for the room?
G
Yeah. You mentioned using a watermark token, and it is an effective way of doing it that can provide a high-fidelity delta. The challenge is that you're putting a burden on the server to keep track of all of those delta tokens, and that can get expensive for servers.
G
So it would be worth potentially having multiple mechanisms, where one is just the last-modified type of thing for scenarios where the data is not as volatile, or where it's not as critical if data is dropped; it's a lower bar to entry.
D
Yeah, thanks Darrel, that makes sense. The overarching use case that I'm trying to address, at least to start with, with the delta query topic is the high-fidelity one. Going back to the use case of human-resources-based provisioning, you really want to catch
D
every result. I think you are correct, though, that we need to have multiple options, both on the cost-to-implement side and, I guess, the fidelity side, although if it were the same cost to implement, why not use the higher fidelity? I think most of the human resources providers, to keep on that use case, already have some form of a delta,
D
a watermark-based token system, so it's not a stretch for them at least. But yes, we should also outline doing things like meta.lastModified, which is already sort of there, but not explicitly; or maybe there is an explicit example in one of the current specs. But yeah, I agree.
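The lower-cost alternative Darrel describes can be sketched as polling with a SCIM filter on meta.lastModified. Subtracting an overlap window is one common way to tolerate the clock drift mentioned earlier, at the cost of occasionally re-reading a resource; the helper name and the five-minute window are arbitrary choices for illustration.

```python
from datetime import datetime, timedelta, timezone

def last_modified_filter(since, overlap=timedelta(minutes=5)):
    """Build a SCIM filter for resources modified after `since`,
    widened by an overlap window to absorb server clock drift."""
    watermark = (since - overlap).strftime("%Y-%m-%dT%H:%M:%SZ")
    return f'meta.lastModified gt "{watermark}"'

since = datetime(2022, 11, 7, 13, 0, tzinfo=timezone.utc)
print(last_modified_filter(since))
# prints: meta.lastModified gt "2022-11-07T12:55:00Z"
```

Unlike the opaque-token approach, the server keeps no per-client state here; the trade-off is duplicate deliveries inside the overlap window and no hard guarantee against missed writes on drifting clocks.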
A
C
A
Then, if I can go back to your slide comparing these three: can you make sure that, in the pagination draft, the use cases for why it is relevant are captured in the intro? That's usually a good place to set the context for the draft. What I'm hearing from you, and from others in the room (and I know it's been part of the discussions elsewhere), is that there is some concern about overlapping use cases being solved by multiple drafts.
A
So we want to make sure that the use cases for each one are clear, so that someone reading this knows "if I have this problem, then I'm going to go follow this set of instructions," and hopefully they're not thinking that they have to do everything for every use case, because I think that is the problem people are concerned about. So just clarify what the actual problem being solved is, before we get into the mechanism in the drafts.
D
Yeah, I've heard your feedback.
D
And in the chat, Dean chimed in just to say that essentially he agreed with Darrel: client and server need to maintain state, or use a last-modified timestamp instead. Which, I think, really boils down to: is it sort of a push versus a pull thing?
D
If you're looking to see what's changed... and maybe I'm not classifying it right, but, no, maybe I am. I don't know; I don't want to speak for you, Dean. I don't know if you have any wider or clearer opinions on that. I can't say that it holds in all cases; I think if there are cases where that's a problem, then maybe you use the watermark-based method, but timestamps may be appropriate for many implementations. And that gets back to your point of multiple different mechanisms for doing delta queries, not just settling on a single one. Yep.
C
Hi Danny, this is Anjali from AWS. I have a question regarding your HR schema extension. Can you explain a little bit what the use case is, why we would want to do this? Would it be more of an extension of a role that a user may have in the HR system?
D
That's a very good question. Okay, so the use case there is:
D
more and more identity system designs have started to shift, in, say, the past five years or so, to where human resources is deemed the true source of authority for the state of most of your user data: first name, last name, what city they live in, their office,
D
anything of that sort, including whether they are employed or terminated. The primary location for that is human resources, and it in turn flows down to other things: identity providers, learning management systems, any other connected places. A big challenge when trying to move that HR data into other systems is that right now the human resources providers all have generally disparate and proprietary schemas, and so there's not a SCIM mechanism for it.
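The worker-is-not-a-user point from earlier can be sketched like this. The Worker schema URN and attribute names are hypothetical, since no HR schema draft exists yet; the sketch only shows one HR record fanning out to independent per-system decisions downstream, rather than mapping one-to-one onto a user account.

```python
# Hypothetical HR worker record; the schema URN and attribute names are
# illustrative inventions, not from any published draft.
worker = {
    "schemas": ["urn:example:scim:schemas:hr:2.0:Worker"],
    "workerId": "E1234",
    "name": {"givenName": "Ada", "familyName": "Lovelace"},
    "employmentStatus": "terminated",
}

def plan_accounts(worker, systems):
    """Each connected system decides how to react to the same worker
    record: zero, one, or many downstream accounts may be affected."""
    if worker["employmentStatus"] == "terminated":
        return [(s, "disable") for s in systems]
    return [(s, "ensure-active") for s in systems]

actions = plan_accounts(worker, ["idp", "lms"])
print(actions)  # both downstream accounts get disabled
```

The standardized piece would be the worker representation itself; the reaction logic stays with each consuming system, which is the division of responsibility described above.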
B
Yeah, Danny, I was just going to add to Anjali's question. It was more to get better affinity: as Danny is saying, the HR database is broader than just the role, and there's a need to have that stronger mapping and affinity. So the examples were, for instance, adding pronouns (he, she, they) as well as some other potential fields.
C
B
C
D
I would say that the angle I've been looking to approach this from has been to represent the human resources data in such a way that whatever system is consuming it can connect to multiple human resources systems.
D
Essentially: build once, connect to many, utilizing an open standard. Then that system, whether it's an identity provider or whichever, can make its own decisions about what to do, like "this value in human resources means this user has this role," if we're talking about the role attribute on the user resource in SCIM at the end of the day.
D
But yeah, this is the last slide, truly now; I think this is all I've got. The first time was a false alarm.
B
So I think that was it on the agenda. I could do a call for any other business topics of discussion: going once, going twice. All right, so we may give you, okay, I can't do the math, 41 minutes back.
B
There are a couple of drafts out there, and there's one up for a call for adoption, so I encourage all of you to read them and provide comments so that we can progress the work that we've started.
A
And I would also like to ask for more comments on the options that Danny laid out about the uses for events versus pagination for syncing. If you have any experience with deploying these, that experience is very welcome as comments, so we can make sure that we're doing the right thing.
B
A
A thing I forgot is that we do have time scheduled for a side meeting on Wednesday at 4 pm local time. It's in a small room, so it won't be like this, and we can use that time to chat about any of the existing documents or any of the things that came up today. I would love to use that time to iron out some of the details.
A
I have set up a Zoom for that as well, to join remotely: if you go to the side meeting page linked from the main IETF 115 page, you will find the Zoom link there, for everybody joining remotely. I set it at 4 pm here so that it's not too early in the morning on the west coast of the US. So I hope to see some of you there on Wednesday.